Fast Singular Value Shrinkage with Chebyshev Polynomial Approximation Based on Signal Sparsity
(IEEE Transactions on Signal Processing)

^{1}Tokyo University of Agriculture and Technology, ^{2}Tokyo Institute of Technology, ^{3}Shinshu University, ^{4}JST PRESTO

Abstract

We propose an approximation method for thresholding of singular values using Chebyshev polynomial approximation (CPA). Many signal processing problems require iterative application of singular value decomposition (SVD) to minimize the rank of a given data matrix together with other cost functions and/or constraints; this is called matrix rank minimization. In matrix rank minimization, the singular values of a matrix are shrunk by hard-thresholding, soft-thresholding, or weighted soft-thresholding. However, the computational cost of SVD is generally too expensive to handle high-dimensional signals such as images; hence, matrix rank minimization requires enormous computation time in such cases. In this paper, we leverage CPA to (approximately) manipulate singular values without computing singular values and vectors. The thresholding of singular values is expressed as a multiplication of certain matrices, which is derived from a characteristic of CPA. The multiplication is also computed efficiently by exploiting the sparsity of signals. As a result, the computational cost is significantly reduced. Experimental results on several image processing applications based on matrix rank minimization with nuclear norm relaxation demonstrate the effectiveness of our method in terms of computation time and approximation precision.
Sample code

The sample code is under construction. Please wait a few days.
Summary

In many applications [1]-[5], the low-rank structure is incorporated into a minimization problem involving the rank function or its continuous relaxation. The problem is solved with iterative algorithms, which usually require the thresholding of singular values at each iteration. We refer to this methodology as matrix rank minimization. The exact matrix rank minimization problem is very difficult to solve due to the non-convexity and combinatorial nature of the rank function. Therefore, we focus on the nuclear norm relaxation [6] in this paper. Since the nuclear norm, the sum of the singular values of a matrix, is the tightest convex relaxation of the rank function, the resulting problem can be solved efficiently via convex optimization techniques.
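As a concrete reference point, the singular value shrinkage associated with the nuclear norm (its proximal operator, i.e., soft-thresholding of the singular values) can be sketched with a full SVD as follows. This is the conventional SVD-based step that the proposed method aims to avoid; it is a minimal NumPy sketch, and the function and variable names are our own, not from the paper:

```python
import numpy as np

def singular_value_soft_threshold(X, tau):
    """Proximal operator of tau * (nuclear norm): soft-threshold the
    singular values of X. The full SVD here is the costly step repeated
    at every iteration of nuclear-norm-based rank minimization."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-thresholding of singular values
    return U @ (s_shrunk[:, None] * Vt)

# Example: a rank-2 matrix plus small noise; shrinkage suppresses the
# small noise-driven singular values and restores a low-rank matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
X = L + 0.01 * rng.standard_normal((50, 50))
Y = singular_value_soft_threshold(X, tau=1.0)
print(np.linalg.matrix_rank(Y, tol=1e-6))
```

Because every iteration of the optimization repeats this SVD, its cost dominates for large matrices, which is exactly the bottleneck discussed below.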
Fig. 1
The nuclear norm relaxation requires the thresholding of singular values, which we call singular value shrinkage, at each iteration of the optimization method, as shown in Fig. 1. That is, most methods for matrix rank minimization must carry out singular value decomposition (SVD) many times. This is a serious problem in terms of computational cost when we handle large matrices, even with high-spec computers.

Fig. 2
In this paper, we propose a fast singular value shrinkage method that reduces the computational cost of matrix rank minimization for high-dimensional matrices. Note that our method computes neither singular values nor singular vectors during the shrinkage process. To achieve this, Chebyshev polynomial approximation (CPA) [7] is applied to the singular value shrinkage, which we call CPA-based singular value shrinkage. Hence, we can represent the singular value shrinkage without using SVD, as shown in Fig. 2. In addition, we exploit the sparsity of signals to further accelerate the singular value shrinkage: since the CPA-based singular value shrinkage can be represented as a multiplication of matrices, the computation time is reduced when the target matrices are sparse.

Experimental results

Example results of image inpainting are shown below (please see our paper for the other results):
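To illustrate the underlying idea (this is a sketch of the principle, not the paper's optimized implementation): if the shrinkage function is written as f(s) = s * g(s^2), then U f(S) V^T = X * g(X^T X), so replacing g by a Chebyshev polynomial turns the shrinkage into matrix multiplications with X^T X only, with no SVD. In the hedged NumPy sketch below, the polynomial degree, the spectral-norm bound, and all names are our own choices:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebinterpolate

def cpa_singular_value_shrinkage(X, tau, degree=80):
    """Approximate soft-thresholding of singular values without SVD.

    Writing the shrinkage as f(s) = max(s - tau, 0) = s * g(s**2) with
    g(t) = max(1 - tau / sqrt(t), 0), we have U f(S) V^T = X @ g(X^T X).
    A Chebyshev interpolant of g on [0, b], with b >= ||X||_2^2, then
    reduces the shrinkage to matrix products (a sketch of the idea only).
    """
    # SVD-free bound on the squared spectral norm: ||X||_2^2 <= ||X||_1 * ||X||_inf.
    b = np.linalg.norm(X, 1) * np.linalg.norm(X, np.inf)

    def g_mapped(x):  # g after the affine map [-1, 1] -> [0, b]
        t = (x + 1.0) * 0.5 * b
        s = np.sqrt(np.maximum(t, 1e-12))
        return np.where(s > tau, 1.0 - tau / s, 0.0)

    c = chebinterpolate(g_mapped, degree)  # Chebyshev coefficients on [-1, 1]
    # Evaluate the series at G = X^T X with the three-term recurrence
    # T_k(B) = 2 B T_{k-1}(B) - T_{k-2}(B), where B maps G into [-1, 1].
    G = X.T @ X
    I = np.eye(G.shape[0])
    B = (2.0 * G - b * I) / b
    T_prev, T_curr = I, B
    P = c[0] * I + c[1] * B
    for k in range(2, degree + 1):
        T_prev, T_curr = T_curr, 2.0 * B @ T_curr - T_prev
        P = P + c[k] * T_curr
    return X @ P
```

The products X.T @ X and B @ T_curr are where sparsity of the target matrix can be exploited for a further speed-up; dense arrays are used here only for clarity. Note that the approximation is least accurate near the threshold tau, where g has a kink.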
References

[1] E. J. Candes and B. Recht, "Exact matrix completion via convex optimization," Found. Comput. Math. (FoCM), pp. 712-772, 2009.
[2] S. Ono, T. Miyata, and I. Yamada, "Cartoon-texture image decomposition using blockwise low-rank texture characterization," IEEE Trans. Image Process., vol. 23, no. 3, pp. 1128-1142, 2014.
[3] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma, "Robust photometric stereo via low-rank matrix completion and recovery," in Proc. Asian Conf. Comput. Vis. (ACCV), 2011, pp. 703-717.
[4] X. Liang, X. Ren, Z. Zhang, and Y. Ma, "Repairing sparse low-rank texture," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2012, pp. 482-495.
[5] X. Zhou, C. Yang, H. Zhao, and W. Yu, "Low-rank modeling and its applications in image analysis," ACM Comput. Surv. (CSUR), vol. 47, no. 2, pp. 36:1-36:33, 2015.
[6] E. J. Candes and T. Tao, "The power of convex relaxation: Near-optimal matrix completion," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2053-2080, 2010.
[7] J. C. Mason and D. C. Handscomb, Chebyshev Polynomials, Chapman and Hall/CRC, 2002.