Weighted Low-Rank Approximation of Matrices: Some Analytical and Numerical Aspects

Name(s): 
Dutta, Aritra, Author
Li, Xin, Committee Chair
Sun, Qiyu, Committee Co-Chair
Mohapatra, Ram, Committee Member
Nashed, M, Committee Member
Shah, Mubarak, Committee Member
University of Central Florida, Degree Grantor

Type of Resource:  text  
Date Issued:  2016  
Publisher:  University of Central Florida  
Language(s):  English  
Abstract/Description:  This dissertation addresses some analytical and numerical aspects of a problem of weighted low-rank approximation of matrices. We propose and solve two different versions of weighted low-rank approximation problems and demonstrate, in addition, how these formulations can be used efficiently to solve some classic problems in computer vision. We also present the superior performance of our algorithms over the existing state-of-the-art unweighted and weighted low-rank approximation algorithms.

Classical principal component analysis (PCA) is constrained to have equal weighting on the elements of the matrix, which might lead to a degraded design in some problems. To address this fundamental flaw in PCA, Golub, Hoffman, and Stewart proposed and solved a problem of constrained low-rank approximation of matrices: for a given matrix $A = (A_1\;A_2)$, find a low-rank matrix $X = (A_1\;X_2)$ such that ${\rm rank}(X)$ is at most $r$, a prescribed bound, and $\|A-X\|$ is small. Motivated by the above formulation, we propose a weighted low-rank approximation problem that generalizes the constrained low-rank approximation problem of Golub, Hoffman, and Stewart. We study a general framework obtained by pointwise multiplication with a weight matrix and consider the following problem: for a given matrix $A\in\mathbb{R}^{m\times n}$, solve
\begin{equation*}
\min_{X}\left\|\left(A-X\right)\odot W\right\|_F^2\quad{\rm subject~to~}{\rm rank}(X)\le r,
\end{equation*}
where $\odot$ denotes pointwise multiplication and $\|\cdot\|_F$ is the Frobenius norm of matrices.

In the first part, we study a special version of the above general weighted low-rank approximation problem. Instead of using pointwise multiplication with the weight matrix, we use regular matrix multiplication, replace the rank constraint by its convex surrogate, the nuclear norm, and consider the following problem:
\begin{equation*}
\hat{X} = \arg\min_X \left\{\tfrac{1}{2}\|(A-X)W\|_F^2 +\tau\|X\|_\ast\right\},
\end{equation*}
where $\|\cdot\|_*$ denotes the nuclear norm of $X$. Considering its resemblance to the classic singular value thresholding problem, we call it the weighted singular value thresholding (WSVT) problem. As expected, the WSVT problem has no closed-form analytical solution in general, and a numerical procedure is needed to solve it. We introduce auxiliary variables and apply a simple and fast alternating direction method to solve WSVT numerically. Moreover, we present a convergence analysis of the algorithm and propose a mechanism for estimating the weight from the data. We demonstrate the performance of WSVT on two computer vision applications: background estimation from video sequences and facial shadow removal. In both cases, WSVT shows superior performance to all other models traditionally used.

In the second part, we study the general framework of the proposed problem. For a special case of the weight, we study the limiting behavior of the solution to our problem, both analytically and numerically. In the limiting case of weights, as $(W_1)_{ij}\to\infty$ with $W_2=\mathbbm{1}$, a matrix of all ones, we show that the solutions to our weighted problem converge, and the limit is the solution to the constrained low-rank approximation problem of Golub et al. Additionally, by asymptotic analysis of the solution to our problem, we propose a rate of convergence. By doing this, we make explicit connections between a vast genre of weighted and unweighted low-rank approximation problems. In addition, we devise a novel and efficient numerical algorithm based on the alternating direction method for this special case of the weight and present a detailed convergence analysis. Our approach improves substantially over the existing weighted low-rank approximation algorithms proposed in the literature. Finally, we explore the use of our algorithm on real-world problems in a variety of domains, such as computer vision and machine learning.
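When the weight is the identity, the WSVT problem reduces to the classic singular value thresholding problem, whose solution is known in closed form: soft-threshold the singular values of $A$ at level $\tau$. A minimal NumPy sketch of that unweighted special case (the function name `svt` is illustrative, not from the dissertation):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding:
    argmin_X { 0.5 * ||A - X||_F^2 + tau * ||X||_* }.

    Closed-form solution: soft-threshold the singular values of A at tau.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-thresholding of singular values
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
# tau = 0 keeps A unchanged; a very large tau shrinks everything to zero.
assert np.allclose(svt(A, 0.0), A)
assert np.allclose(svt(A, 1e9), np.zeros_like(A))
```

With a general weight $W$ this closed form is lost, which is why the dissertation introduces auxiliary variables and an alternating direction scheme whose subproblems reduce to operators of this kind.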
Finally, for a special family of weights, we demonstrate an interesting property of the solution to the general weighted lowrank approximation problem. Additionally, we devise two accelerated algorithms by using this property and present their effectiveness compared to the algorithm proposed in Chapter 4.  
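The constrained problem of Golub, Hoffman, and Stewart that arises in the limit has a closed-form solution: keep $A_1$ exactly, keep the component of $A_2$ lying in the column space of $A_1$, and truncate the orthogonal residual to the remaining rank budget. A hedged NumPy sketch of that construction, assuming $A_1$ has full column rank $k\le r$ (the function name is ours):

```python
import numpy as np

def ghs_constrained_lowrank(A1, A2, r):
    """Constrained low-rank approximation (Golub-Hoffman-Stewart):
    best X = (A1, X2) with rank(X) <= r in the Frobenius norm.

    Assumes A1 has full column rank k and k <= r.
    """
    k = A1.shape[1]
    Q, _ = np.linalg.qr(A1)            # orthonormal basis for span(A1)
    P_A2 = Q @ (Q.T @ A2)              # component of A2 inside span(A1): kept in full
    R = A2 - P_A2                      # residual orthogonal to span(A1)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    t = r - k                          # remaining rank budget for the residual
    X2 = P_A2 + (U[:, :t] * s[:t]) @ Vt[:t, :]
    return np.hstack([A1, X2])

rng = np.random.default_rng(0)
A1 = rng.standard_normal((8, 2))
A2 = rng.standard_normal((8, 5))
X = ghs_constrained_lowrank(A1, A2, r=4)
assert np.allclose(X[:, :2], A1)               # A1 block is preserved exactly
assert np.linalg.matrix_rank(X) <= 4           # rank constraint is satisfied
```

In the dissertation's limiting regime, as the weights on the $A_1$ block grow without bound, the solutions of the pointwise-weighted problem converge to the output of exactly this kind of construction.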
Identifier:  CFE0006833 (IID), ucf:51789 (fedora)  
Note(s): 
2016-12-01; Ph.D.; Sciences, Mathematics; Doctoral. This record was generated from author-submitted information.

Subject(s):  Weighted Low-Rank Approximations  Singular Value Decomposition  QR Decomposition  Singular Value Thresholding  Robust Principal Component Analysis  Alternating Direction Method  Background Estimation  Weighted Singular Value Thresholding  
Persistent Link to This Record:  http://purl.flvc.org/ucf/fd/CFE0006833  
Restrictions on Access:  public 2017-06-15  
Host Institution:  UCF 