PhD Thesis Proposal Defence


Title: "Optimization with a Low-Rank Regularization"

by

Mr. Quanming YAO


Abstract:

Low-rank modeling is important in machine learning and has been widely used in 
many other areas. Examples include recommender systems and social network 
analysis in data mining, and image and video processing in computer vision.

As direct rank minimization is NP-hard, the nuclear norm, which is the tightest 
convex envelope of the rank function, is often used instead. We first show that 
Soft-Impute, a state-of-the-art matrix completion algorithm, can be 
accelerated. Its efficiency lies in exploiting the special "sparse plus 
low-rank" structure of the matrix iterates, which allows an efficient SVD in 
each iteration. Though Soft-Impute is a proximal algorithm, it is generally 
believed that acceleration destroys this special structure and is thus not 
useful. We show that Soft-Impute can indeed be accelerated without compromising 
the structure. To further reduce the per-iteration time complexity, we propose 
an approximate singular value thresholding scheme based on the power method. 
Theoretical analysis shows that the proposed algorithm still enjoys the fast 
convergence rate of accelerated proximal algorithms.
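To give a flavor of this step, the snippet below is a rough sketch (not the 
thesis implementation) of approximate singular value thresholding, i.e., the 
proximal step of the nuclear norm, applied to an iterate kept in "sparse plus 
low-rank" form Z = S + U V^T, using a few power iterations instead of a full 
SVD. All function and variable names (multiply, approx_svt, etc.) are 
illustrative assumptions, not taken from the thesis.

    # Minimal sketch: approximate SVT on an iterate stored as Z = S + U V^T,
    # with S sparse (dense here for simplicity) and U, V thin factors.
    import numpy as np

    def multiply(S, U, V, X):
        # Z @ X without forming Z = S + U V^T explicitly.
        return S @ X + U @ (V.T @ X)

    def multiply_t(S, U, V, X):
        # Z.T @ X without forming Z explicitly.
        return S.T @ X + V @ (U.T @ X)

    def approx_svt(S, U, V, lam, k, n_iter=5, seed=0):
        # Approximate prox of lam * ||.||_* at Z, returned as thin factors
        # (A, B) so that the result is A @ B.T; k is the sketch rank.
        rng = np.random.default_rng(seed)
        Y = multiply(S, U, V, rng.standard_normal((S.shape[1], k)))
        for _ in range(n_iter):                    # power iterations
            Q, _ = np.linalg.qr(Y)
            Y = multiply(S, U, V, multiply_t(S, U, V, Q))
        Q, _ = np.linalg.qr(Y)                     # approx. top left subspace of Z
        B = multiply_t(S, U, V, Q).T               # k x n, equals Q.T @ Z
        Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
        s = np.maximum(sb - lam, 0.0)              # soft-threshold singular values
        keep = s > 0
        return (Q @ Ub[:, keep]) * s[keep], Vbt[keep].T

The point of the factored representation is that each multiplication with Z 
costs roughly O(nnz(S) k + (m+n) r k) rather than O(m n k), which is what keeps 
the per-iteration cost low even after acceleration.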

While the matrix rank is often approximated by the convex nuclear norm, the use 
of nonconvex low-rank regularizers has demonstrated better empirical 
performance. However, the resulting optimization problem is much more 
challenging. Recent state-of-the-art methods require an expensive full SVD in 
each iteration. We then show that for many commonly-used nonconvex low-rank 
regularizers, a cutoff can be derived to automatically threshold the singular 
values obtained from the proximal operator. This allows the proximal operator 
to be efficiently approximated by the power method. Based on this, we develop a 
proximal algorithm (and its accelerated variant) with inexact proximal 
splitting, and prove its convergence to critical points.
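As a rough illustration of the cutoff idea (again a sketch under assumptions, 
not the thesis algorithm), the snippet below applies the proximal operator of 
one common nonconvex penalty, the capped-L1 penalty r(sigma) = lam * 
min(sigma, theta), to the singular values. Every singular value at or below a 
cutoff gamma is mapped to zero, so in practice only the leading singular values 
need to be computed (e.g., by the power method); this sketch uses a full SVD 
for brevity. The names prox_capped_l1, cutoff and nonconvex_svt are 
hypothetical.

    # Sketch of the cutoff idea for a nonconvex low-rank regularizer, using the
    # capped-L1 penalty r(sigma) = lam * min(sigma, theta) as an example.
    import numpy as np

    def prox_capped_l1(z, lam, theta):
        # Scalar prox of lam * min(x, theta) at z >= 0, by comparing the two cases.
        xa = np.clip(z - lam, 0.0, theta)          # candidate with x <= theta
        xb = np.maximum(z, theta)                  # candidate with x >= theta
        fa = 0.5 * (xa - z) ** 2 + lam * xa
        fb = 0.5 * (xb - z) ** 2 + lam * theta
        return np.where(fa <= fb, xa, xb)

    def cutoff(lam, theta, tol=1e-10):
        # Largest z whose prox is zero, found by bisection; singular values
        # at or below this value cannot survive the proximal step.
        lo, hi = 0.0, lam + theta
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if prox_capped_l1(mid, lam, theta) > 0:
                hi = mid
            else:
                lo = mid
        return lo

    def nonconvex_svt(Z, lam, theta):
        # Proximal step: only singular values above the cutoff can be nonzero,
        # so a partial SVD would suffice; a full SVD is used here for brevity.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        keep = s > cutoff(lam, theta)
        return (U[:, keep] * prox_capped_l1(s[keep], lam, theta)) @ Vt[keep]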

Finally, extensive experiments on both synthetic and real datasets demonstrate 
the efficiency of the developed algorithms.


Date:                   Tuesday, 21 November 2017

Time:                   1:00pm - 3:00pm

Venue:                  Room 3494 (lifts 25/26)

Committee Members:      Prof. James Kwok (Supervisor)
                        Dr. Yangqiu Song (Chairperson)
                        Prof. Dit-Yan Yeung
                        Prof. Nevin Zhang


**** ALL are Welcome ****