Please use this identifier to cite or link to this item: https://doi.org/10.1137/120864192
Title: A proximal point algorithm for log-determinant optimization with group Lasso regularization
Authors: Yang, J.; Sun, D.; Toh, K.-C.
Keywords: Alternating direction method; Augmented Lagrangian; Covariance selection; Gaussian graphical model; Group Lasso regularization; Log-determinant optimization; Newton's method; Proximal point algorithm
Issue Date: 2013
Citation: Yang, J., Sun, D., Toh, K.-C. (2013). A proximal point algorithm for log-determinant optimization with group Lasso regularization. SIAM Journal on Optimization 23 (2) : 857-893. ScholarBank@NUS Repository. https://doi.org/10.1137/120864192
Abstract: We consider the covariance selection problem where variables are clustered into groups and the inverse covariance matrix is expected to have a blockwise sparse structure. This problem is realized via penalizing the maximum likelihood estimation of the inverse covariance matrix by group Lasso regularization. We propose to solve the resulting log-determinant optimization problem with the classical proximal point algorithm (PPA). At each iteration, as it is difficult to update the primal variables directly, we first solve the dual subproblem by an inexact semismooth Newton-CG method and then update the primal variables by explicit formulas based on the computed dual variables. We also propose to accelerate the PPA by an inexact generalized Newton's method when the iterate is close to the solution. Theoretically, we prove that at the optimal solution, the nonsingularity of the generalized Hessian matrices of the dual subproblem is equivalent to the constraint nondegeneracy condition for the primal problem. Global and local convergence results are also presented for the proposed PPA. Moreover, based on the augmented Lagrangian function of the dual problem we derive an alternating direction method (ADM), which is easily implementable and is demonstrated to be efficient for random problems. Numerical results, including comparisons with the ADM on both synthetic and real data, are presented to demonstrate that the proposed Newton-CG based PPA is stable and efficient and, in particular, outperforms the ADM when high accuracy is required. © 2013 Society for Industrial and Applied Mathematics.
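The penalized maximum likelihood formulation described in the abstract can be sketched as follows. This is a minimal illustration of the objective only, not the paper's PPA or ADM solver; the function name, group encoding (index pairs selecting blocks of the matrix), and penalty weight `rho` are assumptions for the example.

```python
import numpy as np

def logdet_group_lasso_obj(X, S, rho, groups):
    """Objective of group-Lasso-penalized covariance selection (sketch):
    f(X) = -log det(X) + <S, X> + rho * sum_g ||X_g||_F,
    where X is the inverse covariance estimate, S is the sample
    covariance, and each group (rows, cols) indexes a block of X."""
    sign, logdet = np.linalg.slogdet(X)
    if sign <= 0:
        return np.inf  # infeasible: X must be positive definite
    penalty = sum(np.linalg.norm(X[np.ix_(rows, cols)], "fro")
                  for rows, cols in groups)
    return -logdet + np.trace(S @ X) + rho * penalty

# Illustrative evaluation at X = I with S = I and two off-diagonal blocks:
X = np.eye(2)
S = np.eye(2)
val = logdet_group_lasso_obj(X, S, 0.5, [([0], [1]), ([1], [0])])
# log det(I) = 0, <I, I> = 2, zero off-diagonal blocks => penalty 0
```

Driving the Frobenius norms of off-diagonal blocks to zero via the group penalty is what produces the blockwise sparse inverse covariance structure the abstract targets.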
Source Title: SIAM Journal on Optimization
URI: http://scholarbank.nus.edu.sg/handle/10635/102737
ISSN: 1052-6234
DOI: 10.1137/120864192
Appears in Collections: Staff Publications


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.