Please use this identifier to cite or link to this item: https://doi.org/10.1007/s10107-011-0471-1
DC Field: Value
dc.title: A block coordinate gradient descent method for regularized convex separable optimization and covariance selection
dc.contributor.author: Yun, S.
dc.contributor.author: Tseng, P.
dc.contributor.author: Toh, K.-C.
dc.date.accessioned: 2014-10-28T02:27:36Z
dc.date.available: 2014-10-28T02:27:36Z
dc.date.issued: 2011-10
dc.identifier.citation: Yun, S., Tseng, P., Toh, K.-C. (2011-10). A block coordinate gradient descent method for regularized convex separable optimization and covariance selection. Mathematical Programming 129 (2) : 331-355. ScholarBank@NUS Repository. https://doi.org/10.1007/s10107-011-0471-1
dc.identifier.issn: 0025-5610
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/102605
dc.description.abstract: We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem that can be expressed as an ℓ1-penalized maximum likelihood estimation problem. In this paper, we propose a block coordinate gradient descent method (abbreviated as BCGD) for solving this class of nonsmooth separable problems with the coordinate block chosen by a Gauss-Seidel rule. The method is simple, highly parallelizable, and suited for large-scale problems. We establish global convergence and, under a local Lipschitzian error bound assumption, a linear rate of convergence for this method. For the covariance selection problem, the method can terminate in O(n³/ε) iterations with an ε-optimal solution. We compare the performance of the BCGD method with the first-order methods proposed by Lu (SIAM J Optim 19:1807-1827, 2009; SIAM J Matrix Anal Appl 31:2000-2016, 2010) for solving the covariance selection problem on randomly generated instances. Our numerical experience suggests that the BCGD method can be efficient for large-scale covariance selection problems with constraints. © Springer and Mathematical Optimization Society 2011.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s10107-011-0471-1
dc.source: Scopus
dc.subject: ℓ1-penalization
dc.subject: Block coordinate gradient descent
dc.subject: Complexity
dc.subject: Convex optimization
dc.subject: Covariance selection
dc.subject: Global convergence
dc.subject: Linear rate convergence
dc.subject: Maximum likelihood estimation
dc.type: Article
dc.contributor.department: MATHEMATICS
dc.description.doi: 10.1007/s10107-011-0471-1
dc.description.sourcetitle: Mathematical Programming
dc.description.volume: 129
dc.description.issue: 2
dc.description.page: 331-355
dc.identifier.isiut: 000295785000008
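The abstract above describes a block coordinate gradient descent method that combines gradient steps on the smooth part of the objective with an exact update for the separable ℓ1 term, sweeping through coordinate blocks in a Gauss-Seidel (cyclic) order. A minimal single-coordinate sketch of this idea on an ℓ1-regularized least-squares surrogate is shown below; it is illustrative only, not the paper's matrix-variable BCGD for covariance selection, and the objective, `lam`, and `n_iter` are assumptions for the example.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*|x|: shrink z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def coord_gradient_descent(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by cyclic (Gauss-Seidel)
    coordinate gradient steps, each followed by a soft-thresholding
    update that handles the nonsmooth ell-1 term exactly."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)  # per-coordinate Lipschitz constants
    for _ in range(n_iter):
        for j in range(n):  # one Gauss-Seidel sweep over coordinates
            r = A @ x - b              # current residual
            g = A[:, j] @ r            # partial gradient w.r.t. x[j]
            if col_sq[j] > 0:
                z = x[j] - g / col_sq[j]          # gradient step
                x[j] = soft_threshold(z, lam / col_sq[j])  # prox step
    return x
```

With `A` the identity, each coordinate update reduces to soft-thresholding the corresponding entry of `b`, which makes the shrinkage effect of the ℓ1 penalty easy to see.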
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.