Please use this identifier to cite or link to this item:
https://doi.org/10.1007/s10107-011-0471-1
DC Field | Value | |
---|---|---|
dc.title | A block coordinate gradient descent method for regularized convex separable optimization and covariance selection | |
dc.contributor.author | Yun, S. | |
dc.contributor.author | Tseng, P. | |
dc.contributor.author | Toh, K.-C. | |
dc.date.accessioned | 2014-10-28T02:27:36Z | |
dc.date.available | 2014-10-28T02:27:36Z | |
dc.date.issued | 2011-10 | |
dc.identifier.citation | Yun, S., Tseng, P., Toh, K.-C. (2011-10). A block coordinate gradient descent method for regularized convex separable optimization and covariance selection. Mathematical Programming 129 (2) : 331-355. ScholarBank@NUS Repository. https://doi.org/10.1007/s10107-011-0471-1 | |
dc.identifier.issn | 0025-5610 | |
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/102605 | |
dc.description.abstract | We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem that can be expressed as an ℓ1-penalized maximum likelihood estimation problem. In this paper, we propose a block coordinate gradient descent method (abbreviated as BCGD) for solving this class of nonsmooth separable problems with the coordinate block chosen by a Gauss-Seidel rule. The method is simple, highly parallelizable, and suited for large-scale problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear rate of convergence for this method. For the covariance selection problem, the method can terminate in O(n^3/ε) iterations with an ε-optimal solution. We compare the performance of the BCGD method with the first-order methods proposed by Lu (SIAM J Optim 19:1807-1827, 2009; SIAM J Matrix Anal Appl 31:2000-2016, 2010) for solving the covariance selection problem on randomly generated instances. Our numerical experience suggests that the BCGD method can be efficient for large-scale covariance selection problems with constraints. © Springer and Mathematical Optimization Society 2011. | |
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s10107-011-0471-1 | |
dc.source | Scopus | |
dc.subject | ℓ1-penalization | |
dc.subject | Block coordinate gradient descent | |
dc.subject | Complexity | |
dc.subject | Convex optimization | |
dc.subject | Covariance selection | |
dc.subject | Global convergence | |
dc.subject | Linear rate convergence | |
dc.subject | Maximum likelihood estimation | |
dc.type | Article | |
dc.contributor.department | MATHEMATICS | |
dc.description.doi | 10.1007/s10107-011-0471-1 | |
dc.description.sourcetitle | Mathematical Programming | |
dc.description.volume | 129 | |
dc.description.issue | 2 | |
dc.description.page | 331-355 | |
dc.identifier.isiut | 000295785000008 | |
Appears in Collections: | Staff Publications |
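The abstract describes a block coordinate gradient descent (BCGD) method: blocks of coordinates are updated in Gauss-Seidel order, each by a gradient step on the smooth part followed by the proximal (soft-thresholding) step for the ℓ1 term. The following is a minimal illustrative sketch of that idea on an ℓ1-regularized quadratic; it is not the authors' implementation, and the function names, block rule, and step-size choice (inverse of the block's Lipschitz constant) are assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bcgd_l1(A, b, lam, block_size=2, max_iter=500, tol=1e-8):
    """Illustrative BCGD sketch for min 0.5 x'Ax - b'x + lam*||x||_1,
    with A symmetric positive definite.

    Blocks are cycled in Gauss-Seidel order. Each block J takes a
    gradient step on the smooth part with step size 1/L_J, where L_J
    (largest eigenvalue of the diagonal block A[J,J]) bounds the
    Lipschitz constant of the block gradient, then applies the
    soft-thresholding prox for the separable ℓ1 term.
    """
    n = len(b)
    x = np.zeros(n)
    blocks = [np.arange(i, min(i + block_size, n))
              for i in range(0, n, block_size)]
    for _ in range(max_iter):
        x_old = x.copy()
        for J in blocks:
            grad_J = A[J] @ x - b[J]          # block gradient of smooth part
            L_J = np.linalg.eigvalsh(A[np.ix_(J, J)]).max()
            x[J] = soft_threshold(x[J] - grad_J / L_J, lam / L_J)
        if np.linalg.norm(x - x_old) < tol:   # Gauss-Seidel sweep converged
            break
    return x
```

For A = I the problem decouples and the minimizer is the soft-thresholded b, which the sketch reproduces in one sweep; this serves only as a sanity check, not as evidence about the large-scale covariance selection setting studied in the paper.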