Please use this identifier to cite or link to this item:
https://doi.org/10.1007/s10957-008-9458-3
DC Field | Value
---|---
dc.title | Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
dc.contributor.author | Tseng, P.
dc.contributor.author | Yun, S.
dc.date.accessioned | 2014-12-02T08:39:02Z
dc.date.available | 2014-12-02T08:39:02Z
dc.date.issued | 2009-03
dc.identifier.citation | Tseng, P., Yun, S. (2009-03). Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization. Journal of Optimization Theory and Applications 140 (3) : 513-535. ScholarBank@NUS Repository. https://doi.org/10.1007/s10957-008-9458-3
dc.identifier.issn | 00223239
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/114629
dc.description.abstract | We consider the problem of minimizing the weighted sum of a smooth function f and a convex function P of n real variables subject to m linear equality constraints. We propose a block-coordinate gradient descent method for solving this problem, with the coordinate block chosen by a Gauss-Southwell-q rule based on sufficient predicted descent. We establish global convergence to first-order stationarity for this method and, under a local error bound assumption, linear rate of convergence. If f is convex with Lipschitz continuous gradient, then the method terminates in O(n²/ε) iterations with an ε-optimal solution. If P is separable, then the Gauss-Southwell-q rule is implementable in O(n) operations when m=1 and in O(n²) operations when m>1. In the special case of support vector machines training, for which f is convex quadratic, P is separable, and m=1, this complexity bound is comparable to the best known bound for decomposition methods. If f is convex, then, by gradually reducing the weight on P to zero, the method can be adapted to solve the bilevel problem of minimizing P over the set of minima of f+δ_X, where X denotes the closure of the feasible set. This has application in the least 1-norm solution of maximum-likelihood estimation. © 2008 Springer Science+Business Media, LLC.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/s10957-008-9458-3
dc.source | Scopus
dc.subject | ℓ1-regularization
dc.subject | Bilevel optimization
dc.subject | Complexity bound
dc.subject | Coordinate gradient descent
dc.subject | Global convergence
dc.subject | Linear constraints
dc.subject | Linear convergence rate
dc.subject | Nonsmooth optimization
dc.subject | Support vector machines
dc.type | Article
dc.contributor.department | SINGAPORE-MIT ALLIANCE
dc.description.doi | 10.1007/s10957-008-9458-3
dc.description.sourcetitle | Journal of Optimization Theory and Applications
dc.description.volume | 140
dc.description.issue | 3
dc.description.page | 513-535
dc.identifier.isiut | 000263420100009
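The abstract describes a coordinate gradient descent method with coordinates chosen by a Gauss-Southwell-q rule based on predicted descent. The sketch below is not the paper's algorithm: it is a minimal single-coordinate illustration for the unconstrained special case (m=0) with P = λ‖·‖₁, using a simple quadratic-model predicted-descent rule in the spirit of Gauss-Southwell-q. The function names (`cgd_l1`, `soft_threshold`) and the step size 1/L (L a Lipschitz constant for ∇f) are assumptions for this sketch, not notation from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cgd_l1(grad_f, x0, lam, L, n_iter=500):
    """Sketch: coordinate gradient descent for min_x f(x) + lam*||x||_1.

    Each iteration forms a per-coordinate prox-gradient step with step
    size 1/L, then updates the single coordinate whose quadratic model
    predicts the largest descent (a Gauss-Southwell-q-style rule).
    """
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        g = grad_f(x)
        # Candidate prox step for every coordinate simultaneously.
        d = soft_threshold(x - g / L, lam / L) - x
        # Predicted descent of the quadratic model for each coordinate:
        # g_j d_j + (L/2) d_j^2 + lam * (|x_j + d_j| - |x_j|)  (<= 0).
        pred = g * d + 0.5 * L * d**2 + lam * (np.abs(x + d) - np.abs(x))
        j = int(np.argmin(pred))
        if pred[j] > -1e-12:  # no coordinate predicts descent: stop
            break
        x[j] += d[j]
    return x
```

For a separable quadratic such as f(x) = ½‖x − c‖², where ∇f(x) = x − c and L = 1, each chosen coordinate is solved exactly in one update, and the minimizer is the soft-thresholded c.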
Appears in Collections: | Staff Publications |
Files in This Item:
There are no files associated with this item.