ScholarBank@NUS (https://scholarbank.nus.edu.sg)
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Mon, 28 Nov 2022 16:17:40 GMT

https://scholarbank.nus.edu.sg/handle/10635/114661
Title: A coordinate gradient descent method for l1-regularized convex minimization
Authors: Yun, S.; Toh, K.-C.
Abstract: In applications such as signal processing and statistics, many problems involve finding sparse solutions to under-determined linear systems of equations. These problems can be formulated as structured nonsmooth optimization problems, i.e., the problem of minimizing l1-regularized linear least squares problems. In this paper, we propose a block coordinate gradient descent method (abbreviated as CGD) to solve the more general l1-regularized convex minimization problem, i.e., the problem of minimizing an l1-regularized convex smooth function. We establish a Q-linear convergence rate for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We propose efficient implementations of the CGD method and report numerical results for solving large-scale l1-regularized linear least squares problems arising in compressed sensing and image deconvolution, as well as large-scale l1-regularized logistic regression problems for feature selection in data classification. Comparison with several state-of-the-art algorithms specifically designed for solving large-scale l1-regularized linear least squares or logistic regression problems suggests that an efficiently implemented CGD method may outperform these algorithms, despite the fact that the CGD method is not specifically designed just to solve these special classes of problems. © Springer Science+Business Media, LLC 2009.
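The l1-regularized least-squares instance described in the abstract can be sketched as follows. This is a minimal single-coordinate variant with a Gauss-Southwell-type rule (pick the coordinate with the largest proposed proximal step), not the authors' block implementation; all function and variable names are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cgd_l1_least_squares(A, b, lam, max_iter=1000, tol=1e-10):
    """Sketch: minimize 0.5*||A x - b||^2 + lam*||x||_1 by coordinate
    gradient descent with a Gauss-Southwell-type coordinate choice."""
    m, n = A.shape
    x = np.zeros(n)
    L = (A ** 2).sum(axis=0) + 1e-12   # coordinate-wise Lipschitz constants
    r = A @ x - b                      # residual
    for _ in range(max_iter):
        g = A.T @ r                    # gradient of the smooth part
        # proposed proximal (soft-threshold) step for every coordinate
        step = soft_threshold(x - g / L, lam / L) - x
        j = int(np.argmax(np.abs(step)))   # Gauss-Southwell-type choice
        if abs(step[j]) < tol:
            break                      # approximately stationary
        x[j] += step[j]
        r += A[:, j] * step[j]         # cheap rank-one residual update
    return x
```

With A equal to the identity the method reduces to soft-thresholding of b, which gives a quick sanity check of the update.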
Tue, 01 Mar 2011 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/114661
https://scholarbank.nus.edu.sg/handle/10635/114629
Title: Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
Authors: Tseng, P.; Yun, S.
Abstract: We consider the problem of minimizing the weighted sum of a smooth function f and a convex function P of n real variables subject to m linear equality constraints. We propose a block-coordinate gradient descent method for solving this problem, with the coordinate block chosen by a Gauss-Southwell-q rule based on sufficient predicted descent. We establish global convergence to first-order stationarity for this method and, under a local error bound assumption, a linear rate of convergence. If f is convex with Lipschitz continuous gradient, then the method terminates in O(n^2/ε) iterations with an ε-optimal solution. If P is separable, then the Gauss-Southwell-q rule is implementable in O(n) operations when m=1 and in O(n^2) operations when m>1. In the special case of support vector machines training, for which f is convex quadratic, P is separable, and m=1, this complexity bound is comparable to the best known bound for decomposition methods. If f is convex, then, by gradually reducing the weight on P to zero, the method can be adapted to solve the bilevel problem of minimizing P over the set of minima of f + δ_X, where δ_X denotes the indicator function of X, the closure of the feasible set. This has application in the least 1-norm solution of maximum-likelihood estimation. © 2008 Springer Science+Business Media, LLC.
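To illustrate why the m=1 case admits cheap coordinate steps, here is a smooth special case (P = 0, f convex quadratic, one equality constraint sum(x) = s): moving along a two-coordinate direction e_j - e_i keeps the constraint satisfied. This is only a sketch; the pair-selection rule below is a simple most-violating-pair heuristic in the spirit of, but not identical to, the paper's Gauss-Southwell-q rule, and all names are illustrative.

```python
import numpy as np

def bcgd_equality(Q, c, s=0.0, max_iter=1000, tol=1e-10):
    """Sketch: minimize 0.5*x^T Q x - c^T x  s.t.  sum(x) = s
    (smooth special case with m = 1, P = 0) by exact line search
    along pairwise directions e_j - e_i, which preserve sum(x)."""
    n = len(c)
    x = np.full(n, s / n)                 # feasible starting point
    for _ in range(max_iter):
        g = Q @ x - c                     # gradient
        i, j = int(np.argmax(g)), int(np.argmin(g))
        if g[i] - g[j] < tol:
            break                         # stationary on the affine set
        # exact minimizing step along d = e_j - e_i
        curv = Q[i, i] + Q[j, j] - 2.0 * Q[i, j]
        t = (g[i] - g[j]) / max(curv, 1e-12)
        x[j] += t
        x[i] -= t
    return x
```

For Q = I the problem is a Euclidean projection onto the hyperplane, so the iterates should converge to c minus its mean, which makes the sketch easy to check by hand.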
Sun, 01 Mar 2009 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/114629