Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/148566
Title: STOCHASTIC AND RANDOMIZED ALGORITHMS FOR LARGE-SCALE OPTIMIZATION IN MACHINE LEARNING
Authors: ZHAO RENBO
ORCID iD: orcid.org/0000-0002-8226-9243
Keywords: large-scale optimization; stochastic optimization; L-BFGS; primal-dual algorithms; saddle-point problems; first-order methods
Issue Date: 28-Jun-2018
Citation: ZHAO RENBO (2018-06-28). STOCHASTIC AND RANDOMIZED ALGORITHMS FOR LARGE-SCALE OPTIMIZATION IN MACHINE LEARNING. ScholarBank@NUS Repository.
Abstract: This thesis advances the state-of-the-art for two classes of optimization problems in machine learning. In the first part, we revisit the stochastic limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm for strongly convex smooth optimization. By proposing a new coordinate transformation framework, we show that the convergence rates and computational complexities of stochastic L-BFGS algorithms can be significantly improved over previous works. In addition, we propose several practical acceleration strategies to speed up the empirical performance of such algorithms. In the second part, we develop accelerated primal-dual first-order algorithms for a class of stochastic three-composite convex optimization problems. These methods attain the optimal convergence rate, significantly improving on the rates in previous works. In addition, we establish the connection between these algorithms and accelerated stochastic alternating direction method of multipliers (ADMM) algorithms.
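The stochastic L-BFGS setting described in the first part can be illustrated with a minimal sketch. This is not the thesis's algorithm (which adds a coordinate transformation framework and acceleration strategies not shown here); it is a generic online L-BFGS variant for a strongly convex regularized least-squares objective, where curvature pairs are formed from gradient differences on the same minibatch to control gradient noise. All function and variable names below are illustrative.

```python
import numpy as np

def minibatch_grad(w, X, y, lam, idx):
    # Unbiased stochastic gradient of f(w) = (1/2n)||Xw - y||^2 + (lam/2)||w||^2
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx) + lam * w

def two_loop(g, mem):
    # Classical L-BFGS two-loop recursion: returns an approximation of H @ g,
    # where H approximates the inverse Hessian from stored (s, y) pairs.
    q = g.copy()
    alphas = []
    for s, yv, rho in reversed(mem):          # newest pair first
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * yv
    if mem:
        s, yv, _ = mem[-1]
        q *= (s @ yv) / (yv @ yv)             # initial Hessian scaling
    for (s, yv, rho), a in zip(mem, reversed(alphas)):  # oldest pair first
        b = rho * (yv @ q)
        q += (a - b) * s
    return q

def stochastic_lbfgs(X, y, lam=0.1, lr=0.05, batch=32, mem_size=10,
                     iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    mem = []                                  # limited memory of (s, y, rho)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        g = minibatch_grad(w, X, y, lam, idx)
        w_new = w - lr * two_loop(g, mem)
        # Curvature pair from the SAME minibatch, so y reflects curvature
        # rather than gradient noise across batches.
        g_new = minibatch_grad(w_new, X, y, lam, idx)
        s, yv = w_new - w, g_new - g
        if s @ yv > 1e-10:                    # keep only curvature-positive pairs
            mem.append((s, yv, 1.0 / (s @ yv)))
            mem = mem[-mem_size:]             # drop oldest beyond memory size
        w = w_new
    return w
```

The strong convexity supplied by the `lam` term guarantees a well-conditioned problem; in that regime the iterates converge to a neighborhood of the regularized solution whose size scales with the constant step size and the minibatch gradient variance.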
URI: http://scholarbank.nus.edu.sg/handle/10635/148566
Appears in Collections:Master's Theses (Open)

Files in This Item:
thesis.pdf (1.37 MB, Adobe PDF) — Open access, View/Download

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.