Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/181961
DC Field: Value
dc.title: RESTRICTEDLY SCALED QUASI-NEWTON METHODS
dc.contributor.author: YUELIN ZENG
dc.date.accessioned: 2020-10-29T06:32:22Z
dc.date.available: 2020-10-29T06:32:22Z
dc.date.issued: 1997
dc.identifier.citation: YUELIN ZENG (1997). RESTRICTEDLY SCALED QUASI-NEWTON METHODS. ScholarBank@NUS Repository.
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/181961
dc.description.abstract: The self-scaling quasi-Newton (SSQN) method, proposed by Oren and Luenberger [42, 45], is one of the most attractive modified quasi-Newton (QN) methods because of its simplicity in algorithm design and its excellent performance on some categories of optimization problems. Shanno and Phua [63], however, indicate that for a large number of optimization problems SSQN methods actually degrade the performance of the original QN methods. Furthermore, a recent investigation of SSQN methods by Nocedal and Yuan [40] shows that locally superlinear convergence of these methods is very difficult to ensure in practice.

This thesis conducts a new investigation of the SSQN methods and presents a special class of modified quasi-Newton methods, the Restrictedly Scaled Quasi-Newton (RSQN) methods. A numerical study of the SSQN methods, combined with a theoretical analysis of the eigenvalues of the scaled update matrices, is given first. Building on it, we propose a class of restrictedly scaled BFGS methods and prove that the RSQN methods are globally and superlinearly convergent. Numerical results are also presented to demonstrate that the RSQN methods are much more efficient than both the SSQN and the original BFGS methods. Our research thus shows that it is possible to develop RSQN methods that are both superlinearly convergent and very efficient in practice, while retaining the same kind of simplicity as the SSQN methods.

The restricted scaling of the BFGS update is then generalized to the QN methods within the convex class of the Broyden family, excluding the DFP update. We show that for a broad class of properly selected restricted scaling parameters, the convex-RSQN methods enjoy the same global and locally superlinear convergence. Two different methodologies are used to develop the convergence theory of the RSQN methods. For the scaled BFGS method, we use a new methodology developed by Byrd and Nocedal [1] that considers the trace and determinant of the update matrices simultaneously, while for the restrictedly scaled convex class, a classic method [12, 56] is used to estimate bounds on the trace and determinant of the scaled updates.

We also develop a parallelization framework for solving unconstrained optimization problems. The framework performs a multi-dimensional parallel search at each iteration and can be viewed as a generalization of serial line search algorithms. As an application, we investigate a class of multi-directional search algorithms proposed by Phua [49] and present numerical results illustrating that the new parallelization framework can be very efficient. (Illustrative sketches of the scaled BFGS update discussed above appear below, after the collection listing.)
dc.source: CCK BATCHLOAD 20201023
dc.type: Thesis
dc.contributor.department: INFORMATION SYSTEMS & COMPUTER SCIENCE
dc.contributor.supervisor: PHUA KANG HOH
dc.description.degree: Master's
dc.description.degreeconferred: MASTER OF SCIENCE
Appears in Collections: Master's Theses (Restricted)
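
As background for the abstract above: the self-scaling update it discusses is usually written in standard quasi-Newton notation (s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k, H_k the inverse-Hessian approximation). The record itself contains no formulas, so what follows is the standard Oren-Luenberger scaled BFGS form, not a restatement of the thesis's own development:

    H_{k+1} = \Bigl(I - \frac{s_k y_k^{\mathsf T}}{s_k^{\mathsf T} y_k}\Bigr)(\gamma_k H_k)\Bigl(I - \frac{y_k s_k^{\mathsf T}}{s_k^{\mathsf T} y_k}\Bigr) + \frac{s_k s_k^{\mathsf T}}{s_k^{\mathsf T} y_k},
    \qquad
    \gamma_k = \frac{s_k^{\mathsf T} y_k}{y_k^{\mathsf T} H_k y_k}.

Choosing \gamma_k = 1 recovers the unscaled BFGS update; a restricted scaling confines \gamma_k to a safeguard interval rather than applying it unconditionally.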
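
A minimal Python sketch of one such restricted step, assuming a simple clamping rule gamma_k in [gamma_lo, gamma_hi]; both the interval and the clamping itself are illustrative assumptions, not the restriction rule actually developed in the thesis:

    import numpy as np

    def restricted_scaled_bfgs_update(H, s, y, gamma_lo=0.5, gamma_hi=2.0):
        """One scaled BFGS update of the inverse-Hessian approximation H.

        gamma is the Oren-Luenberger self-scaling factor, clamped to the
        hypothetical safeguard interval [gamma_lo, gamma_hi]. Requires the
        curvature condition s @ y > 0 (guaranteed by a Wolfe line search).
        """
        rho = 1.0 / (s @ y)
        gamma = np.clip((s @ y) / (y @ H @ y), gamma_lo, gamma_hi)
        V = np.eye(len(s)) - rho * np.outer(s, y)
        # H_new = V (gamma * H) V^T + rho * s s^T; gamma == 1 gives plain BFGS.
        return V @ (gamma * H) @ V.T + rho * np.outer(s, s)

With gamma_lo = gamma_hi = 1 this reduces to the ordinary BFGS update, one way to see that the restriction interpolates between the unscaled and fully self-scaled methods.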

Files in This Item:
File: B20840123.PDF
Size: 2.11 MB
Format: Adobe PDF
Access Settings: RESTRICTED
Version: None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.