Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/136271
DC Field: Value
dc.title: TOWARDS SCALABLE GRADIENT-BASED HYPERPARAMETER OPTIMIZATION IN DEEP NEURAL NETWORKS
dc.contributor.author: FU JIE
dc.date.accessioned: 2017-07-31T18:00:46Z
dc.date.available: 2017-07-31T18:00:46Z
dc.date.issued: 2016-08-19
dc.identifier.citation: FU JIE (2016-08-19). TOWARDS SCALABLE GRADIENT-BASED HYPERPARAMETER OPTIMIZATION IN DEEP NEURAL NETWORKS. ScholarBank@NUS Repository.
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/136271
dc.description.abstract: It is well known that the performance of large deep neural networks (DNNs) is sensitive to the setting of their hyperparameters. Hyperparameter optimization is thus recognized as a crucial step in applying DNNs to achieve the best performance and drive industrial applications. The works described in this thesis represent the first forays into scalable gradient-based methods that optimize elementary parameters and hyperparameters of DNNs in a unified manner.
dc.language.iso: en
dc.subject: hyperparameter optimization, deep neural networks
dc.type: Thesis
dc.contributor.department: NUS GRAD SCH FOR INTEGRATIVE SCI & ENGG
dc.contributor.supervisor: CHUA TAT SENG
dc.description.degree: Ph.D
dc.description.degreeconferred: DOCTOR OF PHILOSOPHY
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: FuJ.pdf | Size: 1.52 MB | Format: Adobe PDF | Access Settings: OPEN | Version: None


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.