Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/231562
Title: UNDERSTANDING AND IMPROVING NEURAL ARCHITECTURE SEARCH
Authors: SHU YAO
Keywords: Neural Architecture Search, Neural Ensemble Search, Training-based, Training-free, Neural Tangent Kernel, Interpretability
Issue Date: 6-Jan-2022
Citation: SHU YAO (2022-01-06). UNDERSTANDING AND IMPROVING NEURAL ARCHITECTURE SEARCH. ScholarBank@NUS Repository.
Abstract: Despite recent advances in Neural Architecture Search (NAS), several essential aspects of NAS remain under-investigated in the literature. Firstly, few efforts have been devoted to understanding the neural architectures selected by popular NAS algorithms; our first work takes an initial step toward investigating this question. Secondly, standard NAS algorithms typically aim to select only a single architecture, so our second work presents the Neural Ensemble Search via Bayesian Sampling (NESBS) framework, which can select better-performing neural ensembles. Thirdly, the search efficiency of NAS algorithms is usually limited by the need for model training; to this end, we propose the NAS at Initialization (NASI) algorithm. Finally, why training-free NAS based on training-free metrics performs well remains poorly understood, and this question is studied in our last work.
URI: https://scholarbank.nus.edu.sg/handle/10635/231562
Appears in Collections: Ph.D Theses (Open)
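To make the training-free idea mentioned in the abstract concrete, below is a minimal sketch of an empirical Neural Tangent Kernel (NTK) computed at initialization and turned into an architecture score. This is an illustration only, not the thesis's actual NASI metric: the toy one-hidden-layer network, the probe-input setup, and the condition-number score are all assumptions made for this example.

```python
import numpy as np

def init_params(d, h, rng):
    # Hypothetical toy architecture: one ReLU hidden layer of width h, scalar output.
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), size=(h, d))
    w2 = rng.normal(0.0, 1.0 / np.sqrt(h), size=h)
    return W1, w2

def per_example_grad(W1, w2, x):
    # Gradient of f(x) = w2 . relu(W1 x) w.r.t. all parameters, flattened.
    z = W1 @ x
    grad_W1 = np.outer(w2 * (z > 0), x)   # d f / d W1
    grad_w2 = np.maximum(z, 0.0)          # d f / d w2
    return np.concatenate([grad_W1.ravel(), grad_w2])

def empirical_ntk(W1, w2, X):
    # K[i, j] = <grad f(x_i), grad f(x_j)> at the current (initial) parameters.
    J = np.stack([per_example_grad(W1, w2, x) for x in X])
    return J @ J.T

def training_free_score(W1, w2, X):
    # One common training-free proxy: the NTK condition number at initialization
    # (a smaller value is often taken to suggest a more trainable architecture).
    eig = np.linalg.eigvalsh(empirical_ntk(W1, w2, X))
    return eig[-1] / max(eig[0], 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))               # 5 probe inputs, 4 features each
W1, w2 = init_params(d=4, h=16, rng=rng)
score = training_free_score(W1, w2, X)    # no training step is ever taken
```

In a training-free search loop, a score like this would be computed once per candidate architecture at initialization, replacing the costly train-and-evaluate cycle that limits standard NAS.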

Files in This Item:
ShuY.pdf (8.31 MB, Adobe PDF) — Access: Open

Page view(s): 21 (checked on Dec 1, 2022)
Download(s): 1 (checked on Dec 1, 2022)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.