Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/204915
DC Field / Value
dc.titleTOWARDS CONCISE REPRESENTATION LEARNING ON DEEP NEURAL NETWORKS
dc.contributor.authorYUAN LI
dc.date.accessioned2021-10-31T18:01:28Z
dc.date.available2021-10-31T18:01:28Z
dc.date.issued2021-06-22
dc.identifier.citationYUAN LI (2021-06-22). TOWARDS CONCISE REPRESENTATION LEARNING ON DEEP NEURAL NETWORKS. ScholarBank@NUS Repository.
dc.identifier.urihttps://scholarbank.nus.edu.sg/handle/10635/204915
dc.description.abstractThe success of deep neural networks is largely attributed to their strong ability to learn representations from raw data. The data representations of deep neural networks contain crucial information for understanding the data. For example, image representations from Convolutional Neural Networks (CNNs) express both low-level and high-level features of images, and video representations from deep neural networks convey both spatial and temporal information. In this thesis, we focus on learning 'concise representations' on deep neural networks for various high-level vision applications, including image classification, image hashing, and video summarization/hashing. A 'concise representation' in a deep neural network is a condensed or compressed representation that can be learned with high efficiency. We consider three types: 1) lightweight representations in model size, such as representations from small neural networks; 2) condensed representations in length or width, such as binary representations at the bit level and summarized representations at the batch level; 3) data-efficient representations at the data-size level. However, the representations learned by current deep neural networks are far from concise, and learning them is generally computationally expensive. Learning better concise representations can yield more efficient training and inference, smaller model sizes, and smaller data requirements across various vision applications. To this end, we systematically propose a series of techniques for learning concise representations on deep neural networks: 1) compressed representations from lightweight models via representation distillation (Chapter 2); 2) highly condensed representations, such as summarized and binary representations for images and videos (Chapter 3); 3) data-efficient learnable representations for Transformer architectures (Chapter 4).
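The representation distillation mentioned in the abstract (Chapter 2) trains a lightweight student model to mimic the feature representations of a larger teacher. The sketch below illustrates one common form of this idea, a mean-squared-error loss between L2-normalized feature vectors; it is a minimal illustration of the general technique, not the thesis's actual method, and the function name and normalization choice are assumptions for this example.

```python
import numpy as np

def representation_distillation_loss(student_feats, teacher_feats):
    """MSE between L2-normalized feature vectors.

    Both inputs are (batch_size, feature_dim) arrays. Normalizing first
    makes the loss compare feature directions rather than magnitudes,
    a common choice when student and teacher dimensions/scales differ.
    """
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=1, keepdims=True)
    return float(np.mean((s - t) ** 2))

# Toy example: the "student" features are a noisy copy of the teacher's,
# so the distillation loss is small but nonzero.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))                   # features from a large teacher
student = teacher + 0.1 * rng.normal(size=(4, 8))   # features from a lightweight student
print(representation_distillation_loss(student, teacher))
```

Minimizing this loss with respect to the student's parameters pushes the small model's representations toward the teacher's, which is how a compressed model can retain much of the larger model's representational quality.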
dc.language.isoen
dc.subjectconcise representation learning; deep neural networks
dc.typeThesis
dc.contributor.departmentMECHANICAL ENGINEERING
dc.contributor.supervisorTay Eng Hock
dc.contributor.supervisorFeng Jiashi
dc.description.degreePh.D
dc.description.degreeconferredDOCTOR OF PHILOSOPHY (FOE)
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: amended_thesis.pdf (20.16 MB, Adobe PDF) — Access: OPEN

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.