Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/227615
Title: LEARNING STRUCTURED REPRESENTATIONS OF VISUAL SCENES
Authors: CHIOU MENG-JIUN
ORCID iD: orcid.org/0000-0003-0312-9136
Keywords: visual relationship detection, scene graph generation, human-object interaction detection, scene understanding, computer vision, machine learning
Issue Date: 23-Jan-2022
Citation: CHIOU MENG-JIUN (2022-01-23). LEARNING STRUCTURED REPRESENTATIONS OF VISUAL SCENES. ScholarBank@NUS Repository.
Abstract: As intermediate-level representations bridging low-level visual perception and high-level semantic reasoning, structured representations of visual scenes, such as visual relationships between pairwise objects, have been shown not only to benefit compositional models in learning to reason over the structures but also to provide higher interpretability for model decisions. Nevertheless, these representations receive much less attention than traditional recognition tasks, leaving numerous open challenges unsolved. In this thesis, we study how machines can describe the content of an individual image or video with visual relationships as the structured representations. Specifically, we explore how structured representations of visual scenes can be effectively constructed and learned in both the static-image and video settings, with improvements resulting from external knowledge incorporation, a bias-reducing mechanism, and enhanced representation models. At the end of this thesis, we also discuss open challenges and limitations to shed light on future directions of structured representation learning for visual scenes.
URI: https://scholarbank.nus.edu.sg/handle/10635/227615
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: ChiouMJ.pdf | Size: 8.9 MB | Format: Adobe PDF | Access Settings: Open

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.