Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/118218
DC Field: Value
dc.title: Context-Based Visual Object Segmentation
dc.contributor.author: XIA WEI
dc.date.accessioned: 2014-12-31T18:00:52Z
dc.date.available: 2014-12-31T18:00:52Z
dc.date.issued: 2014-08-06
dc.identifier.citation: XIA WEI (2014-08-06). Context-Based Visual Object Segmentation. ScholarBank@NUS Repository.
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/118218
dc.description.abstract: In this thesis, we address the problem of object segmentation. It has been shown that both classification and detection can provide useful contextual information to guide the segmentation process. We first propose a detection-based method that formulates segmentation as pursuing the optimal latent mask inside the bounding box via sparse reconstruction (sketched informally after the metadata listing below). We then propose a detection-based approach that requires no additional segment annotation. Finally, beyond global classification and detection, we explore contextual cues from the unlabeled background regions that are usually ignored. The proposed approaches achieve new state-of-the-art performance on several benchmark datasets, including PASCAL VOC, Weizmann Horse, GrabCut-50 and MSRC-21.
dc.language.iso: en
dc.subject: Computer Vision, Recognition, Semantic Segmentation, Detection, Context, Sparse Reconstruction
dc.type: Thesis
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.contributor.supervisor: CHEONG LOONG FAH
dc.contributor.supervisor: YAN SHUICHENG
dc.description.degree: Ph.D.
dc.description.degreeconferred: DOCTOR OF PHILOSOPHY
dc.identifier.isiut: NOT_IN_WOS
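
The abstract mentions pursuing a latent foreground mask inside a detection bounding box via sparse reconstruction. The snippet below is only a rough illustrative sketch of that general idea, not the thesis's actual formulation: the function name, the foreground/background dictionaries, and the patch features are all hypothetical, and the real method treats the mask as a latent variable in a joint optimization rather than labeling patches independently.

# Illustrative sketch only: label patches inside a detection box as figure or
# ground by comparing sparse-reconstruction residuals against two dictionaries.
# Dictionaries and patch features are assumed given; in practice they would be
# learned from annotated foreground and background examples.
import numpy as np
from sklearn.decomposition import sparse_encode

def classify_patches_by_sparse_reconstruction(patch_feats, D_fg, D_bg, n_nonzero=5):
    """patch_feats: (n_patches, d) features of patches inside the bounding box.
    D_fg, D_bg: (n_atoms, d) dictionaries of foreground / background atoms.
    Returns a boolean array: True where the foreground residual is smaller."""
    def residual(X, D):
        # Sparse-code X over dictionary D with OMP, then measure the
        # per-patch reconstruction error.
        codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=n_nonzero)
        recon = codes @ D
        return np.linalg.norm(X - recon, axis=1)
    return residual(patch_feats, D_fg) < residual(patch_feats, D_bg)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 64
    D_fg = rng.normal(size=(32, d))   # toy random dictionaries for demonstration
    D_bg = rng.normal(size=(32, d))
    patches = rng.normal(size=(10, d))
    print(classify_patches_by_sparse_reconstruction(patches, D_fg, D_bg))

Comparing residuals per patch is the simplest possible decision rule; a fuller treatment would couple these decisions through the latent mask and spatial smoothness, as the abstract's description of the detection-based formulation suggests.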
Appears in Collections: Ph.D. Theses (Open)

Files in This Item:
File             Size       Format     Access Settings
document_01.pdf  444.71 kB  Adobe PDF  Open
document_02.pdf  844.74 kB  Adobe PDF  Open
document_03.pdf  1.17 MB    Adobe PDF  Open

