Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/13633
DC Field: Value
dc.title: Automatic extraction and tracking of face sequences in MPEG video
dc.contributor.author: ZHAO YUNLONG
dc.date.accessioned: 2010-04-08T10:34:57Z
dc.date.available: 2010-04-08T10:34:57Z
dc.date.issued: 2004-02-25
dc.identifier.citation: ZHAO YUNLONG (2004-02-25). Automatic extraction and tracking of face sequences in MPEG video. ScholarBank@NUS Repository.
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/13633
dc.description.abstract: This PhD work focuses on the problem of extracting multiple face sequences from MPEG video based on face detection and tracking. It aims to facilitate strata-based digital video modelling for efficient video retrieval and browsing. The research includes the design of efficient algorithms for face detection and tracking, and of tools for compressed-domain video processing. It differs from existing efforts in its focus on developing efficient techniques that exploit features in the DCT domain and the characteristics of image and video compression standards such as JPEG, MPEG, and H.26x. In summary, the research work consists of: 1. A system to extract multiple face sequences from MPEG video. 2. A DCT-domain approach to face detection in MPEG video. 3. A DCT-domain approach to fractional scaling and inverse motion compensation on images and video.
dc.language.iso: en
dc.subject: face detection, face tracking, DCT-domain processing, video retrieval, MPEG, fast algorithm
dc.type: Thesis
dc.contributor.department: COMPUTER SCIENCE
dc.contributor.supervisor: CHUA TAT SENG
dc.description.degree: Ph.D
dc.description.degreeconferred: DOCTOR OF PHILOSOPHY
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: Zhao_Yunlong_PhD_thesis.pdf  Size: 4.9 MB  Format: Adobe PDF  Access: OPEN


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.