Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/177225
DC Field | Value
---|---
dc.title | VIDEO OBJECT SEGMENTATION FOR OBJECT-ORIENTED VIDEO CODING
dc.contributor.author | SHAO LIN
dc.date.accessioned | 2020-10-08T07:11:16Z
dc.date.available | 2020-10-08T07:11:16Z
dc.date.issued | 1999
dc.identifier.citation | SHAO LIN (1999). VIDEO OBJECT SEGMENTATION FOR OBJECT-ORIENTED VIDEO CODING. ScholarBank@NUS Repository.
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/177225
dc.description.abstract | The problem addressed is automatic video object segmentation for object-based video coding. Two approaches have been developed to deal with static and moving cameras. In the case of a static camera, we first make use of change detection to obtain a rough location of the moving objects. However, moving regions with little intensity variation cannot be detected by this means. In our method, the change detection mask is first modified into a solid motion kernel via binary morphological filters. Seeds are then selected and region growing is performed from them. After the whole frame has been segmented, the regions that originate from the kernel are grouped together so that they cover the moving objects. The change detection mask is thereby substantially modified into a more accurate object mask. In addition, an object tracking method based on a similar idea is proposed to simplify segmentation by taking advantage of the results for previous frames. In the case of a moving camera, a two-stage motion segmentation method is developed based on optical flow and dominant motion estimation. In the initial stage, knowledge about the existing motion classes is extracted and a rough segmentation is obtained. Both pieces of information play important roles in reaching the final segmentation in the refining stage, where border pixels on the rough segmentation map are reassigned to the candidate class with the least motion prediction error. Finally, a scheme is introduced to integrate the above two methods, giving rise to a universal method that can be used regardless of camera motion.
dc.source | CCK BATCHLOAD 20201023
dc.type | Thesis
dc.contributor.department | ELECTRICAL ENGINEERING
dc.contributor.supervisor | LIN WEI-SI
dc.contributor.supervisor | KO CHI CHUNG
dc.description.degree | Master's
dc.description.degreeconferred | MASTER OF ENGINEERING
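
The abstract above outlines two segmentation pipelines. The following Python sketch (assuming OpenCV and NumPy; it is not the thesis code) illustrates the static-camera idea: a change detection mask is cleaned into a solid motion kernel with binary morphological filters, the frame is partitioned into regions, and every region that overlaps the kernel is grouped into the object mask. The region-partitioning step here is a simple adaptive-threshold/connected-components stand-in for the seeded region growing described in the abstract, and all thresholds and kernel sizes are hypothetical.

```python
import numpy as np
import cv2

def static_camera_object_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                              diff_thresh: int = 25) -> np.ndarray:
    """Rough object mask for a static camera (illustrative sketch only)."""
    # 1. Change detection: absolute frame difference, thresholded to a binary mask.
    diff = cv2.absdiff(curr_frame, prev_frame)
    _, change_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # 2. Binary morphological filters turn the noisy change mask into a solid
    #    motion kernel (close small gaps, then remove isolated specks).
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    motion_kernel = cv2.morphologyEx(change_mask, cv2.MORPH_CLOSE, se)
    motion_kernel = cv2.morphologyEx(motion_kernel, cv2.MORPH_OPEN, se)

    # 3. Partition the whole frame into regions (stand-in for seeded region
    #    growing): adaptive threshold followed by connected components.
    binary = cv2.adaptiveThreshold(curr_frame, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 11, 2)
    n_labels, labels = cv2.connectedComponents(binary)

    # 4. Group the regions that overlap the motion kernel, so parts of a moving
    #    object with little intensity change are still recovered.
    mask = np.zeros_like(curr_frame)
    for label in range(1, n_labels):
        region = labels == label
        if motion_kernel[region].any():
            mask[region] = 255
    return mask
```

A second sketch, under the same caveats, illustrates the refining stage of the moving-camera method: border pixels of a rough segmentation map are reassigned to whichever candidate motion class gives the smallest motion-compensated prediction error. The per-class affine motion parameters (`affine_params`, a hypothetical name) are assumed to have been estimated elsewhere, e.g. from optical flow and dominant motion estimation.

```python
import numpy as np

def refine_borders(labels: np.ndarray, prev: np.ndarray, curr: np.ndarray,
                   affine_params: dict, border: list) -> np.ndarray:
    """Reassign border pixels to the motion class with least prediction error."""
    h, w = curr.shape
    refined = labels.copy()
    for y, x in border:                      # border: list of (row, col) pixels
        best_class, best_err = refined[y, x], np.inf
        for cls, (a1, a2, a3, a4, a5, a6) in affine_params.items():
            # Motion-compensated position of (x, y) in the previous frame
            # under this class's (assumed) 6-parameter affine motion model.
            xp = int(round(a1 * x + a2 * y + a3))
            yp = int(round(a4 * x + a5 * y + a6))
            if 0 <= xp < w and 0 <= yp < h:
                err = abs(float(curr[y, x]) - float(prev[yp, xp]))
                if err < best_err:
                    best_class, best_err = cls, err
        refined[y, x] = best_class
    return refined
```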
Appears in Collections: Master's Theses (Restricted)
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
b22109869.pdf | | 6.4 MB | Adobe PDF | RESTRICTED | None