Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/18441
Title: Video inpainting for non-repetitive motion
Authors: GUO JIAYAN
Keywords: Video inpainting, foreground/background separation, non-repetitive motion, priority-based scheme, orientation code histograms, orientation code matching
Issue Date: 30-Jun-2010
Source: GUO JIAYAN (2010-06-30). Video inpainting for non-repetitive motion. ScholarBank@NUS Repository.
Abstract: In this thesis, we present an approach for inpainting missing or damaged parts of a video sequence. Compared with existing methods for video inpainting, our approach handles non-repetitive motion effectively, removing the periodicity assumption made by many state-of-the-art video inpainting algorithms. This assumption requires that the objects in the missing region (the hole) also appear elsewhere in the frame or in other frames of the video, so that inpainting can be done by searching the entire sequence for a good match and copying suitable information from other frames into the hole. In other words, the objects must move in a repetitive fashion so that there is sufficient information available to fill the hole. However, repetitive motion may be absent or imperceptible. Our approach uses orientation code matching to solve this problem. It consists of a preprocessing stage followed by two video inpainting steps. In the preprocessing stage, each frame is segmented into moving foreground and static background using a combination of optical flow and mean-shift color segmentation. This segmentation is then used to build three image mosaics: a background mosaic, a foreground mosaic, and an optical flow mosaic. These mosaics help maintain temporal consistency and improve the performance of the algorithm by reducing the search space. In the first inpainting step, a priority-based scheme selects the patch with the highest priority to be inpainted; orientation code matching is then used to find the best matching patch in other frames and to estimate the approximate rotation angle between the two patches. The best matching patch is rotated and copied to fill in the moving foreground objects occluded by the region to be inpainted.
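The thesis gives the full formulation of orientation codes; as a rough illustration of the general technique only (the function names, the 16-bin quantization, and the gradient threshold below are illustrative assumptions, not the author's exact parameters), orientation code matching quantizes each pixel's gradient direction into a small set of codes and compares patches by the cyclic distance between code maps:

```python
import numpy as np

def orientation_codes(gray, n_codes=16, grad_thresh=10.0):
    """Quantize the gradient orientation at each pixel into n_codes bins.
    Pixels whose gradient magnitude falls below grad_thresh receive the
    special code n_codes, meaning 'no reliable orientation'."""
    gy, gx = np.gradient(gray.astype(float))     # row then column derivative
    mag = np.hypot(gx, gy)
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    codes = np.floor(theta / (2 * np.pi / n_codes)).astype(int)
    codes[mag < grad_thresh] = n_codes           # low-contrast pixels
    return codes

def ocm_dissimilarity(codes_a, codes_b, n_codes=16):
    """Mean cyclic distance between two code maps of equal shape.
    Pairs involving the special code contribute a fixed penalty."""
    d = np.abs(codes_a - codes_b).astype(float)
    d = np.minimum(d, n_codes - d)               # wrap-around distance
    special = (codes_a == n_codes) | (codes_b == n_codes)
    d = np.where(special, n_codes / 4.0, d)
    return d.mean()
```

Because the codes depend only on gradient direction, this measure is robust to brightness changes, and the rotation between two matched patches can be estimated by the shift that best aligns their orientation code histograms.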
In the second step, the background is filled in by temporal copying and priority-based texture synthesis. Experimental results show that our approach is fast and easy to implement. Since it does not require any statistical model of the foreground or background, it works well even when the background is complex. In addition, it can effectively handle non-repetitive motion in damaged video sequences, a case that prior state-of-the-art algorithms cannot deal with. Our approach is therefore of practical value.
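The temporal copying idea in the second step can be sketched as follows. This is a minimal illustration under assumed interfaces (the function name and the grayscale frame-stack representation are hypothetical, and the thesis copies from its background mosaic rather than searching frames directly): each hole pixel is filled from the temporally nearest frame in which that pixel is visible background.

```python
import numpy as np

def fill_background_temporally(frames, holes, bg_masks):
    """Fill hole pixels by copying from the nearest frame in time where
    the same pixel is visible, unoccluded background.
    frames: (T, H, W) grayscale stack; holes, bg_masks: (T, H, W) bool."""
    out = frames.copy()
    T = frames.shape[0]
    for t in range(T):
        ys, xs = np.nonzero(holes[t])
        for y, x in zip(ys, xs):
            # search outward in time: t-1, t+1, t-2, t+2, ...
            for dt in range(1, T):
                for s in (t - dt, t + dt):
                    if 0 <= s < T and bg_masks[s, y, x] and not holes[s, y, x]:
                        out[t, y, x] = frames[s, y, x]
                        break
                else:
                    continue                      # no donor at this distance
                break
    return out
```

Pixels never visible as background in any frame remain unfilled here; those are the residual regions the priority-based texture synthesis step would then complete.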
Appears in Collections:Master's Theses (Open)
