Please use this identifier to cite or link to this item: https://doi.org/10.1109/TIP.2005.849330
Title: Spatiotemporal video segmentation based on graphical models
Authors: Wang, Y.
Loe, K.-F. 
Tan, T.
Wu, J.-K.
Keywords: Bayesian network
Graphical model
Markov random field (MRF)
Motion segmentation
Region merging
Spatiotemporal segmentation
Issue Date: 2005
Citation: Wang, Y., Loe, K.-F., Tan, T., Wu, J.-K. (2005). Spatiotemporal video segmentation based on graphical models. IEEE Transactions on Image Processing 14(7): 937-947. ScholarBank@NUS Repository. https://doi.org/10.1109/TIP.2005.849330
Abstract: This paper proposes a probabilistic framework for the spatiotemporal segmentation of video sequences. Motion information, boundary information from intensity segmentation, and the spatial connectivity of the segmentation are unified in the video segmentation process by means of graphical models. A Bayesian network is presented to model interactions among the motion vector field, the intensity segmentation field, and the video segmentation field. The notion of the Markov random field is used to encourage the formation of continuous regions. Given consecutive frames, the conditional joint probability density of the three fields is maximized iteratively. To effectively utilize boundary information from the intensity segmentation, a distance transformation is employed in the local objective functions. Experimental results show that the method is robust and generates spatiotemporally coherent segmentation results. Moreover, the proposed video segmentation approach can be viewed as a compromise between previous motion-based approaches and region-merging approaches. © 2005 IEEE.
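Note: The abstract describes an iterative maximization of a joint objective over three coupled fields, with an MRF prior encouraging continuous regions and a distance transform of the intensity-segmentation boundaries entering the local objective functions. The sketch below is a minimal, hypothetical illustration of one such update step, not the authors' implementation: it collapses the three-field Bayesian-network model into a single ICM-style label update, and the names (icm_sweep, motion_cost, lam_b, lam_s) are illustrative assumptions.

import numpy as np
from scipy.ndimage import distance_transform_edt

def icm_sweep(labels, motion_cost, edge_map, lam_b=0.5, lam_s=1.0):
    """One iterative-conditional-modes pass lowering a local energy per pixel.

    labels      : (H, W) int array, current video-segmentation labels
    motion_cost : (H, W, K) array, cost of assigning each of K motion
                  models at each pixel (e.g., motion-compensation residual;
                  illustrative)
    edge_map    : (H, W) bool array, boundaries of the intensity segmentation
    """
    # Distance to the nearest intensity-segmentation boundary: changing a
    # label far from any intensity edge is penalized, so video-segment
    # boundaries tend to snap onto intensity boundaries.
    dist = distance_transform_edt(~edge_map)
    H, W, K = motion_cost.shape
    out = labels.copy()
    for y in range(H):
        for x in range(W):
            # 4-connected neighborhood for the MRF (Potts) smoothness term.
            neigh = [labels[ny, nx]
                     for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                     if 0 <= ny < H and 0 <= nx < W]
            energies = np.empty(K)
            for k in range(K):
                e = motion_cost[y, x, k]                 # motion-likelihood term
                e += lam_s * sum(n != k for n in neigh)  # MRF smoothness term
                if k != labels[y, x]:
                    e += lam_b * dist[y, x]              # boundary (distance) term
                energies[k] = e
            out[y, x] = int(np.argmin(energies))
    return out

Repeating such sweeps until the label field stops changing approximates the iterative maximization described in the abstract; the paper itself alternates updates over the motion, intensity-segmentation, and video-segmentation fields rather than updating a single label field.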
Source Title: IEEE Transactions on Image Processing
URI: http://scholarbank.nus.edu.sg/handle/10635/39307
ISSN: 1057-7149
DOI: 10.1109/TIP.2005.849330
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
