Please use this identifier to cite or link to this item:
Title: Video co-segmentation for meaningful action extraction
Authors: Guo, J.
Li, Z.
Cheong, L.-F. 
Zhou, S.Z. 
Issue Date: 2013
Citation: Guo, J., Li, Z., Cheong, L.-F., Zhou, S.Z. (2013). Video co-segmentation for meaningful action extraction. Proceedings of the IEEE International Conference on Computer Vision : 2232-2239. ScholarBank@NUS Repository.
Abstract: Given a pair of videos having a common action, our goal is to simultaneously segment this pair of videos to extract this common action. As a preprocessing step, we first remove background trajectories by a motion-based figure-ground segmentation. To remove the remaining background and those extraneous actions, we propose the trajectory co-saliency measure, which captures the notion that trajectories recurring in all the videos should have their mutual saliency boosted. This requires a trajectory matching process that can compare trajectories of different lengths that are not necessarily spatiotemporally aligned, yet remains discriminative enough despite significant intra-class variation in the common action. We further leverage graph matching to enforce geometric coherence between regions so as to reduce feature ambiguity and matching errors. Finally, to classify the trajectories into common action and action outliers, we formulate the problem as a binary labeling of a Markov Random Field, in which the data term is measured by the trajectory co-saliency and the smoothness term is measured by the spatiotemporal consistency between trajectories. To evaluate the performance of our framework, we introduce a dataset containing clips that have animal actions as well as human actions. Experimental results show that the proposed method performs well in common action extraction. © 2013 IEEE.
Source Title: Proceedings of the IEEE International Conference on Computer Vision
ISBN: 9781479928392
DOI: 10.1109/ICCV.2013.278
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.