Please use this identifier to cite or link to this item:
Title: Dynamic Bayesian framework for extracting temporal structure in video
Authors: Mittal, A.; Cheong, L.F.; Sing, L.T.
Issue Date: 2001
Citation: Mittal, A., Cheong, L.F., Sing, L.T. (2001). Dynamic Bayesian framework for extracting temporal structure in video. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2: II110-II115. ScholarBank@NUS Repository.
Abstract: In this paper, we develop descriptors based on perceptual-level motion features such as time-to-collision, shot transitions, and temporal motion, and show that including them significantly enhances the representational level of the video classes; for example, violence can be detected. Temporal context cues, which have been largely neglected by present content-based retrieval (CBR) systems, are integrated into the framework. A dynamic Bayesian framework for CBR systems is designed that learns the temporal structure through the fusion of all the features. Experimental results on more than 4 hours of video are presented for a number of key applications, such as sequence identification, highlight extraction for sports, and climax or violence detection.
Source Title: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
URI: http://scholarbank.nus.edu.sg/handle/10635/43194
ISSN: 10636919
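The abstract describes fusing perceptual-level motion features over time with a dynamic Bayesian model to recognize temporal structure (e.g., climax or violence). As a rough illustration only, not the paper's actual model, a first-order dynamic Bayesian network over a hidden event class can be filtered with the standard forward recursion; the state names, transition matrix, and per-feature likelihoods below are invented for the sketch.

```python
import numpy as np

# Hypothetical hidden event classes per time step (not from the paper).
STATES = ["normal", "climax"]

# Invented transition model P(state_t | state_{t-1}): events tend to persist.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def fuse_likelihoods(feature_likelihoods):
    """Naive-Bayes fusion: multiply per-feature likelihoods P(f | state)."""
    fused = np.ones(len(STATES))
    for lik in feature_likelihoods:
        fused *= lik
    return fused

def forward_filter(observations, prior=np.array([0.5, 0.5])):
    """Standard DBN/HMM forward recursion; returns P(state_t | obs_1..t) per step."""
    belief = prior.copy()
    history = []
    for feats in observations:
        belief = T.T @ belief                       # predict across the time slice
        belief = belief * fuse_likelihoods(feats)   # update with fused evidence
        belief /= belief.sum()                      # normalize
        history.append(belief.copy())
    return history

# Toy sequence: each step carries likelihoods for two invented features,
# e.g. "temporal motion magnitude" and "short time-to-collision".
obs = [
    [np.array([0.9, 0.1]), np.array([0.8, 0.2])],  # calm evidence
    [np.array([0.2, 0.8]), np.array([0.3, 0.7])],  # violent evidence
    [np.array([0.1, 0.9]), np.array([0.2, 0.8])],
]
for t, b in enumerate(forward_filter(obs)):
    print(t, dict(zip(STATES, b.round(3))))
```

The belief shifts toward "climax" as the fused feature evidence accumulates, which is the basic mechanism by which temporal context, rather than a single frame, drives the classification.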
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.