Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICVGIP.2008.91
Title: Integrated detect-track framework for Multi-view face detection in video
Authors: Anoop, K.R.
Anandathirtha, P.
Ramakrishnan, K.R.
Kankanhalli, M.S. 
Issue Date: 2008
Source: Anoop, K.R., Anandathirtha, P., Ramakrishnan, K.R., Kankanhalli, M.S. (2008). Integrated detect-track framework for Multi-view face detection in video. Proceedings - 6th Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP 2008: 336-343. ScholarBank@NUS Repository. https://doi.org/10.1109/ICVGIP.2008.91
Abstract: This paper proposes a multi-view face detection framework for video based on experiential sampling and a Meanshift tracker. In this framework, instead of performing face detection at every position in a frame, we determine certain key positions at which to run the multi-view face detectors. These key positions are statistical samples drawn from a density function that is estimated from color cues, past detection results, Meanshift tracker results, and a temporal continuity model. These samples are then propagated using a particle filter framework. We use a Meanshift tracker to track faces that are missed by the multi-view face detectors. Our framework yields a significant reduction in computation time and covers the full 180-degree pose range of the face. We also propose a novel likelihood measure for track termination, which becomes important when the tracker is used for detection. © 2008 IEEE.
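
The abstract outlines a sampling-based detect-track loop. The following is a minimal illustrative sketch of that loop, not the authors' implementation: it assumes an OpenCV frontal Haar cascade standing in for the paper's multi-view detectors, an HSV back-projection as the color cue, and hypothetical names such as attention_density, sample_positions, NUM_SAMPLES, and PATCH.

import cv2
import numpy as np

NUM_SAMPLES = 100   # key positions sampled per frame (illustrative value)
PATCH = 80          # side of the local window searched around each sample

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def attention_density(hsv, face_hist, prev_faces):
    # Color cue: back-project a face hue histogram onto the frame.
    cue = cv2.calcBackProject([hsv], [0], face_hist, [0, 180], 1).astype(np.float64)
    # Temporal continuity: favor regions where faces were found before.
    for (x, y, w, h) in prev_faces:
        cue[y:y + h, x:x + w] += 255.0
    cue += 1e-6                      # keep a small exploration probability everywhere
    return cue / cue.sum()

def sample_positions(density, n=NUM_SAMPLES):
    # Particle-filter-style draw of n key positions from the density.
    idx = np.random.choice(density.size, size=n, p=density.ravel())
    ys, xs = np.unravel_index(idx, density.shape)
    return list(zip(xs.tolist(), ys.tolist()))

def process_frame(frame, face_hist, tracks):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = []
    # Detect only in small windows around the sampled key positions.
    for (x, y) in sample_positions(attention_density(hsv, face_hist, tracks)):
        x0, y0 = max(0, x - PATCH // 2), max(0, y - PATCH // 2)
        patch = gray[y0:y0 + PATCH, x0:x0 + PATCH]
        for (fx, fy, fw, fh) in detector.detectMultiScale(patch):
            faces.append((x0 + fx, y0 + fy, fw, fh))
    # Meanshift fallback: keep tracking faces the detectors missed this frame.
    prob = cv2.calcBackProject([hsv], [0], face_hist, [0, 180], 1)
    for (x, y, w, h) in tracks:
        if not any(abs(x - fx) < w and abs(y - fy) < h for (fx, fy, _, _) in faces):
            _, moved = cv2.meanShift(prob, (x, y, w, h), term_crit)
            # The paper's track-termination likelihood test would go here
            # before accepting the moved window as a face.
            faces.append(moved)
    return faces

In use, face_hist would be a normalized hue histogram built once with cv2.calcHist from an initially detected face region, and the list returned by process_frame would be passed back in as tracks for the next frame.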
Source Title: Proceedings - 6th Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP 2008
URI: http://scholarbank.nus.edu.sg/handle/10635/41165
ISBN: 9780769534763
DOI: 10.1109/ICVGIP.2008.91
Appears in Collections:Staff Publications
