Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICCV.2011.6126248
DC Field: Value
dc.title: Learning universal multi-view age estimator using video context
dc.contributor.author: Song, Z.
dc.contributor.author: Ni, B.
dc.contributor.author: Guo, D.
dc.contributor.author: Sim, T.
dc.contributor.author: Yan, S.
dc.date.accessioned: 2013-07-04T08:27:32Z
dc.date.available: 2013-07-04T08:27:32Z
dc.date.issued: 2011
dc.identifier.citation: Song, Z., Ni, B., Guo, D., Sim, T., Yan, S. (2011). Learning universal multi-view age estimator using video context. Proceedings of the IEEE International Conference on Computer Vision: 241-248. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2011.6126248
dc.identifier.isbn: 9781457711015
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/41437
dc.description.abstract: Many existing techniques for analyzing face images assume that the faces are nearly frontal. Generalizing to non-frontal faces is often difficult, due to a dearth of ground truth for non-frontal faces and also to the inherent challenges in handling pose variations. In this work, we investigate how to learn a universal multi-view age estimator by harnessing 1) unlabeled web videos, 2) a publicly available labeled frontal face corpus, and 3) zero or more non-frontal faces with age labels. First, a large, diverse human-involved video corpus is collected from online video sharing websites. Then, multi-view face detection and tracking are performed to build a large set of frontal-vs-profile face bundles, each of which comes from the same tracking sequence and thus exhibits the same age. These unlabeled face bundles constitute the so-called video context, and the parametric multi-view age estimator is trained by 1) enforcing the face-to-age relation for the partially labeled faces, 2) imposing consistency of the predicted ages for the non-frontal and frontal faces within each face bundle, and 3) mutually constraining the multi-view age models with the spatial correspondence priors derived from the face bundles. Our multi-view age estimator performs well on a realistic evaluation dataset that contains faces under varying poses and whose ground-truth ages were manually annotated. © 2011 IEEE.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/ICCV.2011.6126248
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1109/ICCV.2011.6126248
dc.description.sourcetitle: Proceedings of the IEEE International Conference on Computer Vision
dc.description.page: 241-248
dc.description.coden: PICVE
dc.identifier.isiut: NOT_IN_WOS
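
Note: The abstract above describes a training objective with three coupled terms. The following Python sketch only illustrates that structure and is not the authors' implementation: the linear-regressor form, the function and variable names, the loss weights, and the simple weight-similarity penalty used in place of the paper's spatial correspondence priors are all assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code) of a three-term objective
# mirroring the abstract: supervised ages on labeled frontal faces,
# age consistency within frontal-vs-profile bundles, and a cross-view
# coupling of the two age models.
import numpy as np

def bundle_loss(W_frontal, W_profile, labeled_frontal, bundles,
                lam_consistency=1.0, lam_correspondence=0.1):
    """Combine the three constraints described in the abstract.

    W_frontal, W_profile : per-view linear age regressors (weight vectors);
                           a hypothetical model choice.
    labeled_frontal      : list of (feature, age) pairs with ground-truth ages.
    bundles              : list of (frontal_feature, profile_feature) pairs
                           from the same tracking sequence, hence sharing
                           the same (unknown) age.
    """
    # 1) Face-to-age relation on the labeled (frontal) faces.
    supervised = sum((x @ W_frontal - age) ** 2 for x, age in labeled_frontal)

    # 2) Age consistency within each frontal-vs-profile face bundle:
    #    both views of the same person should get the same predicted age.
    consistency = sum((xf @ W_frontal - xp @ W_profile) ** 2
                      for xf, xp in bundles)

    # 3) Cross-view coupling of the age models; here a plain weight-similarity
    #    penalty stands in for the paper's spatial correspondence priors.
    correspondence = np.sum((W_frontal - W_profile) ** 2)

    return (supervised
            + lam_consistency * consistency
            + lam_correspondence * correspondence)
```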
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
