Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/53523
Title: Generating animatable 3D virtual faces from scan data
Authors: Zhang, Y.
Issue Date: 2003
Citation: Zhang, Y., Sim, T. (2003). Generating animatable 3D virtual faces from scan data. Proceedings - IEEE International Workshop on Robot and Human Interactive Communication: 43-48. ScholarBank@NUS Repository. https://doi.org/10.1109/ROMAN.2003.1251791
Abstract: In this paper, a new adaptation-based approach is presented for reconstructing animatable facial models of individual people from scan data with minimal user intervention. A generic control model that represents both the face shape and the layered biomechanical structure serves as the starting point for the face adaptation algorithm. After a minimal set of anthropometric landmarks has been specified on the 2D images, the algorithm automatically recovers their 3D positions on the face surface using a projection-mapping approach. Based on a series of measurements between the 3D landmarks, a global adaptation aligns the generic control model to the measured surface data using affine transformations. A local adaptation then deforms the geometry of the generic model to fit all of its vertices to the scanned surface. The reconstructed model accurately represents the shape of the individual face and can synthesize various expressions using transferred muscle actuators. Key features of the method are a near-automated reconstruction process, no restrictions on the position and orientation of the generic model and scanned surface, and an efficient framework for animating any human dataset. © 2003 IEEE.
Source Title: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication
URI: http://scholarbank.nus.edu.sg/handle/10635/53523
ISBN: 078038136X
DOI: 10.1109/ROMAN.2003.1251791
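The global adaptation step described in the abstract aligns the generic control model to the scanned surface with affine transformations estimated from corresponding 3D landmarks. A minimal sketch of that idea, assuming a single least-squares affine fit between landmark sets (the paper's actual formulation, landmark names, and coordinates are not given here; all values below are hypothetical):

```python
# Illustrative sketch only, not the authors' implementation: fit a single
# 3x4 affine transform A that maps generic-model landmarks onto the
# corresponding landmarks measured on the scanned surface, in the
# least-squares sense. The landmark coordinates are made up.
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine A (3x4) such that dst ~= A @ [src; 1]."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])      # n x 4 homogeneous source points
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ A.T ~= dst
    return A_T.T                               # 3 x 4

def apply_affine(A, pts):
    """Apply the 3x4 affine transform to an n x 3 array of points."""
    return pts @ A[:, :3].T + A[:, 3]

# Hypothetical landmark sets; at least 4 non-coplanar points are needed.
generic = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                    [0, 0, 1], [1, 1, 1]], dtype=float)
scanned = 2.0 * generic + np.array([0.5, -0.2, 1.0])  # a known affine map

A = fit_affine(generic, scanned)
aligned = apply_affine(A, generic)
print(np.allclose(aligned, scanned))  # True: the affine relation is recovered
```

In the paper's pipeline this global alignment is followed by a local adaptation that deforms individual vertices to the scan, which a rigid or affine fit alone cannot capture.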
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.