Please use this identifier to cite or link to this item:
https://doi.org/10.1016/j.jvlc.2005.10.002
Title: Anatomy-based face reconstruction for animation using multi-layer deformation
Authors: Zhang, Y.; Sim, T.; Tan, C.L.; Sung, E.
Keywords: Anatomy-based model; Face reconstruction; Facial animation; Multi-layer deformation; Multi-layer skin/muscle/skull structure; Scanned data
Issue Date: 2006
Citation: Zhang, Y., Sim, T., Tan, C.L., Sung, E. (2006). Anatomy-based face reconstruction for animation using multi-layer deformation. Journal of Visual Languages and Computing, 17(2), 126-160. ScholarBank@NUS Repository. https://doi.org/10.1016/j.jvlc.2005.10.002
Abstract: This paper presents a novel multi-layer deformation (MLD) method for reconstructing animatable, anatomy-based human facial models with minimal manual intervention. Our method is based on adapting a prototype model with the multi-layer anatomical structure to the acquired range data in an "outside-in" manner: deformation applied to the external skin layer is propagated, along with the subsequent transformations, to the muscles, with the final effect of warping the underlying skull. The prototype model has a known topology and incorporates a multi-layer structure hierarchy of physically based skin, muscles, and skull. In the MLD, a global alignment is first carried out to adapt the position, size, and orientation of the prototype model to the scanned data, based on measurements between a subset of specified anthropometric landmarks. In the skin layer adaptation, the generic skin mesh is represented as a dynamic deformable model subjected to internal forces stemming from the elastic properties of the surface and external forces generated by input data points and features. A fully automated approach has been developed for adapting the underlying muscle layer, which consists of three types of physically based facial muscle models. MLD then deforms a set of automatically generated skull feature points according to the adapted external skin and muscle layers. The new positions of these feature points are used to drive a volume morphing applied to the template skull model. We demonstrate our method by applying it to generate a wide range of different facial models on which various facial expressions are animated. © 2005 Elsevier Ltd. All rights reserved.
Source Title: Journal of Visual Languages and Computing
URI: http://scholarbank.nus.edu.sg/handle/10635/43140
ISSN: 1045-926X
DOI: 10.1016/j.jvlc.2005.10.002
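The skin-layer adaptation step described in the abstract (a deformable generic mesh pulled by internal elastic forces and external forces from the scanned range data) can be sketched roughly as below. The function name, parameters, and the specific force formulation (Laplacian spring forces toward neighbor centroids, nearest-point attraction to the scan, explicit Euler integration) are illustrative assumptions for this sketch, not the formulation used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def adapt_skin_mesh(verts, neighbors, scan_points,
                    k_elastic=0.5, k_data=0.3, n_iters=100, dt=0.1):
    """Hypothetical sketch: pull a generic skin mesh toward scanned range data.

    verts       : (N, 3) array of prototype skin-mesh vertex positions.
    neighbors   : list of per-vertex neighbor index lists (mesh topology).
    scan_points : (M, 3) array of acquired range-scan points.
    """
    verts = verts.copy()
    tree = cKDTree(scan_points)              # nearest scan-point lookup
    for _ in range(n_iters):
        # Internal force: spring-like pull of each vertex toward the
        # centroid of its topological neighbors (elastic surface behavior).
        internal = np.zeros_like(verts)
        for i, nbrs in enumerate(neighbors):
            internal[i] = verts[list(nbrs)].mean(axis=0) - verts[i]
        # External force: attraction toward the closest scanned data point.
        _, idx = tree.query(verts)
        external = scan_points[idx] - verts
        # Explicit Euler update of the deformable surface.
        verts += dt * (k_elastic * internal + k_data * external)
    return verts
```

In this simplified picture, feature correspondences (e.g., anthropometric landmarks) would add a further external force term that anchors selected vertices to their designated scan points, complementing the generic nearest-point attraction shown here.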
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
SCOPUS™ citations: 27 (checked on Jun 28, 2022)
Web of Science™ citations: 14 (checked on Jun 28, 2022)