Title: An enhance framework on hair modeling and real-time animation
Keywords: Hair Modeling, Hair Animation, Collision Detection, Collision Response, Real-time Simulation
Issue Date: 7-Jan-2004
Citation: LIANG WENQI (2004-01-07). An enhance framework on hair modeling and real-time animation. ScholarBank@NUS Repository.
Abstract: Maximizing visual quality is a major challenge in real-time animation. A previous approach animated hair in real time using a 2D strip representation with texture mapping; due to its 2D nature, it lacks the volumetric appearance of real hair. This thesis presents a technique to solve that problem. Hair is still modeled as 2D strips; however, each visible strip is tessellated and then warped into a U-shape for rendering, and alpha and texture mapping are applied to the resulting polygon meshes. The U-shape is not explicitly stored in the data structure. Because of the U-shape technique, the collision detection and response mechanisms of the earlier method are also adapted, covering both collisions between hair strips and other objects (e.g., the scalp and shoulders) and the self-collisions of hair. Finally, a hair modeling function capable of producing different hairstyles is also implemented.
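The warping step described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, its parameters, and the choice of a semicircular arc for the U-shaped cross-section are assumptions for demonstration, not the thesis's exact construction.

```python
import math

def warp_strip_to_u(rows, cols, length, width, depth):
    """Tessellate a flat 2D hair strip into a rows x cols grid and
    warp each cross-section row into a U-shape, modeled here as a
    semicircular arc of the given depth (an illustrative assumption).
    Returns a row-major list of (x, y, z) vertices."""
    verts = []
    for i in range(rows):
        y = length * i / (rows - 1)       # position along the strip
        for j in range(cols):
            t = j / (cols - 1)            # 0..1 across the strip width
            angle = math.pi * t           # sweep half a circle
            x = -(width / 2.0) * math.cos(angle)  # left edge to right edge
            z = depth * math.sin(angle)           # bulge outward at the center
            verts.append((x, y, z))
    return verts
```

The warped vertices would then be triangulated into a mesh and given alpha and texture maps at render time, so the U-shape never needs to be stored explicitly in the strip's data structure.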
Appears in Collections:Master's Theses (Open)

Files in This Item:
File: An_Enhance_Framework_on_Hair__Modeling.pdf (1.44 MB, Adobe PDF)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.