|Title:||Modeling Layered Meaning with Gesture Parameters|
|Authors:||Ong, S.C.W.; Ranganath, S.; Venkatesh, Y.V.|
|Source:||Ong, S.C.W., Ranganath, S., Venkatesh, Y.V. (2002). Modeling Layered Meaning with Gesture Parameters. Proceedings of the 7th International Conference on Control, Automation, Robotics and Vision, ICARCV 2002: 1591-1596. ScholarBank@NUS Repository.|
|Abstract:||Signs produced by gestures (such as in American Sign Language) can have a basic meaning coupled with additional meanings that act like layers added to the basic meaning of the sign. These layered meanings are conveyed by systematic temporal and spatial modification of the basic form of the gesture. The work reported in this paper seeks to recognize temporal and spatial modifiers of hand movement and to integrate them with the recognition of the basic meaning of the sign. To this end, a Bayesian network framework is explored with a simulated vocabulary of 4 basic signs, which give rise to 14 different combinations of basic and layered meanings. We approach the problem of deciphering layered meanings by drawing analogies to the gesture parameters in the Parametric HMM, which represent systematic spatial modifications to gesture movement. Various Bayesian network structures were compared for recognizing the signs with layered meanings; the best-performing network yielded 85.5% accuracy.|
|Source Title:||Proceedings of the 7th International Conference on Control, Automation, Robotics and Vision, ICARCV 2002|
|Appears in Collections:||Staff Publications|
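The abstract describes jointly inferring a sign's basic meaning and a layered-meaning modifier from observed movement features using a Bayesian network. A minimal sketch of that idea, assuming a toy discrete network with hypothetical signs, modifiers, feature bins, and probabilities (not the authors' actual model or vocabulary), where the posterior over (sign, modifier) pairs is computed by enumeration:

```python
from itertools import product

SIGNS = ["A", "B", "C", "D"]           # hypothetical 4-sign vocabulary
MODIFIERS = ["none", "fast", "large"]  # hypothetical temporal/spatial modifiers

# Priors over sign and modifier (assumed uniform and independent for the sketch).
p_sign = {s: 1.0 / len(SIGNS) for s in SIGNS}
p_mod = {m: 1.0 / len(MODIFIERS) for m in MODIFIERS}

def p_obs_given(sign, mod, speed, amplitude):
    """Toy likelihood: the modifier systematically shifts the expected
    speed/amplitude bins of the sign's basic movement."""
    base_speed = {"A": 0, "B": 1, "C": 0, "D": 1}[sign]
    base_amp = {"A": 0, "B": 0, "C": 1, "D": 1}[sign]
    exp_speed = base_speed + (1 if mod == "fast" else 0)
    exp_amp = base_amp + (1 if mod == "large" else 0)
    # Observation matches the modified expectation with high probability.
    p = 0.8 if speed == exp_speed else 0.1
    p *= 0.8 if amplitude == exp_amp else 0.1
    return p

def posterior(speed, amplitude):
    """Joint posterior P(sign, modifier | observation) by enumeration."""
    scores = {
        (s, m): p_sign[s] * p_mod[m] * p_obs_given(s, m, speed, amplitude)
        for s, m in product(SIGNS, MODIFIERS)
    }
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

post = posterior(speed=1, amplitude=0)
best = max(post, key=post.get)
```

In this toy model, an observation with an elevated speed bin is explained about equally well by a fast-modified slow sign or an unmodified fast sign; the paper's contribution is comparing network structures that resolve such ambiguity from richer movement features.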
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.