Please use this identifier to cite or link to this item: https://doi.org/10.1109/TSMCA.2004.826274
Title: Analysis of lip geometric features for audio-visual speech recognition
Authors: Kaynak, M.N.
Zhi, Q.
Cheok, A.D. 
Sengupta, K. 
Jian, Z.
Chung, K.C. 
Issue Date: Jul-2004
Citation: Kaynak, M.N., Zhi, Q., Cheok, A.D., Sengupta, K., Jian, Z., Chung, K.C. (2004-07). Analysis of lip geometric features for audio-visual speech recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans. 34 (4): 564-570. ScholarBank@NUS Repository. https://doi.org/10.1109/TSMCA.2004.826274
Abstract: Audio-visual speech recognition, which employs both acoustic and visual speech information, extends acoustic speech recognition and significantly improves recognition accuracy in noisy environments. Although various audio-visual speech-recognition systems have been developed, a rigorous and detailed comparison of the potential geometric visual features from speakers' faces is essential. Thus, in this paper the geometric visual features are compared and analyzed rigorously for their importance in audio-visual speech recognition. Experimental results show that, among the geometric visual features analyzed, the lip vertical aperture is the most relevant, and that the visual feature vector formed by the vertical and horizontal lip apertures and the first-order derivative of the lip-corner angle leads to the best recognition results. Speech signals are modeled by hidden Markov models (HMMs), and using the optimized HMMs and the geometric visual features, the accuracies of acoustic-only, visual-only, and audio-visual speech recognition are compared. The audio-visual speech recognition scheme achieves much higher recognition accuracy than acoustic-only and visual-only speech recognition, especially at high noise levels. The experimental results show that a set of as few as three labial geometric features is sufficient to improve the recognition rate by as much as 20% (from 62% with acoustic-only information to 82% with audio-visual information at a signal-to-noise ratio of 0 dB). © 2004 IEEE.
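The abstract's best-performing visual feature vector combines the vertical and horizontal lip apertures with the first-order derivative of the lip-corner angle. A minimal sketch of how such a three-element per-frame feature vector might be computed from lip landmarks is shown below; the landmark definitions (`left`, `right`, `top`, `bottom` as lip-contour points) and the finite-difference derivative are illustrative assumptions, not the paper's exact formulation.

```python
import math

def lip_features(left, right, top, bottom):
    """Geometric lip features for one frame.

    left/right: lip-corner (x, y) points; top/bottom: midpoints of the
    upper and lower outer lip contour (assumed landmark scheme).
    Returns (vertical aperture, horizontal aperture, corner angle in radians).
    """
    v = math.dist(top, bottom)    # vertical lip aperture
    h = math.dist(left, right)    # horizontal lip aperture
    # Angle at the left lip corner between the rays to the top and
    # bottom contour midpoints.
    a_top = math.atan2(top[1] - left[1], top[0] - left[0])
    a_bot = math.atan2(bottom[1] - left[1], bottom[0] - left[0])
    theta = abs(a_top - a_bot)
    return v, h, theta

def feature_vector(prev_frame, curr_frame, dt):
    """Three-element visual feature vector for one frame:
    [vertical aperture, horizontal aperture, d(theta)/dt],
    with the angle derivative estimated by a backward difference."""
    v, h, theta = lip_features(*curr_frame)
    _, _, theta_prev = lip_features(*prev_frame)
    return [v, h, (theta - theta_prev) / dt]
```

In an audio-visual recognizer of the kind described, a sequence of such vectors (optionally concatenated with acoustic features) would be fed to the HMMs as per-frame observations.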
Source Title: IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
URI: http://scholarbank.nus.edu.sg/handle/10635/55084
ISSN: 1083-4427
DOI: 10.1109/TSMCA.2004.826274
Appears in Collections: Staff Publications


