Title: Recognition of visual speech elements using adaptively boosted hidden Markov models
Authors: Foo, S.W.
Lian, Y. 
Dong, L.
Keywords: Adaptive boosting (AdaBoost)
Automatic lip reading
Hidden Markov model (HMM)
Visual speech processing
Issue Date: May-2004
Citation: Foo, S.W., Lian, Y., Dong, L. (2004-05). Recognition of visual speech elements using adaptively boosted hidden Markov models. IEEE Transactions on Circuits and Systems for Video Technology 14 (5) : 693-705. ScholarBank@NUS Repository.
Abstract: The performance of automatic speech recognition (ASR) systems can be significantly enhanced with additional information from visual speech elements such as the movement of the lips, tongue, and teeth, especially in noisy environments. In this paper, a novel approach for the recognition of visual speech elements is presented. The approach uses adaptive boosting (AdaBoost) and hidden Markov models (HMMs) to build an AdaBoost-HMM classifier. The composite HMMs of the AdaBoost-HMM classifier are trained to cover different groups of training samples using the AdaBoost technique and a biased Baum-Welch training method. By combining the decisions of the component classifiers of the composite HMMs according to a novel probability synthesis rule, a more complex decision boundary is formed than that of a single-HMM classifier. The method is applied to the recognition of the basic visual speech elements. Experimental results show that the AdaBoost-HMM classifier outperforms the traditional HMM classifier in accuracy, especially for visemes extracted from contexts.
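The AdaBoost reweighting loop mentioned in the abstract can be illustrated with a minimal sketch. The code below is not the paper's AdaBoost-HMM method: the weak learners here are one-dimensional threshold stumps rather than HMMs trained with the biased Baum-Welch method, and a plain weighted vote stands in for the paper's probability synthesis rule. It only shows the generic AdaBoost idea the paper builds on: each round biases the sample weights toward examples that earlier component classifiers misclassified, and the combined vote traces a more complex decision boundary than any single component.

```python
# Illustrative AdaBoost sketch (threshold stumps, NOT the paper's
# AdaBoost-HMM classifier). All names here are hypothetical.
import math

def train_adaboost(xs, ys, rounds=10):
    """xs: list of floats; ys: list of +1/-1 labels."""
    n = len(xs)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []                          # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        # exhaustively pick the stump with the lowest weighted error
        for thr in sorted(set(xs)):
            for pol in (1, -1):
                preds = [pol if x >= thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # component weight
        ensemble.append((alpha, thr, pol))
        # bias the weights toward the samples this round got wrong
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    # combine the component decisions by a weighted vote
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

On the labeling `+,+,-,-,+,+` over points 0..5, no single stump is correct, but three boosted stumps already classify every training point, which is the "more complex decision boundary" effect the abstract describes.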
Source Title: IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2004.826773
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.