Please use this identifier to cite or link to this item: https://doi.org/10.1109/MMSP.1997.602607
Title: Using HMM's in audio-to-visual conversion
Authors: Rao R.
Mersereau R.
Chen T. 
Issue Date: 1997
Publisher: Institute of Electrical and Electronics Engineers Inc.
Citation: Rao R., Mersereau R., Chen T. (1997). Using HMM's in audio-to-visual conversion. 1997 IEEE 1st Workshop on Multimedia Signal Processing, MMSP 1997 : 19-24. ScholarBank@NUS Repository. https://doi.org/10.1109/MMSP.1997.602607
Abstract: One emerging application which exploits the correlation between audio and video is speech-driven facial animation. The goal of speech-driven facial animation is to synthesize realistic video sequences from acoustic speech. Much of the previous research has implemented this audio-to-visual conversion strategy with existing techniques such as vector quantization and neural networks. In this paper, we examine how this conversion process can be accomplished with hidden Markov models.
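The abstract describes mapping acoustic speech to visual (facial) parameters with a hidden Markov model. As a minimal illustration of the general idea, the sketch below decodes a sequence of vector-quantized audio frames into a most-likely sequence of mouth shapes (visemes) with Viterbi decoding over a discrete HMM. All states, symbols, and probabilities here are invented for illustration and are not taken from the paper.

```python
# Hypothetical discrete-HMM sketch of audio-to-visual conversion:
# hidden states are mouth shapes (visemes), observations are
# vector-quantized audio feature indices. All numbers are made up.

pi = [0.6, 0.4]                  # initial probs; state 0 = closed, 1 = open
A = [[0.7, 0.3],                 # A[q][s] = P(next state s | current state q)
     [0.4, 0.6]]
B = [[0.8, 0.1, 0.1],            # B[s][o] = P(audio symbol o | state s)
     [0.1, 0.3, 0.6]]

def viterbi(obs):
    """Return the most likely hidden (viseme) state sequence for obs."""
    n = len(pi)
    # delta[s] = probability of the best path ending in state s
    delta = [pi[s] * B[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(n):
            best_p, best_q = max((prev[q] * A[q][s], q) for q in range(n))
            delta.append(best_p * B[s][o])
            ptr.append(best_q)
        back.append(ptr)
    # Backtrack from the best final state.
    state = max(range(n), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

audio_symbols = [0, 0, 2, 2, 1]  # quantized audio frames for one utterance
print(viterbi(audio_symbols))    # -> [0, 0, 1, 1, 1]
```

Each decoded state would then index a facial-animation keyframe; a real system would use continuous acoustic features and trained emission densities rather than these toy tables.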
Source Title: 1997 IEEE 1st Workshop on Multimedia Signal Processing, MMSP 1997
URI: http://scholarbank.nus.edu.sg/handle/10635/146421
ISBN: 0780337808
9780780337800
DOI: 10.1109/MMSP.1997.602607
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.