Please use this identifier to cite or link to this item:
https://doi.org/10.1109/MMSP.1997.602607
DC Field | Value
---|---
dc.title | Using HMM's in audio-to-visual conversion
dc.contributor.author | Rao R.
dc.contributor.author | Mersereau R.
dc.contributor.author | Chen T.
dc.date.accessioned | 2018-08-21T05:13:25Z
dc.date.available | 2018-08-21T05:13:25Z
dc.date.issued | 1997
dc.identifier.citation | Rao R., Mersereau R., Chen T. (1997). Using HMM's in audio-to-visual conversion. 1997 IEEE 1st Workshop on Multimedia Signal Processing, MMSP 1997 : 19-24. ScholarBank@NUS Repository. https://doi.org/10.1109/MMSP.1997.602607
dc.identifier.isbn | 0780337808
dc.identifier.isbn | 9780780337800
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/146421
dc.description.abstract | One emerging application which exploits the correlation between audio and video is speech-driven facial animation. The goal of speech-driven facial animation is to synthesize realistic video sequences from acoustic speech. Much of the previous research has implemented this audio-to-visual conversion strategy with existing techniques such as vector quantization and neural networks. In this paper, we examine how this conversion process can be accomplished with hidden Markov models.
dc.publisher | Institute of Electrical and Electronics Engineers Inc.
dc.source | Scopus
dc.type | Conference Paper
dc.contributor.department | OFFICE OF THE PROVOST
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi | 10.1109/MMSP.1997.602607
dc.description.sourcetitle | 1997 IEEE 1st Workshop on Multimedia Signal Processing, MMSP 1997
dc.description.page | 19-24
dc.published.state | published
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
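
The abstract describes converting acoustic speech into visual facial-animation parameters with hidden Markov models. As an illustration only, and not the authors' formulation, the sketch below shows one simple way such a conversion can be wired up: a Gaussian HMM is trained on audio feature vectors, each hidden state is associated with the average of the visual parameters aligned with it in the training data, and new audio is decoded into a state sequence that is replaced frame by frame with those per-state visual parameters. The `hmmlearn` library, the synthetic feature dimensions, and the state-averaging step are all assumptions made for the example.

```python
# Illustrative sketch of HMM-based audio-to-visual conversion (assumptions:
# hmmlearn, synthetic features, per-state averaging; not the paper's method).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data: 500 frames of 12-D audio features
# (e.g. cepstral coefficients) aligned with 6-D visual parameters
# (e.g. mouth-shape measurements).
audio_train = rng.normal(size=(500, 12))
visual_train = rng.normal(size=(500, 6))

# 1. Train a Gaussian HMM on the audio observations alone.
n_states = 8
model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(audio_train)

# 2. Decode the training audio and average the visual parameters that are
#    frame-aligned with each hidden state.
train_states = model.predict(audio_train)
state_to_visual = np.zeros((n_states, visual_train.shape[1]))
for s in range(n_states):
    frames = train_states == s
    if frames.any():
        state_to_visual[s] = visual_train[frames].mean(axis=0)

# 3. Convert new audio: decode its state sequence, then emit the
#    per-state visual parameters frame by frame.
audio_new = rng.normal(size=(100, 12))
decoded = model.predict(audio_new)
visual_pred = state_to_visual[decoded]   # (100, 6) visual trajectory
print(visual_pred.shape)
```

A per-state lookup like this is the crudest HMM-based mapping; richer variants typically condition the visual estimate on the audio observation within each state rather than using a single mean, but the sketch is only meant to make the overall conversion pipeline concrete.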