|Title:||Multi-stream temporally varying weight regression for cross-lingual speech recognition|
|Keywords:||decision tree clustering|
|Source:||Liu, S.,Sim, K.C. (2013). Multi-stream temporally varying weight regression for cross-lingual speech recognition. 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2013 - Proceedings : 434-439. ScholarBank@NUS Repository. https://doi.org/10.1109/ASRU.2013.6707769|
|Abstract:||Building a good Automatic Speech Recognition (ASR) system with limited resources is a very challenging task due to the many sources of speech variation. Multilingual and cross-lingual speech recognition techniques are commonly used for this task. This paper investigates the recently proposed Temporally Varying Weight Regression (TVWR) method for cross-lingual speech recognition. TVWR uses posterior features to implicitly model the long-term temporal structure of acoustic patterns. By leveraging well-trained foreign recognizers, high-quality monophone/state posteriors can be easily incorporated into TVWR to boost ASR performance on low-resource languages. Furthermore, multi-stream TVWR is proposed, where multiple sets of posterior features are used to incorporate richer (temporal and spatial) context information. Finally, a separate state-tying for the TVWR regression parameters is used to better utilize the more reliable posterior features. Experimental results are reported for English and Malay speech recognition with limited resources. Using Czech, Hungarian and Russian posterior features, TVWR was found to consistently outperform tandem systems trained on the same features. © 2013 IEEE.|
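The abstract's core idea, temporally varying mixture weights driven by posterior features and their combination across streams, can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function names, shapes, softmax parameterization, and log-linear stream combination here are illustrative assumptions.

```python
import numpy as np

def tvwr_weights(posteriors, regression, base_log_weights):
    """Temporally varying mixture weights (illustrative sketch).

    Static per-component log-weights are modulated frame by frame via a
    linear regression on posterior features, then renormalized by softmax.
    posteriors: (T, P) posterior features from a foreign recognizer
    regression: (M, P) regression parameters, one row per mixture component
    base_log_weights: (M,) static log mixture weights
    Returns (T, M) weights, each row summing to 1.
    """
    scores = base_log_weights + posteriors @ regression.T        # (T, M)
    scores -= scores.max(axis=1, keepdims=True)                  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)

def multi_stream_weights(streams, regressions, base_log_weights, stream_weights):
    """Multi-stream variant (assumed combination rule): merge the weights
    produced by several posterior streams (e.g. Czech/Hungarian/Russian
    phone posteriors) by a weighted log-linear combination."""
    log_w = sum(sw * np.log(tvwr_weights(p, R, base_log_weights) + 1e-12)
                for p, R, sw in zip(streams, regressions, stream_weights))
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)
```

In a full system these weights would rescale the Gaussian component likelihoods of each tied state at every frame; separate state-tying for the regression parameters, as the abstract notes, lets the weight regressors share data differently from the Gaussians.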
|Source Title:||2013 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2013 - Proceedings|
|Appears in Collections:||Staff Publications|