Please use this identifier to cite or link to this item:
https://doi.org/10.1109/ASRU.2013.6707743
Title: Improving robustness of deep neural networks via spectral masking for automatic speech recognition
Authors: Li, B.; Sim, K.C.
Keywords: Deep Neural Network; Noise Robustness; Spectral Masking
Issue Date: 2013
Citation: Li, B., Sim, K.C. (2013). Improving robustness of deep neural networks via spectral masking for automatic speech recognition. 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2013 - Proceedings: 279-284. ScholarBank@NUS Repository. https://doi.org/10.1109/ASRU.2013.6707743
Abstract: The performance of human listeners degrades far more slowly than that of machines in noisy environments. This has been attributed to the human ability to perform auditory scene analysis, which separates speech from interference prior to recognition. In this work, we investigate two mask estimation approaches, namely state-dependent and deep neural network (DNN) based estimation, for separating speech from noise to improve the noise robustness of DNN acoustic models. The second approach is shown experimentally to outperform the first. Because of the stereo-data-based training and the ill-defined masks for speech with channel distortion, neither method generalizes well to unseen conditions, and neither beats the multi-style trained baseline system. However, the model trained on masked features is strongly complementary to the baseline model: a simple average of the two systems' posteriors yields word error rates of 4.4% on Aurora2 and 12.3% on Aurora4. © 2013 IEEE.
Source Title: 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU 2013 - Proceedings
URI: http://scholarbank.nus.edu.sg/handle/10635/78187
ISBN: 9781479927562
DOI: 10.1109/ASRU.2013.6707743
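The two technical ingredients named in the abstract, applying an estimated spectral mask to the filterbank features and averaging the frame-level posteriors of the baseline and masked-feature systems, are simple enough to sketch. The Python/NumPy fragment below is a minimal illustration under assumed shapes; all function names, the mel-domain flooring, and the DNN mask source are hypothetical, not taken from the paper's implementation.

import numpy as np

# Assumed shapes (hypothetical, for illustration only):
#   log_mel : (T, F) noisy log mel filterbank features
#   mask    : (T, F) estimated speech-presence mask in [0, 1],
#             e.g. the output of a mask-estimation DNN (not shown)
#   post_*  : (T, S) frame-level senone posteriors from each system

def mask_log_mel(log_mel: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Attenuate noise-dominated time-frequency bins.

    The mask is applied in the linear mel domain, with a small floor so
    the logarithm stays well defined, then mapped back to the log domain.
    """
    mel = np.exp(log_mel)                       # back to linear mel energies
    return np.log(np.maximum(mel * mask, 1e-8))

def average_posteriors(post_baseline: np.ndarray,
                       post_masked: np.ndarray) -> np.ndarray:
    """Simple frame-level average of the two systems' senone posteriors."""
    return 0.5 * (post_baseline + post_masked)

In a hybrid DNN-HMM decoder, the averaged posteriors would typically be divided by the senone priors to obtain scaled likelihoods before decoding; the 4.4% (Aurora2) and 12.3% (Aurora4) word error rates reported in the abstract come from this kind of simple score-level combination.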
Appears in Collections: Staff Publications
Files in This Item: There are no files associated with this item.