Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/42048
Title: Sequential classification criteria for NNs in automatic speech recognition
Authors: Wang, G.
Sim, K.C. 
Keywords: Discriminative training
Lattices
Neural networks
Issue Date: 2011
Citation: Wang, G., Sim, K.C. (2011). Sequential classification criteria for NNs in automatic speech recognition. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH: 441-444. ScholarBank@NUS Repository.
Abstract: Neural networks (NNs) are discriminative classifiers which have been successfully integrated with hidden Markov models (HMMs), either in hybrid NN/HMM or tandem connectionist systems. Typically, the NNs are trained with the frame-based cross-entropy criterion to classify phonemes or phoneme states. However, for word recognition, the word error rate is more closely related to sequence classification criteria, such as maximum mutual information and minimum phone error. In this paper, lattice-based sequence classification criteria are used to train the NNs in both the hybrid NN/HMM system and the tandem system. A product-of-experts-based factorization and smoothing scheme is proposed for the hybrid system to scale lattice-based NN training up to 6000 triphone states. Experimental results on the WSJCAM0 corpus reveal that the NNs trained with the sequential classification criterion yield a 24.2% relative improvement over cross-entropy-trained NNs for the hybrid system. Copyright © 2011 ISCA.
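To illustrate the kind of sequence criterion the abstract refers to, the sketch below computes a maximum mutual information (MMI) objective for a single utterance: the scaled log posterior of the reference hypothesis against a denominator sum over all competing hypotheses (in practice the lattice). This is a minimal toy illustration, not the paper's implementation; the function name, the toy scores, and the flat hypothesis list standing in for a lattice are all assumptions for illustration.

```python
import math

def mmi_objective(ref_loglik, hyp_logliks, kappa=1.0):
    """MMI objective for one utterance (toy sketch, not the paper's code).

    ref_loglik:  joint log-score log[p(X|W_ref) P(W_ref)] of the reference.
    hyp_logliks: joint log-scores of all competing hypotheses, including
                 the reference itself (a flat list stands in for the lattice).
    kappa:       acoustic scaling factor commonly used in lattice-based training.
    Returns the scaled log posterior of the reference; higher is better.
    """
    scaled = [kappa * l for l in hyp_logliks]
    # log-sum-exp over the denominator for numerical stability
    m = max(scaled)
    log_den = m + math.log(sum(math.exp(s - m) for s in scaled))
    return kappa * ref_loglik - log_den

# Toy "lattice" with three hypotheses; the reference ties for the best score,
# so the objective is a small negative log posterior close to zero.
ref = -10.0
competitors = [-10.0, -12.0, -15.0]
print(round(mmi_objective(ref, competitors), 4))
```

Frame-based cross-entropy would score each frame independently; this objective instead rewards the whole reference sequence relative to its competitors, which is why it tracks word error rate more closely.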
Source Title: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
URI: http://scholarbank.nus.edu.sg/handle/10635/42048
ISSN: 19909772
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.