Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/39924
DC Field: Value
dc.title: Thin client front-end processor for distributed speech recognition
dc.contributor.author: Chow, K.F.
dc.contributor.author: Liew, S.C.
dc.contributor.author: Lua, K.T.
dc.date.accessioned: 2013-07-04T07:52:42Z
dc.date.available: 2013-07-04T07:52:42Z
dc.date.issued: 2003
dc.identifier.citation: Chow, K.F., Liew, S.C., Lua, K.T. (2003). Thin client front-end processor for distributed speech recognition. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2: 29-32. ScholarBank@NUS Repository.
dc.identifier.issn: 1520-6149
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/39924
dc.description.abstract: We present a front-end feature processor for distributed speech recognition on an integer-based DSP, employing block floating point and range reduction for the computation of elementary functions. We show that by reducing the numerical accuracy of the block floating point and the elementary functions, we can lower the operational requirements to 12.6 wMOPS, 2.4 kWords of RAM, and 3.7 kWords of ROM. On a small vocabulary of 800 words (6.4 perplexity) and a large vocabulary of 20,200 words (102.5 perplexity), our optimized DSP front-end produces recognition accuracy comparable to an equivalent implementation on a floating-point processor, without requiring the recognition system to be retrained on features produced by our DSP front-end.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.sourcetitle: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
dc.description.volume: 2
dc.description.page: 29-32
dc.description.coden: IPROD
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
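As a purely illustrative aside (not taken from the paper, whose fixed-point formats and polynomial coefficients are not given in this record), the range-reduction technique mentioned in the abstract can be sketched as follows: an argument x is split into a mantissa m in [1, 2) and a power of two, so that ln(x) = ln(m) + e*ln(2), and only ln(m) needs a short polynomial approximation whose order sets the accuracy/cost trade-off.

#include <math.h>
#include <stdio.h>

/* Illustrative sketch only: range reduction for ln(x), the kind of
 * elementary-function trick the abstract refers to. A real integer-DSP
 * front-end would use fixed-point arithmetic and tuned coefficients;
 * plain double precision is used here for readability. */
static double ln_range_reduced(double x)
{
    int e;
    double m = frexp(x, &e);      /* x = m * 2^e with m in [0.5, 1) */
    m *= 2.0;                     /* shift m into [1, 2) */
    e -= 1;

    /* Coarse truncated series for ln(1 + t) on t in [0, 1); a production
     * implementation would use a minimax polynomial or more terms. */
    double t = m - 1.0;
    double ln_m = t * (1.0 - t * (0.5 - t * (1.0 / 3.0 - t * 0.25)));

    return ln_m + (double)e * 0.6931471805599453;   /* + e * ln(2) */
}

int main(void)
{
    const double xs[] = { 0.125, 1.0, 3.5, 1000.0 };
    for (int i = 0; i < 4; i++)
        printf("x = %8.3f  reduced = %10.6f  libm = %10.6f\n",
               xs[i], ln_range_reduced(xs[i]), log(xs[i]));
    return 0;
}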
