Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.csl.2013.06.002
DC Field: Value
dc.title: Speaker state classification based on fusion of asymmetric simple partial least squares (SIMPLS) and support vector machines
dc.contributor.author: Huang, D.-Y.
dc.contributor.author: Zhang, Z.
dc.contributor.author: Ge, S.S.
dc.date.accessioned: 2014-10-07T04:36:37Z
dc.date.available: 2014-10-07T04:36:37Z
dc.date.issued: 2014-03
dc.identifier.citation: Huang, D.-Y., Zhang, Z., Ge, S.S. (2014-03). Speaker state classification based on fusion of asymmetric simple partial least squares (SIMPLS) and support vector machines. Computer Speech and Language 28 (2) : 392-414. ScholarBank@NUS Repository. https://doi.org/10.1016/j.csl.2013.06.002
dc.identifier.issn: 0885-2308
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/83045
dc.description.abstract: This paper presents our studies of the effects of acoustic features, speaker normalization methods, and statistical modeling techniques on speaker state classification. We focus on the effect of simple partial least squares (SIMPLS) in unbalanced binary classification. Beyond dimension reduction and low computational complexity, the SIMPLS classifier (SIMPLSC) shows notably higher prediction accuracy on the class with fewer samples. We therefore propose an asymmetric SIMPLS classifier (ASIMPLSC) to improve the performance of SIMPLSC on the class with more samples. Furthermore, we combine multiple system outputs (the ASIMPLS classifier and support vector machines) by score-level fusion to exploit the complementary information in the diverse systems. The proposed speaker state classification system is evaluated in several experiments on unbalanced data sets. Within the Interspeech 2011 Speaker State Challenge, we achieved the best results for the 2-class task of the Sleepiness Sub-Challenge, with an unweighted average recall of 71.7%. Further experiments on the SEMAINE data sets show that the ASIMPLSC achieves absolute improvements of 6.1%, 6.1%, 24.5%, and 1.3% in weighted average recall over the AVEC 2011 baseline system on the binary emotional speech classification tasks of four dimensions, namely activation, expectation, power, and valence, respectively. © 2013 Elsevier Inc. All rights reserved.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/j.csl.2013.06.002
dc.source: Scopus
dc.subject: Asymmetric SIMPLS
dc.subject: Fusion
dc.subject: Partial least squares
dc.subject: Sleepiness detection
dc.subject: Speaker state recognition
dc.subject: Speech emotion recognition
dc.subject: Support Vector Machine
dc.type: Article
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi: 10.1016/j.csl.2013.06.002
dc.description.sourcetitle: Computer Speech and Language
dc.description.volume: 28
dc.description.issue: 2
dc.description.page: 392-414
dc.description.coden: CSPLE
dc.identifier.isiut: 000329415400004
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
