Please use this identifier to cite or link to this item: https://doi.org/10.1007/978-3-642-02713-0_61
DC Field: Value
dc.title: Partially observable Markov decision process (POMDP) technologies for sign language based human-computer interaction
dc.contributor.author: Ong, S.C.W.
dc.contributor.author: Hsu, D.
dc.contributor.author: Lee, W.S.
dc.contributor.author: Kurniawati, H.
dc.date.accessioned: 2013-07-04T08:16:49Z
dc.date.available: 2013-07-04T08:16:49Z
dc.date.issued: 2009
dc.identifier.citation: Ong, S.C.W., Hsu, D., Lee, W.S., Kurniawati, H. (2009). Partially observable Markov decision process (POMDP) technologies for sign language based human-computer interaction. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 5616 LNCS (PART 3): 577-586. ScholarBank@NUS Repository. https://doi.org/10.1007/978-3-642-02713-0_61
dc.identifier.isbn: 3642027121
dc.identifier.issn: 0302-9743
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/40980
dc.description.abstract: Sign language (SL) recognition modules in human-computer interaction systems need to be both fast and reliable. In cases where multiple sets of features are extracted from the SL data, the recognition system can speed up processing by taking only a subset of extracted features as its input. However, this should not be realised at the expense of a drop in recognition accuracy. By training different recognizers for different subsets of features, we can formulate the problem as the task of planning the sequence of recognizer actions to apply to SL data, while accounting for the trade-off between recognition speed and accuracy. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for such planning problems. A POMDP explicitly models the probabilities of observing various outputs from the individual recognizers and thus maintains a probability distribution (or belief) over the set of possible SL input sentences. It then computes a policy that maps every belief to an action. This allows the system to select actions in real-time during online policy execution, adapting its behaviour according to the observations encountered. We illustrate the POMDP approach with a simple sentence recognition problem and show in experiments the advantages of this approach over "fixed action" systems that do not adapt their behaviour in real-time. © 2009 Springer Berlin Heidelberg.
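The belief maintenance the abstract describes is a standard Bayesian update: each recognizer action yields an observation, and the system reweights its distribution over candidate sentences by the probability of that observation under each hypothesis. The sketch below is purely illustrative and assumes hypothetical sentence labels, recognizer outputs, and probabilities; none of these values come from the paper itself.

```python
# Minimal sketch of a POMDP-style belief update over candidate SL sentences.
# All sentences, observation labels, and probabilities below are hypothetical.

def belief_update(belief, obs_model, observation):
    """Bayes rule: b'(s) is proportional to P(observation | s) * b(s)."""
    new_belief = {s: obs_model[s].get(observation, 0.0) * p
                  for s, p in belief.items()}
    total = sum(new_belief.values())
    if total == 0.0:
        raise ValueError("observation has zero probability under current belief")
    return {s: p / total for s, p in new_belief.items()}

# Uniform prior over two hypothetical candidate sentences.
belief = {"HELLO YOU": 0.5, "HELP ME": 0.5}

# Hypothetical output probabilities for one feature-subset recognizer:
# obs_model[sentence][output_label] = P(output_label | sentence).
obs_model = {
    "HELLO YOU": {"label_A": 0.8, "label_B": 0.2},
    "HELP ME":   {"label_A": 0.3, "label_B": 0.7},
}

# Applying the recognizer and observing "label_A" shifts the belief
# toward "HELLO YOU"; a policy would then map this new belief to the
# next action (run another recognizer, or commit to an output sentence).
belief = belief_update(belief, obs_model, "label_A")
```

In the paper's setting the policy is computed offline by a POMDP solver; this sketch only shows the belief bookkeeping that lets the system adapt its action sequence online.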
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/978-3-642-02713-0_61
dc.source: Scopus
dc.subject: Human-computer interaction
dc.subject: Planning under uncertainty
dc.subject: Sign language recognition
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1007/978-3-642-02713-0_61
dc.description.sourcetitle: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.description.volume: 5616 LNCS
dc.description.issue: PART 3
dc.description.page: 577-586
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.