Please use this identifier to cite or link to this item:
https://doi.org/10.1007/978-3-642-34475-6_79
DC Field | Value
---|---
dc.title | Towards IMACA: Intelligent multimodal affective conversational agent
dc.contributor.author | Hussain, A.
dc.contributor.author | Cambria, E.
dc.contributor.author | Mazzocco, T.
dc.contributor.author | Grassi, M.
dc.contributor.author | Wang, Q.-F.
dc.contributor.author | Durrani, T.
dc.date.accessioned | 2014-12-12T07:54:13Z
dc.date.available | 2014-12-12T07:54:13Z
dc.date.issued | 2012
dc.identifier.citation | Hussain, A., Cambria, E., Mazzocco, T., Grassi, M., Wang, Q.-F., Durrani, T. (2012). Towards IMACA: Intelligent multimodal affective conversational agent. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 7663 LNCS (PART 1): 656-663. ScholarBank@NUS Repository. https://doi.org/10.1007/978-3-642-34475-6_79
dc.identifier.isbn | 9783642344749
dc.identifier.issn | 0302-9743
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/116796
dc.description.abstract | A key aspect of achieving natural interaction in machines is multimodality. Besides verbal communication, humans also interact through many other channels, e.g., facial expressions, gestures, eye contact, posture, and voice tone. Such channels convey not only semantics but also emotional cues that are essential for interpreting the transmitted message. The importance of affective information, and the capability to properly manage it, is increasingly understood as fundamental to the development of a new generation of emotion-aware applications for scenarios such as e-learning, e-health, and human-computer interaction. To this end, this work investigates the adoption of different paradigms in the fields of text, vocal, and video analysis, in order to lay the basis for the development of an intelligent multimodal affective conversational agent. © 2012 Springer-Verlag.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1007/978-3-642-34475-6_79
dc.source | Scopus
dc.subject | AI
dc.subject | HCI
dc.subject | Multimodal Sentiment Analysis
dc.type | Conference Paper
dc.contributor.department | TEMASEK LABORATORIES
dc.description.doi | 10.1007/978-3-642-34475-6_79
dc.description.sourcetitle | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.description.volume | 7663 LNCS
dc.description.issue | PART 1
dc.description.page | 656-663
dc.identifier.isiut | NOT_IN_WOS
Appears in Collections: | Staff Publications |
Files in This Item:
There are no files associated with this item.