Title: Automated localization of affective objects and actions in images via caption text-cum-eye gaze analysis
Authors: Ramanathan, S.; Katti, H.; Huang, R.; Chua, T.-S.; Kankanhalli, M.
Keywords: Affect model for world concepts; Automated localization and labeling; Caption text-cum-eye gaze analysis
Citation: Ramanathan, S., Katti, H., Huang, R., Chua, T.-S., Kankanhalli, M. (2009). Automated localization of affective objects and actions in images via caption text-cum-eye gaze analysis. MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums: 729-732. ScholarBank@NUS Repository. https://doi.org/10.1145/1631272.1631399
Abstract: We propose a novel framework to localize and label affective objects and actions in images through a combination of text, visual, and gaze-based analysis. Human gaze provides useful cues to infer the locations and interactions of affective objects. While the concepts (labels) associated with an image can be determined from its caption, we demonstrate localization of these concepts by learning from a statistical affect model for world concepts. The affect model is derived from non-invasively acquired fixation patterns on labeled images, and guides localization of affective objects (faces, reptiles) and actions (look, read) from fixations in unlabeled images. Experimental results obtained on a database of 500 images confirm the effectiveness and promise of the proposed approach. Copyright 2009 ACM.
Source Title: MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.