Please use this identifier to cite or link to this item: https://doi.org/10.1145/1631272.1631399
Title: Automated localization of affective objects and actions in images via caption text-cum-eye gaze analysis
Authors: Ramanathan, S. 
Katti, H. 
Huang, R.
Chua, T.-S. 
Kankanhalli, M. 
Keywords: Affect model for world concepts; Automated localization and labeling; Caption text-cum-eye gaze analysis; Statistical model
Issue Date: 2009
Citation: Ramanathan, S., Katti, H., Huang, R., Chua, T.-S., Kankanhalli, M. (2009). Automated localization of affective objects and actions in images via caption text-cum-eye gaze analysis. MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums: 729-732. ScholarBank@NUS Repository. https://doi.org/10.1145/1631272.1631399
Abstract: We propose a novel framework to localize and label affective objects and actions in images through a combination of text-, visual-, and gaze-based analysis. Human gaze provides useful cues for inferring the locations and interactions of affective objects. While the concepts (labels) associated with an image can be determined from its caption, we show that these concepts can be localized by learning from a statistical affect model for world concepts. The affect model is derived from non-invasively acquired fixation patterns on labeled images, and it guides the localization of affective objects (faces, reptiles) and actions (look, read) from fixations in unlabeled images. Experimental results on a database of 500 images confirm the effectiveness and promise of the proposed approach. Copyright 2009 ACM.
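
The sketch below illustrates, in very simplified form, the gaze-based localization step the abstract describes: fixations are grouped spatially, and the most-attended group is proposed as the region for a caption-derived concept. This is a hypothetical illustration, not the paper's statistical affect model; the function names, the clustering radius, the duration weighting, and the toy fixation data are all assumptions made for the example.

# Hypothetical sketch of fixation-based concept localization.
# A fixation is (x, y, duration_ms); nearby fixations are grouped
# greedily, and the group with the largest total fixation duration
# is returned as the candidate region for the caption concept.
from math import hypot

def cluster_fixations(fixations, radius=60.0):
    """Greedy distance-threshold clustering of (x, y, dur) fixations."""
    clusters = []  # each cluster is a list of (x, y, dur) tuples
    for x, y, dur in fixations:
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            if hypot(x - cx, y - cy) <= radius:
                c.append((x, y, dur))
                break
        else:
            clusters.append([(x, y, dur)])
    return clusters

def localize_concept(fixations, concept, pad=20):
    """Return (concept, bounding box) for the most fixated cluster."""
    clusters = cluster_fixations(fixations)
    # Weight clusters by total fixation duration: affective objects
    # tend to attract longer and denser fixations.
    best = max(clusters, key=lambda c: sum(p[2] for p in c))
    xs, ys = [p[0] for p in best], [p[1] for p in best]
    box = (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
    return concept, box

# Toy usage: fixations concentrated around a face at roughly (320, 180),
# with one stray fixation elsewhere. Prints the concept and its box.
fix = [(318, 176, 240), (325, 185, 310), (312, 190, 180), (90, 400, 60)]
print(localize_concept(fix, "face"))

In the actual approach, the affect model learned from fixations on labeled images would inform which fixation cluster corresponds to which caption concept; the duration-weighted heuristic above merely stands in for that learned association.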
Source Title: MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums
URI: http://scholarbank.nus.edu.sg/handle/10635/41087
ISBN: 9781605586083
DOI: 10.1145/1631272.1631399
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
