Please use this identifier to cite or link to this item: https://doi.org/10.1145/1873951.1874047
DC Field: Value
dc.title: Making computers look the way we look: Exploiting visual attention for image understanding
dc.contributor.author: Katti, H.
dc.contributor.author: Subramanian, R.
dc.contributor.author: Kankanhalli, M.
dc.contributor.author: Sebe, N.
dc.contributor.author: Chua, T.-S.
dc.contributor.author: Ramakrishnan, K.R.
dc.date.accessioned: 2013-07-04T08:26:12Z
dc.date.available: 2013-07-04T08:26:12Z
dc.date.issued: 2010
dc.identifier.citation: Katti, H., Subramanian, R., Kankanhalli, M., Sebe, N., Chua, T.-S., Ramakrishnan, K.R. (2010). Making computers look the way we look: Exploiting visual attention for image understanding. MM'10 - Proceedings of the ACM Multimedia 2010 International Conference: 667-670. ScholarBank@NUS Repository. https://doi.org/10.1145/1873951.1874047
dc.identifier.isbn: 9781605589336
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/41381
dc.description.abstract: Human visual attention (HVA) is an important strategy for focusing on specific information while observing and understanding visual stimuli. HVA involves making a series of fixations on select locations while performing tasks such as object recognition and scene understanding. We present one of the first works that combines fixation information with automated concept detectors to (i) infer abstract image semantics, and (ii) enhance the performance of object detectors. We develop visual attention-based models that sample fixation distributions and fixation-transition distributions in regions of interest (ROIs) to infer abstract semantics such as expressive faces and interactions (such as look, read, etc.). We also exploit eye-gaze information to deduce the possible locations and scale of salient concepts and aid state-of-the-art detectors. We achieve an 18% performance increase with over an 80% reduction in computational time for a state-of-the-art object detector [4]. © 2010 ACM.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1145/1873951.1874047
dc.source: Scopus
dc.subject: abstract
dc.subject: eye-tracker
dc.subject: fixations
dc.subject: salient regions
dc.subject: visual attention
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1145/1873951.1874047
dc.description.sourcetitle: MM'10 - Proceedings of the ACM Multimedia 2010 International Conference
dc.description.page: 667-670
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

Scopus™ Citations: 11 (checked on Nov 23, 2022)

Page view(s): 116 (checked on Nov 24, 2022)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.