Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.sigpro.2012.06.014
Title: Learning saliency-based visual attention: A review
Authors: Zhao, Q. 
Koch, C.
Keywords: Central fixation bias
Feature representation
Machine learning
Public eye tracking datasets
Visual attention
Issue Date: Jun-2013
Citation: Zhao, Q., Koch, C. (2013-06). Learning saliency-based visual attention: A review. Signal Processing 93 (6): 1401-1407. ScholarBank@NUS Repository. https://doi.org/10.1016/j.sigpro.2012.06.014
Abstract: Humans and other primates shift their gaze to allocate processing resources to a subset of the visual input. Understanding and emulating the way that human observers free-view a natural scene has both scientific and economic impact, and it has therefore attracted the attention of researchers in a wide range of science and engineering disciplines. With ever-increasing computational power, machine learning has become a popular tool for mining human data in the exploration of how people direct their gaze when inspecting a visual scene. This paper reviews recent advances in learning saliency-based visual attention and discusses several key issues in this topic. © 2012 Elsevier B.V.
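Note: The sketch below is an illustrative, generic example of the learning-based saliency paradigm the review surveys (train a classifier to separate fixated from non-fixated image locations using feature maps), not the authors' specific method. The feature maps, dataset, and classifier choice (logistic regression on toy data) are hypothetical placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def toy_feature_maps(height, width, n_features=3):
    """Stand-in for low-level feature maps (e.g., intensity, color, orientation)."""
    return rng.random((n_features, height, width))

def sample_training_data(features, fixations, n_negatives=200):
    """Positives: feature vectors at fixated pixels; negatives: random pixels."""
    n_features, h, w = features.shape
    pos = np.array([features[:, y, x] for (y, x) in fixations])
    neg_idx = rng.integers(0, h * w, size=n_negatives)
    neg = features.reshape(n_features, -1).T[neg_idx]
    X = np.vstack([pos, neg])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return X, y

# Toy image and fixation data; in practice these would come from public
# eye-tracking datasets, as the keywords note.
h, w = 60, 80
features = toy_feature_maps(h, w)
fixations = [(rng.integers(0, h), rng.integers(0, w)) for _ in range(50)]

X, y = sample_training_data(features, fixations)
clf = LogisticRegression().fit(X, y)

# Predicted saliency map: per-pixel probability of being fixated.
saliency = clf.predict_proba(features.reshape(features.shape[0], -1).T)[:, 1]
saliency = saliency.reshape(h, w)
print(saliency.shape, saliency.min(), saliency.max())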
Source Title: Signal Processing
URI: http://scholarbank.nus.edu.sg/handle/10635/56478
ISSN: 0165-1684
DOI: 10.1016/j.sigpro.2012.06.014
Appears in Collections: Staff Publications
