Please use this identifier to cite or link to this item: https://doi.org/10.1167/12.6.22
Title: Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost
Authors: Zhao, Q. 
Koch, C.
Keywords: AdaBoost
Computational saliency model
Feature integration
Issue Date: 2012
Citation: Zhao, Q., & Koch, C. (2012). Learning visual saliency by combining feature maps in a nonlinear manner using AdaBoost. Journal of Vision, 12(6), 22. ScholarBank@NUS Repository. https://doi.org/10.1167/12.6.22
Abstract: To predict where subjects look under natural viewing conditions, biologically inspired saliency models decompose visual input into a set of feature maps across spatial scales. The outputs of these feature maps are summed to yield the final saliency map. We study the integration of bottom-up feature maps across multiple spatial scales using eye movement data from four recent eye-tracking datasets. We use AdaBoost as the central computational module, which handles feature selection, thresholding, weight assignment, and integration in a principled, nonlinear learning framework. By combining the outputs of feature maps via a series of nonlinear classifiers, the new model consistently predicts eye movements better than any of its competitors. © 2012 ARVO.
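The abstract describes training AdaBoost to combine per-pixel feature-map responses into a saliency score. The sketch below is a hypothetical illustration of that idea, not the authors' code: it uses scikit-learn's AdaBoostClassifier (the `estimator` keyword assumes scikit-learn >= 1.2) with decision stumps, which play the role of learned thresholds on individual feature maps, and synthetic placeholder data standing in for real feature maps and eye-tracking fixation labels.

```python
# Illustrative sketch only: nonlinear combination of saliency feature maps
# with AdaBoost. Feature maps and fixation labels are synthetic placeholders;
# a real pipeline would compute color/intensity/orientation maps across
# spatial scales and derive labels from recorded fixations.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
H, W, n_maps = 32, 48, 6  # image grid size and number of feature maps

# Stand-in feature maps: one channel per bottom-up feature/scale.
feature_maps = rng.random((n_maps, H, W))

# Stand-in fixation labels: 1 = fixated pixel, 0 = not fixated.
fixated = (feature_maps[0] + 0.3 * rng.standard_normal((H, W))) > 0.8

# Each pixel is one training sample whose feature vector is the set of
# feature-map responses at that location.
X = feature_maps.reshape(n_maps, -1).T   # shape (H*W, n_maps)
y = fixated.ravel().astype(int)

# Depth-1 trees (stumps) threshold a single feature map each; boosting
# jointly performs feature selection, thresholding, and weight assignment.
model = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=50,
)
model.fit(X, y)

# The classifier's continuous score over all pixels is the saliency map.
saliency = model.decision_function(X).reshape(H, W)
print(saliency.shape, float(saliency.min()), float(saliency.max()))
```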
Source Title: Journal of Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/56481
ISSN: 1534-7362
DOI: 10.1167/12.6.22
Appears in Collections: Staff Publications
