|Title:||Visual Attention Prediction Using Saliency Determination of Scene Understanding for Social Robots|
|Citation:||He, H., Ge, S.S., Zhang, Z. (2011-11). Visual Attention Prediction Using Saliency Determination of Scene Understanding for Social Robots. International Journal of Social Robotics 3 (4) : 457-468. ScholarBank@NUS Repository. https://doi.org/10.1007/s12369-011-0105-z|
|Abstract:||In this paper, the biological ability of visual attention is modeled for social robots to understand scenes and circumstances. Visual attention is determined by evaluating visual stimuli and prior knowledge through intelligent saliency search. Visual stimuli are measured using information entropy and biological color sensitivities, where the information entropy evaluates information quality and the color sensitivity assesses the biological attraction of a presented scene. We also learn and utilize prior knowledge of people's focus to predict visual attention. The performance of the proposed technique is studied on different sorts of natural scenes and evaluated against fixation data from an actual eye-tracking database. The experimental results demonstrate the effectiveness of the proposed technique in discovering salient regions and predicting visual attention. The robustness of the proposed technique to transformation and illumination variance is also investigated. Social robots equipped with the proposed technique can autonomously determine their attention to a scene so as to behave naturally in human-robot interaction. © Springer Science & Business Media BV 2011.|
|Source Title:||International Journal of Social Robotics|
|Appears in Collections:||Staff Publications|
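The abstract's entropy-based stimulus measure can be illustrated with a minimal sketch: computing per-patch Shannon entropy of image intensities as a saliency score, so uniform regions score low and textured regions score high. The patch size, histogram binning, and function name below are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def entropy_saliency(gray, patch=8):
    """Local-entropy saliency map: Shannon entropy of intensities per patch.

    A hedged sketch of entropy-based visual-stimulus measurement; the
    8x8 patch and 16-bin histogram are assumptions for illustration.
    """
    h, w = gray.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(sal.shape[0]):
        for j in range(sal.shape[1]):
            block = gray[i * patch:(i + 1) * patch,
                         j * patch:(j + 1) * patch]
            hist, _ = np.histogram(block, bins=16, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]                       # drop empty bins (0 log 0 = 0)
            sal[i, j] = -np.sum(p * np.log2(p))  # entropy in bits
    return sal

# A uniform patch carries no information; a noisy patch attracts attention.
flat = np.zeros((8, 8))
noisy = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
s = entropy_saliency(np.hstack([flat, noisy]), patch=8)
# s[0, 0] == 0.0 (uniform region); s[0, 1] > 0 (textured region)
```

In this framing, the entropy map would be one stimulus channel, combined with color-sensitivity and learned prior-knowledge cues to form the final attention prediction.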