DC Field: Value
dc.title: Ultra-rapid object categorization in real-world scenes with top-down manipulations
dc.contributor.author: Xu, B.
dc.contributor.author: Kankanhalli, M.S.
dc.contributor.author: Zhao, Q.
dc.identifier.citation: Xu, B., Kankanhalli, M.S., Zhao, Q. (2019). Ultra-rapid object categorization in real-world scenes with top-down manipulations. PLoS ONE 14 (4): e0214444. ScholarBank@NUS Repository.
dc.description.abstract: Humans achieve visual object recognition rapidly and effortlessly. Object categorization is commonly believed to arise from the interaction between bottom-up and top-down cognitive processing. In the ultra-rapid categorization scenario, where the stimuli appear briefly and response time is limited, a first sweep of feedforward information is assumed to be sufficient to discriminate whether or not an object is present in a scene. However, whether and how feedback/top-down processing is involved within such a brief duration remains an open question. To this end, we examine how different top-down manipulations, such as category level, category type, and real-world size, interact in ultra-rapid categorization. We have constructed a dataset of real-world scene images with a built-in measurement of target object display size. Based on this set of images, we have measured ultra-rapid object categorization performance in human subjects. Standard feedforward computational models representing scene features and a state-of-the-art object detection model were employed for auxiliary investigation. The results showed influences of 1) animacy (animal, vehicle, food), 2) level of abstraction (people, sport), and 3) real-world size (four target size levels) on ultra-rapid categorization processes. These findings support the involvement of top-down processing when rapidly categorizing certain objects, such as sport at a fine-grained level. Our human vs. model comparisons also shed light on a possible collaboration and integration of the two that may be of interest to both experimental and computational vision research. All the collected images and behavioral data, as well as code and models, are publicly available. © 2019 Xu et al.
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
dc.publisher: Public Library of Science
dc.rights: Attribution 4.0 International
dc.source: Scopus OA2019
dc.contributor.department: DEPT OF COMPUTER SCIENCE
dc.description.sourcetitle: PLoS ONE
Appears in Collections: Staff Publications

Files in This Item:
File: 10_1371_journal_pone_0214444.pdf (1.92 MB, Adobe PDF)