Please use this identifier to cite or link to this item: https://doi.org/10.1371/journal.pone.0214444
Title: Ultra-rapid object categorization in real-world scenes with top-down manipulations
Authors: Xu, B.
Kankanhalli, M.S. 
Zhao, Q.
Issue Date: 2019
Publisher: Public Library of Science
Citation: Xu, B., Kankanhalli, M.S., Zhao, Q. (2019). Ultra-rapid object categorization in real-world scenes with top-down manipulations. PLoS ONE 14(4): e0214444. ScholarBank@NUS Repository. https://doi.org/10.1371/journal.pone.0214444
Rights: Attribution 4.0 International
Abstract: Humans achieve visual object recognition rapidly and effortlessly. Object categorization is commonly believed to be achieved by interaction between bottom-up and top-down cognitive processing. In the ultra-rapid categorization scenario, where the stimuli appear briefly and response time is limited, it is assumed that a first sweep of feedforward information is sufficient to discriminate whether or not an object is present in a scene. However, whether and how feedback/top-down processing is involved in such a brief duration remains an open question. To this end, we examine how different top-down manipulations, such as category level, category type, and real-world size, interact in ultra-rapid categorization. We constructed a dataset comprising real-world scene images with a built-in measurement of target object display size. Based on this set of images, we measured ultra-rapid object categorization performance in human subjects. Standard feedforward computational models representing scene features and a state-of-the-art object detection model were employed for auxiliary investigation. The results showed the influences of 1) animacy (animal, vehicle, food), 2) level of abstraction (people, sport), and 3) real-world size (four target size levels) on ultra-rapid categorization processes. These results support the involvement of top-down processing when rapidly categorizing certain objects, such as sport at a fine-grained level. Our human vs. model comparisons also shed light on possible collaboration and integration of the two, which may be of interest to both experimental and computational vision research. All the collected images and behavioral data, as well as code and models, are publicly available at https://osf.io/mqwjz/. © 2019 Xu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
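The feedforward baseline mentioned in the abstract can be pictured with a short sketch: a pretrained feedforward CNN processes a scene image in a single forward pass, and its class probabilities are pooled into a superordinate "animal present" score. This is only an illustration under stated assumptions (a standard torchvision ResNet-50 as a stand-in feedforward model, a hypothetical image file name, and the common convention that ImageNet-1k indices below 398 are animal classes); it is not the authors' released code, which is available with the data at https://osf.io/mqwjz/.

import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained feedforward CNN (assumption: ResNet-50 stands in for the
# "standard feedforward computational models" described in the abstract).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def animal_presence_score(image_path: str) -> float:
    """Sum softmax probability mass over ImageNet animal classes (indices 0-397)."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)                 # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1).squeeze(0)
    # Assumption: ImageNet-1k classes 0-397 are treated as "animal" to form a
    # superordinate animal vs. non-animal decision from one feedforward sweep.
    return probs[:398].sum().item()

if __name__ == "__main__":
    score = animal_presence_score("scene_0001.jpg")  # hypothetical image file
    print(f"animal present: {score > 0.5} (p = {score:.3f})")

A single threshold on this pooled probability yields a purely feedforward "object present / absent" decision that could then be compared against human accuracy in the corresponding superordinate condition.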
Source Title: PLoS ONE
URI: https://scholarbank.nus.edu.sg/handle/10635/209984
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0214444
Appears in Collections: Staff Publications; Elements

Files in This Item:
10_1371_journal_pone_0214444.pdf (1.92 MB, Adobe PDF, Open Access)


This item is licensed under a Creative Commons Attribution 4.0 International License.