Please use this identifier to cite or link to this item: https://doi.org/10.1145/3323873.3325056
Title: Annotating Objects and Relations in User-Generated Videos
Authors: Xindi Shang 
Donglin Di
Junbin Xiao
Yu Cao 
Xun Yang 
Tat-Seng Chua 
Keywords: Dataset
Object recognition
Video annotation
Video content analysis
Visual relation recognition
Issue Date: 10-Jun-2019
Citation: Xindi Shang, Donglin Di, Junbin Xiao, Yu Cao, Xun Yang, Tat-Seng Chua (2019-06-10). Annotating Objects and Relations in User-Generated Videos. ICMR 2019 : 279-287. ScholarBank@NUS Repository. https://doi.org/10.1145/3323873.3325056
Abstract: Understanding objects and the relations between them is indispensable to fine-grained video content analysis, a topic widely studied in recent multimedia and computer vision research. However, existing works are limited to evaluation on either small datasets or indirect metrics, such as performance on images. The underlying reason is that constructing a large-scale video dataset with dense annotations is difficult and costly. In this paper, we address the main issues in annotating objects and relations in user-generated videos, and propose an annotation pipeline that can be executed at modest cost. As a result, we present a new dataset, named VidOR, consisting of 10k videos (84 hours) together with dense annotations that localize 80 categories of objects and 50 categories of predicates in each video. We have made the training and validation sets public and extendable to more tasks to facilitate future research on video object and relation recognition. © 2019 Association for Computing Machinery.
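
Note: This record does not describe the annotation file format. As a minimal sketch, assuming a hypothetical per-video JSON layout in which object trajectories and relation instances are stored under fields named "subject/objects" and "relation_instances" (these names and the directory path are assumptions, not the official VidOR schema), the following Python shows how one might tally annotated objects and relations across the dataset.

```python
import json
from pathlib import Path

# Hypothetical annotation layout: one JSON file per video.
# Field names below are assumptions, not the official VidOR schema.
def summarize_annotation(path):
    """Count annotated objects and relation instances in one video's file."""
    with open(path) as f:
        ann = json.load(f)

    # Assumed fields: "subject/objects" lists annotated objects with a
    # category label; "relation_instances" lists predicate instances that
    # link a subject and an object over a span of frames.
    objects = ann.get("subject/objects", [])
    relations = ann.get("relation_instances", [])
    categories = {obj.get("category") for obj in objects}
    predicates = {rel.get("predicate") for rel in relations}
    return {
        "num_objects": len(objects),
        "num_relations": len(relations),
        "object_categories": sorted(c for c in categories if c),
        "predicates": sorted(p for p in predicates if p),
    }

if __name__ == "__main__":
    # Aggregate simple statistics over a directory of annotation files.
    root = Path("vidor/annotations/training")  # placeholder path
    totals = {"videos": 0, "objects": 0, "relations": 0}
    for ann_file in root.glob("**/*.json"):
        stats = summarize_annotation(ann_file)
        totals["videos"] += 1
        totals["objects"] += stats["num_objects"]
        totals["relations"] += stats["num_relations"]
    print(totals)
```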
Source Title: ICMR 2019
URI: https://scholarbank.nus.edu.sg/handle/10635/167711
ISBN: 9781450367653
DOI: 10.1145/3323873.3325056
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: 3323873.3325056.pdf
Size: 3.42 MB
Format: Adobe PDF
Access Settings: OPEN

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.