Please use this identifier to cite or link to this item: https://doi.org/10.1109/tpami.2019.2919301
DC Field: Value
dc.title: DART: Distribution Aware Retinal Transform for Event-based Cameras
dc.contributor.author: Ramesh, Bharath
dc.contributor.author: Yang, Hong
dc.contributor.author: Orchard, Garrick Michael
dc.contributor.author: Le Thi, Ngoc Anh
dc.contributor.author: Zhang, Shihao
dc.contributor.author: Xiang, Cheng
dc.date.accessioned: 2019-07-22T01:28:55Z
dc.date.available: 2019-07-22T01:28:55Z
dc.date.issued: 2019-05-27
dc.identifier.citation: Ramesh, Bharath; Yang, Hong; Orchard, Garrick Michael; Le Thi, Ngoc Anh; Zhang, Shihao; Xiang, Cheng (2019-05-27). DART: Distribution Aware Retinal Transform for Event-based Cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence abs/1710.10800: 1-1. ScholarBank@NUS Repository. https://doi.org/10.1109/tpami.2019.2919301
dc.identifier.issn: 0162-8828
dc.identifier.issn: 1939-3539
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/156799
dc.description.abstract: We introduce a generic visual descriptor, termed the distribution-aware retinal transform (DART), that encodes the structural context of events using log-polar grids for event cameras. The DART descriptor is applied to four problems, namely object classification, tracking, detection, and feature matching: (1) The DART features are employed directly as local descriptors in a bag-of-features classification framework, with testing carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, N-Caltech101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) to overcome the low-sample problem in one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) to achieve tracker robustness, the scale and rotation equivariance of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker, yielding a high intersection-over-union score against augmented ground-truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, a setting that had not been explicitly tackled in the event-based vision domain.
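The log-polar encoding at the heart of DART can be illustrated with a small sketch. The code below histograms event coordinates into logarithmically spaced radial rings and uniform angular wedges around a reference point; the function name, bin counts, and radius limits are illustrative assumptions, not the paper's actual parameters or implementation.

```python
import numpy as np

def log_polar_bins(events, center, num_rings=8, num_wedges=16,
                   r_min=1.0, r_max=30.0):
    """Histogram event (x, y) coordinates into a log-polar grid around `center`.

    Hypothetical illustration of log-polar binning; defaults are assumptions.
    """
    ev = np.asarray(events, dtype=float) - np.asarray(center, dtype=float)
    r = np.hypot(ev[:, 0], ev[:, 1])
    theta = np.arctan2(ev[:, 1], ev[:, 0])  # angle in [-pi, pi]

    # Radial rings equally spaced in log(r) between r_min and r_max.
    r = np.clip(r, r_min, r_max - 1e-9)
    ring = (np.log(r / r_min) / np.log(r_max / r_min) * num_rings).astype(int)

    # Uniform angular wedges over the full circle.
    wedge = ((theta + np.pi) / (2 * np.pi) * num_wedges).astype(int) % num_wedges

    hist = np.zeros((num_rings, num_wedges))
    np.add.at(hist, (ring, wedge), 1)  # accumulate one count per event
    return hist / max(len(ev), 1)      # normalize to a descriptor
```

Logarithmic radial spacing gives fine resolution near the center and coarse resolution at the periphery, which is what makes such descriptors tolerant to scale change: a scaled pattern shifts rings rather than scattering across bins.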
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.source: Elements
dc.subject: cs.CV
dc.type: Article
dc.date.updated: 2019-07-21T07:40:50Z
dc.contributor.department: ELECTRICAL AND COMPUTER ENGINEERING
dc.contributor.department: TEMASEK LABORATORIES
dc.description.doi: 10.1109/tpami.2019.2919301
dc.description.sourcetitle: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.description.volume: abs/1710.10800
dc.description.page: 1-1
dc.published.state: Published
Appears in Collections: Staff Publications
Elements

Files in This Item:
File: PAMI-2019.pdf
Description: Published version
Size: 1.88 MB
Format: Adobe PDF
Access Settings: CLOSED
Version: Published

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.