Please use this identifier to cite or link to this item:
https://doi.org/10.1109/tpami.2019.2919301
DC Field | Value
---|---
dc.title | DART: Distribution Aware Retinal Transform for Event-based Cameras
dc.contributor.author | Ramesh, Bharath
dc.contributor.author | Yang, Hong
dc.contributor.author | Orchard, Garrick Michael
dc.contributor.author | Le Thi, Ngoc Anh
dc.contributor.author | Zhang, Shihao
dc.contributor.author | Xiang, Cheng
dc.date.accessioned | 2019-07-22T01:28:55Z
dc.date.available | 2019-07-22T01:28:55Z
dc.date.issued | 2019-05-27
dc.identifier.citation | Ramesh, Bharath, Yang, Hong, Orchard, Garrick Michael, Le Thi, Ngoc Anh, Zhang, Shihao, Xiang, Cheng (2019-05-27). DART: Distribution Aware Retinal Transform for Event-based Cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence abs/1710.10800 : 1-1. ScholarBank@NUS Repository. https://doi.org/10.1109/tpami.2019.2919301
dc.identifier.issn | 0162-8828
dc.identifier.issn | 1939-3539
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/156799
dc.description.abstract | We introduce a generic visual descriptor, termed as distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-features classification framework and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) For overcoming the low-sample problem for the one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) To achieve tracker robustness, the scale and rotation equivariance property of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to result in a high intersection-over-union score with augmented ground truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain.
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE)
dc.source | Elements
dc.subject | cs.CV
dc.type | Article
dc.date.updated | 2019-07-21T07:40:50Z
dc.contributor.department | ELECTRICAL AND COMPUTER ENGINEERING
dc.contributor.department | TEMASEK LABORATORIES
dc.description.doi | 10.1109/tpami.2019.2919301
dc.description.sourcetitle | IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.description.volume | abs/1710.10800
dc.description.page | 1-1
dc.published.state | Published
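The abstract describes encoding the structural context of events on a log-polar grid. As a rough illustration only (not the authors' implementation), the sketch below builds a normalized histogram of neighboring events over log-spaced rings and angular wedges; the function names and grid parameters (`n_rings`, `n_wedges`, `r_max`) are assumptions for this example.

```python
import numpy as np

def log_polar_bins(dx, dy, n_rings=4, n_wedges=8, r_max=16.0):
    """Map pixel offsets (dx, dy) to (ring, wedge) indices on a log-polar grid."""
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)  # angle in [-pi, pi]
    # Log-spaced ring edges from 1 px out to r_max.
    edges = np.logspace(0.0, np.log10(r_max), n_rings + 1)
    ring = np.clip(np.searchsorted(edges, r, side="right") - 1, 0, n_rings - 1)
    wedge = ((theta + np.pi) / (2 * np.pi) * n_wedges).astype(int) % n_wedges
    return ring, wedge

def log_polar_descriptor(events_xy, center, n_rings=4, n_wedges=8, r_max=16.0):
    """Normalized log-polar histogram of events around `center` (illustrative)."""
    dx = events_xy[:, 0] - center[0]
    dy = events_xy[:, 1] - center[1]
    r = np.hypot(dx, dy)
    mask = (r > 0) & (r <= r_max)  # drop the center pixel and far-away events
    ring, wedge = log_polar_bins(dx[mask], dy[mask], n_rings, n_wedges, r_max)
    hist = np.zeros((n_rings, n_wedges))
    np.add.at(hist, (ring, wedge), 1.0)
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()
```

Because the ring spacing is logarithmic and the wedges are uniform in angle, a scale change or rotation of the event pattern shifts the histogram along the ring or wedge axis rather than destroying it, which is the equivariance property the abstract exploits for one-shot tracking.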
Appears in Collections: Staff Publications Elements
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
PAMI-2019.pdf | Published version | 1.88 MB | Adobe PDF | CLOSED | Published
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.