Please use this identifier to cite or link to this item: https://doi.org/10.3389/fnins.2020.00135
DC Field: Value
dc.title: Low-Power Dynamic Object Detection and Classification With Freely Moving Event Cameras
dc.contributor.author: Ramesh, B.
dc.contributor.author: Ussa, A.
dc.contributor.author: Della Vedova, L.
dc.contributor.author: Yang, H.
dc.contributor.author: Orchard, G.
dc.date.accessioned: 2021-08-24T02:37:34Z
dc.date.available: 2021-08-24T02:37:34Z
dc.date.issued: 2020
dc.identifier.citation: Ramesh, B., Ussa, A., Della Vedova, L., Yang, H., Orchard, G. (2020). Low-Power Dynamic Object Detection and Classification With Freely Moving Event Cameras. Frontiers in Neuroscience, 14, 135. ScholarBank@NUS Repository. https://doi.org/10.3389/fnins.2020.00135
dc.identifier.issn: 1662-4548
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/198942
dc.description.abstract: We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Event-based object recognition systems still lag considerably behind their traditional-camera counterparts in accuracy and algorithmic maturity. To this end, this paper presents an event-based feature extraction method that accumulates local activity across the image frame and then applies principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching that exploits the low dimensionality of the feature representation. The proposed k-d tree mechanism also allows for feature selection, yielding a lower-dimensional object representation when hardware resources are too limited to implement PCA. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device with a high performance-to-resource ratio. The system is tested on real-world event-based datasets for object categorization, showing superior classification performance compared to state-of-the-art algorithms. Additionally, we verified the real-time FPGA performance of the proposed object detection method, trained with limited data as opposed to deep learning methods, under a closed-loop aerial vehicle flight mode. We also compare the proposed object categorization framework to pre-trained convolutional neural networks using transfer learning and highlight the drawbacks of frame-based sensors under dynamic camera motion. Finally, we provide critical insights into how the feature extraction method and the classification parameters affect system performance, which helps tailor the framework to various low-power (less than a few watts) application scenarios. Copyright © 2020 Ramesh, Ussa, Della Vedova, Yang and Orchard. (Illustrative sketches of the feature extraction and matching steps appear after this field list.)
dc.publisher: Frontiers Media S.A.
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: Scopus OA2020
dc.subject: closed-loop control
dc.subject: event-based descriptor
dc.subject: FIFO processing
dc.subject: low-power FPGA
dc.subject: neuromorphic vision
dc.subject: object detection
dc.subject: object recognition
dc.subject: rectangular grid
dc.type: Article
dc.contributor.department: LIFE SCIENCES INSTITUTE
dc.contributor.department: TEMASEK LABORATORIES
dc.description.doi: 10.3389/fnins.2020.00135
dc.description.sourcetitle: Frontiers in Neuroscience
dc.description.volume: 14
dc.description.page: 135
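
The feature extraction described in the abstract (accumulate local event activity, normalize the neighborhood, project onto a PCA basis) can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the authors' implementation: the (x, y) event format, the patch radius, the L2 normalization, the descriptor size k, and the names learn_pca_basis/extract_descriptor are all placeholders.

```python
import numpy as np

def learn_pca_basis(training_patches, k=10):
    # Learn a (k, D) PCA basis from flattened training patches (M, D)
    # via SVD of the mean-centered data matrix (offline step).
    X = training_patches - training_patches.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k]

def extract_descriptor(events, center, pca_basis=None, patch_radius=8):
    # Accumulate local event activity in a (2r+1) x (2r+1) neighborhood
    # around `center`, normalize it, and optionally project it onto a
    # PCA basis learned offline with learn_pca_basis().
    cx, cy = center
    size = 2 * patch_radius + 1
    patch = np.zeros((size, size), dtype=np.float32)
    for x, y in events:  # events: iterable of (x, y) pixel coordinates
        dx, dy = x - cx, y - cy
        if abs(dx) <= patch_radius and abs(dy) <= patch_radius:
            patch[dy + patch_radius, dx + patch_radius] += 1.0
    vec = patch.ravel()
    norm = np.linalg.norm(vec)
    if norm > 0.0:
        vec /= norm  # assumed L2 normalization: invariant to event rate
    return pca_basis @ vec if pca_basis is not None else vec
```

When PCA is too costly for the target FPGA, the paper instead obtains a lower-dimensional representation via feature selection through the k-d tree; that variant is omitted here for brevity.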
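
The backtracking-free k-d tree matching can be sketched similarly: a conventional median-split build, but the query makes a single root-to-leaf descent and keeps the best candidate seen along the way, never revisiting sibling branches. The comparison count per query is then fixed at the tree depth, which suits a hardware pipeline, at the cost of returning an approximate rather than exact nearest neighbor. Again, the names and data layout below are assumptions, not the paper's code.

```python
import numpy as np

class KDNode:
    __slots__ = ("point", "label", "axis", "left", "right")
    def __init__(self, point, label, axis, left, right):
        self.point, self.label = point, label
        self.axis, self.left, self.right = axis, left, right

def build_kdtree(points, labels, depth=0):
    # Standard median split along cycling axes; points is an (N, D)
    # array of descriptors, labels a matching list of class labels.
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis])
    points = points[order]
    labels = [labels[i] for i in order]
    mid = len(points) // 2
    return KDNode(points[mid], labels[mid], axis,
                  build_kdtree(points[:mid], labels[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], labels[mid + 1:], depth + 1))

def match(root, query):
    # Backtracking-free lookup: one root-to-leaf descent, keeping the
    # best squared-distance candidate seen on the way down. No sibling
    # branch is ever revisited, so the cost is bounded by tree depth.
    best_label, best_dist = None, np.inf
    node = root
    while node is not None:
        d = float(np.sum((node.point - query) ** 2))
        if d < best_dist:
            best_label, best_dist = node.label, d
        node = node.left if query[node.axis] < node.point[node.axis] else node.right
    return best_label
```

For example, tree = build_kdtree(descriptors, labels) followed by match(tree, query_descriptor) returns the stored label of the best candidate found on the query's single descent path.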
Appears in Collections: Staff Publications, Elements

Files in This Item:
File: 10_3389_fnins_2020_00135.pdf (1.32 MB, Adobe PDF)
Access Settings: OPEN
Version: None
This item is licensed under a Creative Commons License.