Please use this identifier to cite or link to this item: https://doi.org/10.3389/fnins.2020.00135
Title: Low-Power Dynamic Object Detection and Classification With Freely Moving Event Cameras
Authors: Ramesh, B.
Ussa, A. 
Della Vedova, L. 
Yang, H. 
Orchard, G. 
Keywords: closed-loop control
event-based descriptor
FIFO processing
low-power FPGA
neuromorphic vision
object detection
object recognition
rectangular grid
Issue Date: 2020
Publisher: Frontiers Media S.A.
Citation: Ramesh, B., Ussa, A., Della Vedova, L., Yang, H., Orchard, G. (2020). Low-Power Dynamic Object Detection and Classification With Freely Moving Event Cameras. Frontiers in Neuroscience 14: 135. ScholarBank@NUS Repository. https://doi.org/10.3389/fnins.2020.00135
Rights: Attribution 4.0 International
Abstract: We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Compared to their frame-based counterparts, event-based object recognition systems are considerably behind in terms of accuracy and algorithmic maturity. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching by taking advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional object representation when hardware resources are limited to implement PCA. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance compared to state-of-the-art algorithms. Additionally, we verified the real-time FPGA performance of the proposed object detection method, trained with limited data as opposed to deep learning methods, under a closed-loop aerial vehicle flight mode. We also compare the proposed object categorization framework to pre-trained convolutional neural networks using transfer learning and highlight the drawbacks of using frame-based sensors under dynamic camera motion. Finally, we provide critical insights into how the feature extraction method and the classification parameters affect system performance, which aids in tailoring the framework to various low-power (less than a few watts) application scenarios. Copyright © 2020 Ramesh, Ussa, Della Vedova, Yang and Orchard.
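Note: The pipeline described in the abstract (accumulate local event activity, normalize the neighborhood, project with PCA, then match with a backtracking-free k-d tree) can be illustrated with a short sketch. The Python below is a minimal illustration under stated assumptions, not the authors' implementation: the patch size (PATCH), descriptor dimensionality (N_DIMS), function names, and the PCA basis are hypothetical placeholders.

    import numpy as np

    PATCH = 7    # assumed square neighborhood size (odd)
    N_DIMS = 10  # assumed descriptor dimensionality after PCA

    def update_activity(activity, x, y):
        # Accumulate local event activity on the image plane.
        activity[y, x] += 1.0

    def descriptor(activity, x, y, pca_basis):
        # Normalize the neighborhood around (x, y), then project it with a
        # precomputed PCA basis of shape (N_DIMS, PATCH * PATCH). Assumes
        # the event lies at least PATCH // 2 pixels from the image border.
        r = PATCH // 2
        patch = activity[y - r:y + r + 1, x - r:x + r + 1].ravel().astype(float)
        norm = np.linalg.norm(patch)
        if norm > 0:
            patch = patch / norm
        return pca_basis @ patch

    class KDNode:
        def __init__(self, point, label, axis, left, right):
            self.point, self.label, self.axis = point, label, axis
            self.left, self.right = left, right

    def build_kd(points, labels, depth=0):
        # Standard k-d tree over the low-dimensional training descriptors.
        if len(points) == 0:
            return None
        axis = depth % points.shape[1]
        order = np.argsort(points[:, axis])
        points = points[order]
        labels = [labels[i] for i in order]
        mid = len(points) // 2
        return KDNode(points[mid], labels[mid], axis,
                      build_kd(points[:mid], labels[:mid], depth + 1),
                      build_kd(points[mid + 1:], labels[mid + 1:], depth + 1))

    def match_no_backtrack(node, query):
        # Backtracking-free lookup: a single root-to-leaf descent. The best
        # point seen along the path is returned as the approximate nearest
        # neighbor; sibling subtrees are never revisited, keeping latency
        # fixed and the hardware datapath simple.
        best_label, best_dist = None, np.inf
        while node is not None:
            dist = np.linalg.norm(query - node.point)
            if dist < best_dist:
                best_label, best_dist = node.label, dist
            node = node.left if query[node.axis] < node.point[node.axis] else node.right
        return best_label

In use, descriptors extracted from labeled training events would be stacked into an (n, N_DIMS) array and passed to build_kd; each incoming descriptor is then classified with match_no_backtrack in a fixed number of comparisons equal to the tree depth. The single-path descent trades exact nearest-neighbor accuracy for a constant, pipeline-friendly lookup cost, which is why the low dimensionality of the PCA descriptor matters for the FPGA realization.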
Source Title: Frontiers in Neuroscience
URI: https://scholarbank.nus.edu.sg/handle/10635/198942
ISSN: 1662-4548
DOI: 10.3389/fnins.2020.00135
Appears in Collections: Staff Publications
Elements

Files in This Item:
File: 10_3389_fnins_2020_00135.pdf
Size: 1.32 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None

This item is licensed under a Creative Commons Attribution 4.0 International License.