Please use this identifier to cite or link to this item: https://doi.org/10.3389/fnins.2015.00522
DC Field                    Value
dc.title                    Neuromorphic event-based 3D pose estimation
dc.contributor.author       Valeiras, D.R.
dc.contributor.author       Orchard, G.
dc.contributor.author       Ieng, S.-H.
dc.contributor.author       Benosman, R.B.
dc.date.accessioned         2020-11-19T09:39:35Z
dc.date.available           2020-11-19T09:39:35Z
dc.date.issued              2016
dc.identifier.citation      Valeiras, D.R., Orchard, G., Ieng, S.-H., Benosman, R.B. (2016). Neuromorphic event-based 3D pose estimation. Frontiers in Neuroscience 9 (JAN): 522. ScholarBank@NUS Repository. https://doi.org/10.3389/fnins.2015.00522
dc.identifier.issn          1662-4548
dc.identifier.uri           https://scholarbank.nus.edu.sg/handle/10635/183726
dc.description.abstract     Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state-of-the-art implementations operate on images and are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30-60 Hz can rarely be processed in real time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of the asynchronous visual events output by a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows the problem to be solved incrementally, achieving real-time performance at an update rate of several hundred kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for accurate pose estimation. Experiments on real data demonstrate the performance of the algorithm, including fast-moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. © 2016 Reverter Valeiras, Orchard, Ieng and Benosman.
dc.rights                   Attribution 4.0 International
dc.rights.uri               http://creativecommons.org/licenses/by/4.0/
dc.source                   Unpaywall 20201031
dc.subject                  accuracy
dc.subject                  algorithm
dc.subject                  Article
dc.subject                  computer program
dc.subject                  event based three dimensional pose estimation algorithm
dc.subject                  image analysis
dc.subject                  image processing
dc.subject                  image quality
dc.subject                  mathematical analysis
dc.subject                  mathematical model
dc.subject                  methodology
dc.subject                  process design
dc.subject                  three dimensional imaging
dc.subject                  velocity
dc.type                     Article
dc.contributor.department   TEMASEK LABORATORIES
dc.description.doi          10.3389/fnins.2015.00522
dc.description.sourcetitle  Frontiers in Neuroscience
dc.description.volume       9
dc.description.issue        JAN
dc.description.page         522
dc.published.state          Published
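
Note: the dc.description.abstract field above outlines an event-driven approach to pose tracking in which, starting from an initial pose, every incoming event nudges the current estimate, so updates can run at event rate rather than frame rate. The sketch below is only an illustration of that general idea under simplifying assumptions (a pinhole camera, a known 3D point model, nearest-point 2D matching, and a numerical gradient step); it is not the authors' algorithm, and all names, camera parameters, and step sizes are placeholders chosen for the toy example.

import numpy as np

FOCAL = 200.0                       # assumed focal length in pixels
CENTER = np.array([64.0, 64.0])     # assumed principal point (128 x 128 sensor)
STEP_T = 1e-4                       # assumed translation step size per event
STEP_R = 1e-6                       # assumed rotation step size per event


def project(points_3d, R, t):
    """Pinhole projection of Nx3 model points under pose (R, t)."""
    cam = points_3d @ R.T + t                        # model frame -> camera frame
    return FOCAL * cam[:, :2] / cam[:, 2:3] + CENTER


def small_rotation(omega):
    """First-order rotation matrix for a small axis-angle vector omega."""
    wx, wy, wz = omega
    return np.eye(3) + np.array([[0.0, -wz,  wy],
                                 [ wz, 0.0, -wx],
                                 [-wy,  wx, 0.0]])


def update_pose(event_xy, model, R, t):
    """Nudge (R, t) so the projection of the matched model point approaches the event."""
    proj = project(model, R, t)
    idx = int(np.argmin(np.linalg.norm(proj - event_xy, axis=1)))   # 2D matching

    # Squared reprojection error of the matched point; we take a small numerical
    # gradient step in pose space. (An analytic Jacobian would be used in practice;
    # finite differences keep this sketch short and self-contained.)
    def sq_err(Rc, tc):
        d = project(model[idx:idx + 1], Rc, tc)[0] - event_xy
        return float(d @ d)

    base = sq_err(R, t)
    eps = 1e-6
    grad_t = np.zeros(3)
    grad_w = np.zeros(3)
    for k in range(3):
        dt = np.zeros(3)
        dt[k] = eps
        grad_t[k] = (sq_err(R, t + dt) - base) / eps
        dw = np.zeros(3)
        dw[k] = eps
        grad_w[k] = (sq_err(small_rotation(dw) @ R, t) - base) / eps

    t = t - STEP_T * grad_t
    R = small_rotation(-STEP_R * grad_w) @ R    # first-order update; re-orthonormalize periodically in practice
    return R, t


if __name__ == "__main__":
    # Toy example: a square of 3D points, an initial pose guess, and a stream of
    # synthetic events generated from a slightly different ("true") pose.
    model = np.array([[-1.0, -1.0, 0.0], [1.0, -1.0, 0.0],
                      [1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
    t_true = np.array([0.10, 0.05, 5.0])
    R_est, t_est = np.eye(3), np.array([0.0, 0.0, 5.0])

    rng = np.random.default_rng(0)
    true_proj = project(model, np.eye(3), t_true)
    for _ in range(2000):
        ev = true_proj[rng.integers(len(model))] + rng.normal(0.0, 0.5, 2)  # noisy event
        R_est, t_est = update_pose(ev, model, R_est, t_est)
    print("estimated translation:", np.round(t_est, 3))   # should approach t_true

In this toy setting each event touches only the single matched model point, which is what keeps the per-event cost small; the paper's actual 2D and 3D matching criteria and update rule are described in the article itself.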
Appears in Collections:     Elements; Staff Publications

Files in This Item:
File                            Description   Size      Format      Access Settings   Version
10_3389_fnins_2015_00522.pdf                  3.22 MB   Adobe PDF   OPEN              None

This item is licensed under a Creative Commons License.