Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.imavis.2007.08.015
DC Field: Value
dc.title: Real-time camera tracking for marker-less and unprepared augmented reality environments
dc.contributor.author: Xu, K.
dc.contributor.author: Chia, K.W.
dc.contributor.author: Cheok, A.D.
dc.date.accessioned: 2014-06-17T03:03:28Z
dc.date.available: 2014-06-17T03:03:28Z
dc.date.issued: 2008-05-01
dc.identifier.citation: Xu, K., Chia, K.W., Cheok, A.D. (2008-05-01). Real-time camera tracking for marker-less and unprepared augmented reality environments. Image and Vision Computing 26 (5): 673-689. ScholarBank@NUS Repository. https://doi.org/10.1016/j.imavis.2007.08.015
dc.identifier.issn: 02628856
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/57201
dc.description.abstract: For three-dimensional video-based augmented reality applications, accurate measurement of the 6DOF camera pose relative to the real world is required for proper registration of virtual objects. This paper presents an accurate and robust system for real-time 6DOF camera pose tracking based on natural features in an arbitrary scene. Crucially, the calculation is based on pre-captured reference images, which prevents a gradual accumulation of camera position error. Point features in the current image frame are first matched to two spatially separated reference images. This wide-baseline correspondence problem is overcome by constructing (1) a global homography between the current and previous image frames and (2) local affine transforms derived from known matches between the previous frame and the reference images. Chaining these two mappings constrains the search for potential matches in the reference images and allows the warping of corner intensity neighborhoods, so that a viewpoint-invariant similarity measure for assessing potential point matches can be defined. We then minimize deviations from the two-view and three-view constraints between the reference images and the current frame, as a function of the camera motion parameters, to obtain an estimate of the current camera pose relative to the reference images. This calculation is stabilized using a recursive form of temporal regularization similar in spirit to the Kalman filter. We can track camera pose reliably over hundreds of image frames and realistically integrate three-dimensional virtual objects with only slight jitter. This paper also simplifies the algorithm described above into a real-time, robust tracking system based on computing homographies. A homography exactly describes the image motion between two frames when the camera motion is a pure rotation or the camera views a planar scene. For outdoor registration applications, the system is robust under small translations as long as the majority of the scene content is distant. © 2008.
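The simplified tracker described in the abstract rests on estimating a homography between two frames from point correspondences. Below is a minimal, hedged sketch of that single step using the direct linear transform (DLT) in plain NumPy; it is not the paper's implementation (which builds on a robust matching pipeline), and all names here are illustrative. In practice a library routine such as OpenCV's `cv2.findHomography` with RANSAC would typically be used.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the DLT.
    src, dst: (N, 2) arrays of corresponding image points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale so H[2,2] == 1

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Synthetic check: recover a known homography from exact correspondences,
# as would hold for pure camera rotation or a planar scene.
H_true = np.array([[0.9, -0.1, 5.0],
                   [0.1, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0., 0.], [320., 0.], [320., 240.], [0., 240.], [160., 120.]])
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

With exact correspondences the DLT recovers the homography up to numerical precision; with real corner matches, outlier rejection and more than the minimal four points are needed, which is why the paper's pipeline constrains the match search before estimation.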
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/j.imavis.2007.08.015
dc.source: Scopus
dc.subject: Augmented reality
dc.subject: Fundamental matrix
dc.subject: Homography
dc.subject: Optical flow
dc.subject: Vision based tracking
dc.type: Article
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi: 10.1016/j.imavis.2007.08.015
dc.description.sourcetitle: Image and Vision Computing
dc.description.volume: 26
dc.description.issue: 5
dc.description.page: 673-689
dc.description.coden: IVCOD
dc.identifier.isiut: 000254686900008
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.