Title: Simultaneous camera pose and correspondence estimation with motion coherence
Authors: Lin, W.-Y., Cheong, L.-F., Tan, P., Dong, G., Liu, S.
Subject: Structure from Motion
Issue Date: Jan-2012
Citation: Lin, W.-Y., Cheong, L.-F., Tan, P., Dong, G., Liu, S. (2012-01). Simultaneous camera pose and correspondence estimation with motion coherence. International Journal of Computer Vision 96 (2): 145-161. ScholarBank@NUS Repository. https://doi.org/10.1007/s11263-011-0456-9
Abstract: Traditionally, the camera pose recovery problem has been formulated as one of estimating the optimal camera pose given a set of point correspondences. This formulation depends critically on the accuracy of the point correspondences and has difficulty with ambiguous features such as edge contours and high visual clutter. Joint estimation of camera pose and correspondence attempts to improve performance by explicitly acknowledging the chicken-and-egg nature of the pose and correspondence problem. However, such joint approaches for the two-view problem remain few, and even then they struggle with scenes containing largely edge cues and few corners, because epipolar geometry provides only a "soft" point-to-line constraint. Viewed from the perspective of point set registration, the point matching process can be regarded as the registration of points while preserving their relative positions (i.e., preserving scene coherence). By demanding that the point set be transformed coherently across views, this framework leverages higher-level perceptual information such as the shape of the contour. While it thus potentially allows registration of non-unique edge points, the registration framework in its traditional form suffers substantial point localization error and is therefore unsuitable for estimating camera pose. In this paper, we introduce an algorithm which jointly estimates camera pose and correspondence within a point set registration framework based on motion coherence, with the camera pose helping to localize the edge registration, while the "ambiguous" edge information helps to guide camera pose computation. The algorithm can compute camera pose over large displacements and, by utilizing the non-unique edge points, can recover camera pose from what were previously regarded as feature-impoverished SfM scenes. Our algorithm is also sufficiently flexible to incorporate high-dimensional feature descriptors and works well on traditional SfM scenes with adequate numbers of unique corners. © 2011 Springer Science+Business Media, LLC.
Source Title: International Journal of Computer Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/57412
ISSN: 0920-5691
DOI: 10.1007/s11263-011-0456-9
Appears in Collections: Staff Publications
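The motion-coherence registration idea the abstract builds on can be illustrated with a Coherent Point Drift-style sketch (the Myronenko and Song formulation of the Yuille-Grzywacz motion-coherence prior, not the paper's own algorithm): the displacement of a moving point set is parameterized through a Gaussian kernel over the points, so nearby points are forced to move together. The function name and all parameter values below are illustrative assumptions.

```python
import numpy as np

def coherent_registration(X, Y, beta=2.0, lam=1.0, sigma2=1.0, n_iter=30):
    """Non-rigidly register moving set Y (M x D) onto fixed set X (N x D).

    The displacement field is G @ W, where G is a Gaussian kernel over Y;
    penalizing W keeps nearby points moving coherently (the motion-coherence
    prior used in Coherent Point Drift). Parameter values are illustrative.
    """
    M, D = Y.shape
    # Gaussian kernel over the moving points is the basis of the smooth field.
    G = np.exp(-np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
               / (2.0 * beta ** 2))
    W = np.zeros((M, D))
    for _ in range(n_iter):
        T = Y + G @ W  # currently transformed points
        # E-step: soft correspondences (posterior of each X point over Y).
        d2 = np.sum((T[:, None, :] - X[None, :, :]) ** 2, axis=2)  # M x N
        P = np.exp(-d2 / (2.0 * sigma2))
        P /= P.sum(axis=0, keepdims=True) + 1e-12
        # M-step: solve (diag(P1) G + lam*sigma2 I) W = P X - diag(P1) Y.
        dP = np.diag(P.sum(axis=1))
        W = np.linalg.solve(dP @ G + lam * sigma2 * np.eye(M),
                            P @ X - dP @ Y)
        # Deterministic annealing: sharpen the soft correspondences gradually.
        sigma2 = max(0.85 * sigma2, 1e-3)
    return Y + G @ W
```

Note what this sketch omits: the paper's contribution is to couple such a coherence-regularized registration with camera pose estimation, so that epipolar geometry localizes the otherwise ambiguous edge registration while the edges in turn constrain the pose.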