Please use this identifier to cite or link to this item:
https://doi.org/10.1007/s11263-009-0226-0
Title: Linear quasi-parallax SfM using laterally-placed eyes
Authors: Hu, C.; Cheong, L.F.
Keywords: Compound eyes; Ego-motion estimation based on optical flow; Lateral camera pairs; Quasi-parallax terms
Issue Date: Aug-2009
Citation: Hu, C., Cheong, L.F. (2009-08). Linear quasi-parallax SfM using laterally-placed eyes. International Journal of Computer Vision 84 (1): 21-39. ScholarBank@NUS Repository. https://doi.org/10.1007/s11263-009-0226-0
Abstract: A large class of visual systems in the biological world has multiple eyes in simultaneous motion, yet with little or no overlap between their visual fields. These systems include the lateral eyes found in many vertebrates and the compound eyes of insects. Instead of computing feature correspondences between the eyes, which might not even be possible given the lack of overlap in the visual fields, we exploit the organizational possibility offered by the eye topography. In particular, we leverage pairs of visual rays that are parallel to each other but opposite in direction, and compute what we call the quasi-parallax for translation recovery. Besides resulting in parsimonious visual processing, the quasi-parallax term also enhances the information pick-up for the translation, as it is almost rotation-free. The rotation is subsequently recovered from a pencil of visual rays using the individual epipolar constraints of each camera. As a result of using these different and appropriate aspects of the visual rays for motion recovery, our method is numerically more effective in disambiguating the translation and rotation. In comparison with the gold-standard solution obtained by the bundle adjustment (BA) technique, our method has a better Fisher information matrix for a lateral eye pair, as well as superior experimental performance in the case of a narrow field of view. For other eye configurations, the two methods achieve comparable performance, with our linear method slightly edging out the nonlinear BA method when there are imperfections in the calibration. © 2009 Springer Science+Business Media, LLC.
Source Title: International Journal of Computer Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/56499
ISSN: 0920-5691
DOI: 10.1007/s11263-009-0226-0
Appears in Collections: Staff Publications
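
To make the abstract's central construction concrete, the following is a minimal sketch in standard differential (spherical) flow notation; it is not taken from the record itself. It assumes a calibrated spherical projection and, for simplicity, neglects the small baseline between the two eyes, which is precisely why the cancellation is only approximate and the term is called quasi-parallax. The symbols $q$, $Z$, $v$, and $\omega$ denote a unit viewing ray, its scene depth, and the translational and angular velocities of the head.

For a unit viewing ray $q$ with depth $Z$, the image motion under rigid motion $(v, \omega)$ is
\[
\dot{q} \;=\; -\frac{1}{Z}\bigl(v - (v \cdot q)\,q\bigr) \;-\; \omega \times q .
\]
For the laterally opposite ray $-q$ seen by the other eye, with depth $Z'$,
\[
\dot{q}' \;=\; -\frac{1}{Z'}\bigl(v - (v \cdot q)\,q\bigr) \;+\; \omega \times q ,
\]
so the sum of the two flows,
\[
\dot{q} + \dot{q}' \;=\; -\Bigl(\frac{1}{Z} + \frac{1}{Z'}\Bigr)\bigl(v - (v \cdot q)\,q\bigr),
\]
depends only on the translation $v$. With a nonzero baseline between the eyes the two rays do not share an exact optical centre, so the rotational terms cancel only approximately; this is the almost rotation-free quasi-parallax term the abstract describes, from which translation is recovered before the rotation is estimated from the per-camera epipolar constraints.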