Please use this identifier to cite or link to this item: https://doi.org/10.1109/ISMAR.2013.6671759
dc.title: Diminished reality using appearance and 3D geometry of internet photo collections
dc.contributor.author: Li, Z.
dc.contributor.author: Wang, Y.
dc.contributor.author: Guo, J.
dc.contributor.author: Cheong, L.-F.
dc.contributor.author: Zhou, S.Z.
dc.date.accessioned: 2014-10-07T04:43:26Z
dc.date.available: 2014-10-07T04:43:26Z
dc.date.issued: 2013
dc.identifier.citation: Li, Z., Wang, Y., Guo, J., Cheong, L.-F., Zhou, S.Z. (2013). Diminished reality using appearance and 3D geometry of internet photo collections. 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013: 11-19. ScholarBank@NUS Repository. https://doi.org/10.1109/ISMAR.2013.6671759
dc.identifier.isbn: 9781479928699
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/83632
dc.description.abstract: This paper presents a new system-level framework for Diminished Reality, leveraging for the first time both the appearance and 3D information provided by large photo collections on the Internet. Recent computer vision techniques have made it possible to automatically reconstruct 3D structure-from-motion points from large and unordered photo collections. Using these point clouds and a prior provided by GPS, reasonably accurate 6-degree-of-freedom camera poses can be obtained, thus allowing localization. Once the camera (and hence the user) is correctly localized, photos depicting scenes visible from the user's viewpoint can be used to remove unwanted objects indicated by the user in the video sequences. Existing methods based on texture synthesis introduce undesirable artifacts and video inconsistency when the background is heterogeneous; the task is even harder for these methods when the background contains complex structures. On the other hand, methods based on plane warping fail when the background has arbitrary shape. Unlike these methods, our algorithm copes with these problems by making use of internet photos, registering them in 3D space and obtaining the 3D scene structure in an offline process. We carefully design the various components of the online phase so as to meet both the speed and quality requirements of the task. Experiments on real-world data demonstrate the superiority of our system. © 2013 IEEE.
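As context for the abstract's online localization step (estimating a 6-degree-of-freedom camera pose against the pre-built structure-from-motion point cloud), the sketch below shows one plausible way to compute such a pose with OpenCV's PnP-with-RANSAC solver. This is an illustrative assumption, not the authors' implementation; the 2D-3D correspondences and the intrinsic matrix K are taken as given from an upstream matching step against the registered photo collection.

```python
# Minimal sketch (not the paper's code): 6-DOF camera localization from
# 2D-3D matches against a structure-from-motion point cloud, in the
# spirit of the online phase described in the abstract.
import numpy as np
import cv2

def localize_camera(image_points, cloud_points, K):
    """Estimate a 6-DOF pose with PnP inside RANSAC.

    image_points : (N, 2) array of pixel coordinates in the live frame
    cloud_points : (N, 3) array of matched 3D point-cloud coordinates
    K            : (3, 3) camera intrinsic matrix
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        cloud_points.astype(np.float32),
        image_points.astype(np.float32),
        K.astype(np.float32),
        None,                      # assume undistorted images
        reprojectionError=4.0,     # inlier threshold in pixels
        iterationsCount=1000,
    )
    if not ok:
        return None                # localization failed; fall back to the GPS prior
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 rotation matrix
    return R, tvec, inliers        # world-to-camera pose and inlier indices
```

With the pose in hand, photos registered in the same 3D coordinate frame could be reprojected into the user's view to synthesize the background behind the object to be removed, which is the role the abstract assigns to the internet photo collection.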
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/ISMAR.2013.6671759
dc.source: Scopus
dc.subject: H.5.1 [Information Systems]: Multimedia Information Systems-Augmented Reality
dc.subject: I.4.8 [Image Processing and Computer Vision]: Scene Analysis-Sensor Fusion, Tracking
dc.subject: I.4.9 [Computing Methodologies]: Image Processing and Computer Vision-Application
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.doi: 10.1109/ISMAR.2013.6671759
dc.description.sourcetitle: 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013
dc.description.page: 11-19
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
