|Title:||Live three-dimensional content for augmented reality|
|Authors:||Farbiz, F.; Cheok, A.D.; Wei, L.; ZhiYing, Z.; Ke, X.; Prince, S.; Billinghurst, M.; Kato, H.|
|Keywords:||Three-dimensional computer vision|
|Citation:||Farbiz, F., Cheok, A.D., Wei, L., ZhiYing, Z., Ke, X., Prince, S., Billinghurst, M., Kato, H. (2005-06). Live three-dimensional content for augmented reality. IEEE Transactions on Multimedia 7 (3) : 514-523. ScholarBank@NUS Repository. https://doi.org/10.1109/TMM.2005.846787|
|Abstract:||We describe an augmented reality system for superimposing three-dimensional (3-D) live content onto two-dimensional fiducial markers in the scene. In each frame, the Euclidean transformation between the marker and the camera is estimated. The equivalent virtual view of the live model is then generated and rendered into the scene at interactive speeds. The 3-D structure of the model is calculated using a fast shape-from-silhouette algorithm based on the outputs of 15 cameras surrounding the subject. The novel view is generated by projecting rays through each pixel of the desired image and intersecting them with the 3-D structure. Pixel color is estimated by taking a weighted sum of the colors of the projections of this 3-D point in nearby real camera images. Using this system, we capture live human models and present them via the augmented reality interface at a remote location. We can generate 384 × 288 pixel images of the models at 25 fps, with a latency of < 100 ms. The result gives the strong impression that the model is a real 3-D part of the scene. © 2005 IEEE.|
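The abstract's view-synthesis step estimates each novel-view pixel as a weighted sum of the colors seen by nearby real cameras at the corresponding 3-D surface point. A minimal sketch of that blending idea is below; the function name, camera dictionary layout, and angle-based weighting are illustrative assumptions, since the abstract does not specify the paper's exact weighting scheme.

```python
import numpy as np

def estimate_pixel_color(point_3d, cameras, virtual_dir):
    """Blend the colors that real cameras observe at a 3-D surface point,
    weighted by how closely each camera's viewing direction matches the
    virtual camera's direction (a common heuristic; hypothetical here).

    cameras: list of dicts with a 3x4 projection matrix "P", a camera
    "center" in world coordinates, and an HxWx3 "image" array.
    virtual_dir: unit viewing direction of the desired (virtual) camera.
    """
    colors, weights = [], []
    for cam in cameras:
        # Project the 3-D point into this camera's image plane.
        uvw = cam["P"] @ np.append(point_3d, 1.0)
        u, v = uvw[:2] / uvw[2]
        h, w, _ = cam["image"].shape
        if not (0 <= u < w and 0 <= v < h):
            continue  # point falls outside this camera's image
        # Weight by alignment between this camera's ray and the virtual view.
        cam_dir = point_3d - cam["center"]
        cam_dir = cam_dir / np.linalg.norm(cam_dir)
        cos_angle = float(cam_dir @ virtual_dir)
        if cos_angle <= 0:
            continue  # camera sees the point from the opposite side
        colors.append(cam["image"][int(v), int(u)].astype(float))
        weights.append(cos_angle)
    if not weights:
        return np.zeros(3)  # no camera observed the point
    weights = np.array(weights) / sum(weights)
    return (weights[:, None] * np.array(colors)).sum(axis=0)
```

Cameras whose viewing direction is closer to the virtual camera's thus contribute more to the pixel, which keeps view-dependent appearance plausible as the fiducial marker (and hence the virtual viewpoint) moves.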
|Source Title:||IEEE Transactions on Multimedia|
|Appears in Collections:||Staff Publications|