|Title:||A framelet algorithm for enhancing video stills|
|Subject:||Tight frame system|
|Citation:||Chan, R.H., Shen, Z., Xia, T. (2007-09). A framelet algorithm for enhancing video stills. Applied and Computational Harmonic Analysis 23 (2) : 153-170. ScholarBank@NUS Repository. https://doi.org/10.1016/j.acha.2006.10.003|
|Abstract:||High-resolution image reconstruction refers to the problem of constructing a high-resolution image from low-resolution images. One approach to the problem is the recent framelet method in [R. Chan, S.D. Riemenschneider, L. Shen, Z. Shen, Tight frame: An efficient way for high-resolution image reconstruction, Appl. Comput. Harmon. Anal. 17 (2004) 91-115]. There the low-resolution images are assumed to be small perturbations, in different directions, of a reference image. Video clips are made of many still frames, usually about 30 frames per second. Thus most frames can be considered small perturbations of their nearby frames. In particular, frames close to a specified reference frame can be considered small perturbations of that reference frame. Hence the setting is similar to that in high-resolution image reconstruction. In this paper, we propose a framelet algorithm, similar to that in [R. Chan, S.D. Riemenschneider, L. Shen, Z. Shen, Tight frame: An efficient way for high-resolution image reconstruction, Appl. Comput. Harmon. Anal. 17 (2004) 91-115], to enhance the resolution of any specified reference frame in a video clip. Experiments on actual video clips show that our method can provide information that is not discernible from the given clips. © 2006 Elsevier Inc. All rights reserved.|
|Source Title:||Applied and Computational Harmonic Analysis|
|Appears in Collections:||Staff Publications|