Please use this identifier to cite or link to this item:
https://doi.org/10.2197/ipsjtcva.1.95
DC Field | Value | |
---|---|---|
dc.title | Detail recovery for single-image defocus blur | |
dc.contributor.author | Tai, Y.-W. | |
dc.contributor.author | Tang, H. | |
dc.contributor.author | Brown, M.S. | |
dc.contributor.author | Lin, S. | |
dc.date.accessioned | 2013-07-04T08:07:32Z | |
dc.date.available | 2013-07-04T08:07:32Z | |
dc.date.issued | 2009 | |
dc.identifier.citation | Tai, Y.-W., Tang, H., Brown, M.S., Lin, S. (2009). Detail recovery for single-image defocus blur. IPSJ Transactions on Computer Vision and Applications 1 : 95-104. ScholarBank@NUS Repository. https://doi.org/10.2197/ipsjtcva.1.95 | |
dc.identifier.issn | 1882-6695 | |
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/40577 | |
dc.description.abstract | We presented an invited talk at the MIRU-IUW workshop on correcting photometric distortions in photographs. In this paper, we describe our work on addressing one form of this distortion, namely defocus blur. Defocus blur can lead to the loss of fine-scale scene detail, and we address the problem of recovering it. Our approach targets a single-image solution that capitalizes on redundant scene information by restoring image patches that have greater defocus blur using similar, more focused patches as exemplars. The major challenge in this approach is to produce a spatially coherent and natural result given the rather limited exemplar data present in a single image. To address this problem, we introduce a novel correction algorithm that maximizes the use of available image information and employs additional prior constraints. Unique to our approach is an exemplar-based deblurring strategy that simultaneously considers candidate patches from both sharper image regions as well as deconvolved patches from blurred regions. This not only allows more of the image to contribute to the recovery process but inherently combines synthesis and deconvolution into a single procedure. In addition, we use a top-down strategy where the pool of in-focus exemplars is progressively expanded as increasing levels of defocus are corrected. After detail recovery, regularization based on sparsity and contour continuity constraints is applied to produce a more plausible and natural result. Our method compares favorably to related techniques such as defocus inpainting and deconvolution with constraints from natural image statistics alone. © 2009 Information Processing Society of Japan. | |
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.2197/ipsjtcva.1.95 | |
dc.source | Scopus | |
dc.type | Conference Paper | |
dc.contributor.department | COMPUTER SCIENCE | |
dc.description.doi | 10.2197/ipsjtcva.1.95 | |
dc.description.sourcetitle | IPSJ Transactions on Computer Vision and Applications | |
dc.description.volume | 1 | |
dc.description.page | 95-104 | |
dc.identifier.isiut | NOT_IN_WOS | |
Appears in Collections: | Staff Publications |
Files in This Item:
There are no files associated with this item.