Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/191102
DC Field                        Value
dc.title                        DEEP LEARNING FOR DEFOCUS DEBLURRING
dc.contributor.author           YANG ZIYI
dc.date.accessioned             2021-05-09T18:00:32Z
dc.date.available               2021-05-09T18:00:32Z
dc.date.issued                  2020-12-26
dc.identifier.citation          YANG ZIYI (2020-12-26). DEEP LEARNING FOR DEFOCUS DEBLURRING. ScholarBank@NUS Repository.
dc.identifier.uri               https://scholarbank.nus.edu.sg/handle/10635/191102
dc.description.abstract         In this dissertation, several techniques are developed to facilitate the application of deep learning to the recovery of defocused images. The first is a data generation pipeline that supports network training without collecting any real paired training samples: an efficient approach is proposed for synthesizing blurred/sharp image pairs whose statistical characteristics are sufficient to effectively train a non-uniform deblurring model, without attempting to simulate real-world images. The second is the introduction of an unsharp mask filtering technique, which enables better restoration in regions containing dense image edges (the most difficult regions to recover). The last is the introduction of an attention mechanism for handling spatially varying blur, which allows the network to adaptively process image regions with different blurring effects.
dc.language.iso                 en
dc.subject                      deep learning, defocus deblurring, non-uniform blind deblurring, data synthesis, unsharp mask filtering, attention mechanism
dc.type                         Thesis
dc.contributor.department       MATHEMATICS
dc.contributor.supervisor       Ji Hui
dc.description.degree           Ph.D
dc.description.degreeconferred  DOCTOR OF PHILOSOPHY (FOS)
dc.identifier.orcid             0000-0002-3515-3299
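
Note: the abstract mentions two classical image-processing building blocks alongside the learning components, namely synthesizing blurred/sharp training pairs and unsharp mask filtering. The snippet below is a minimal sketch of those two ideas only, not the thesis's pipeline; the uniform disk point-spread function, the blur radius, and every function name here are illustrative assumptions.

# Minimal sketch (illustrative only): a uniform disk PSF stands in for defocus
# blur, and unsharp mask filtering boosts the high-frequency (edge) content.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import gaussian_filter

def disk_kernel(radius):
    """Normalized disk point-spread function approximating defocus blur."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float64)
    return kernel / kernel.sum()

def synthesize_pair(sharp, radius=4):
    """Return a (blurred, sharp) training pair from a sharp grayscale image in [0, 1]."""
    blurred = convolve2d(sharp, disk_kernel(radius), mode="same", boundary="symm")
    return blurred, sharp

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Classical unsharp mask filtering: add back the high-pass residual."""
    low_pass = gaussian_filter(image, sigma)
    return image + amount * (image - low_pass)

if __name__ == "__main__":
    sharp = np.random.rand(64, 64)            # stand-in for a sharp training image
    blurred, target = synthesize_pair(sharp)  # one synthetic blurred/sharp pair
    emphasized = unsharp_mask(blurred)        # edge-emphasized view of the blurred input

A uniform disk is only a first-order, spatially invariant blur model; the thesis targets non-uniform (spatially varying) defocus, which this sketch does not attempt to reproduce.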
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File        Description  Size      Format     Access Settings  Version
YangZY.pdf               14.22 MB  Adobe PDF  OPEN             None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.