Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/231549
Title: TOWARDS ADVERSARIAL ROBUSTNESS OF DEEP VISION ALGORITHMS
Authors: YAN HANSHU
Keywords: Deep Learning, Machine Learning, Adversarial Robustness, Computer Vision
Issue Date: 11-May-2022
Citation: YAN HANSHU (2022-05-11). TOWARDS ADVERSARIAL ROBUSTNESS OF DEEP VISION ALGORITHMS. ScholarBank@NUS Repository.
Abstract: Deep learning methods have achieved great success in solving computer vision tasks, and they have been widely utilized in artificially intelligent systems for image processing, analysis, and understanding. However, deep neural networks have been shown to be vulnerable to adversarial perturbations in input data. The security issues of deep neural networks have thus come to the fore, and it is imperative to comprehensively study the adversarial robustness of deep vision algorithms. This thesis focuses on the adversarial robustness of deep image classification models and deep image denoisers. We systematically study the robustness of deep vision algorithms from three perspectives: 1) robustness evaluation (we propose ObsAtk to evaluate the robustness of denoisers), 2) robustness improvement (HAT, TisODE, and CIFS are developed to robustify vision models), and 3) the connection between adversarial robustness and generalization capability to new domains (we find that adversarially robust denoisers can deal with unseen types of real-world noise).
URI: https://scholarbank.nus.edu.sg/handle/10635/231549
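The abstract's central premise — that a small, carefully chosen perturbation of the input can flip a model's prediction — can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard attack not specific to this thesis. The toy linear classifier, weights, and epsilon below are hypothetical values chosen only for illustration:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step: shift x by eps along the sign of the given gradient,
    so every coordinate moves by at most eps (an L-infinity bound)."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x; predict class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input; w @ x = 0.2 > 0, so class 1

# The score's gradient w.r.t. x is w. To flip a positive prediction,
# perturb against the score, i.e. along -w.
x_adv = fgsm_perturb(x, -w, eps=0.15)

print(w @ x)      # clean score: positive
print(w @ x_adv)  # adversarial score: negative, prediction flipped
```

Even though no coordinate of `x_adv` differs from `x` by more than 0.15, the predicted class changes — the same fragility the thesis studies in deep classifiers and denoisers, where the gradient is obtained by backpropagation instead of read off a linear model.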
Appears in Collections: Ph.D Theses (Open)
Files in This Item:
| File | Description | Size | Format | Access Settings | Version |
| --- | --- | --- | --- | --- | --- |
| _thesis_.pdf | | 11.7 MB | Adobe PDF | OPEN | None |