Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/188048
Title: ROBUSTNESS AND UNCERTAINTY ESTIMATION FOR DEEP NEURAL NETWORKS
Authors: JAY NANDY
Keywords: Adversarial Robustness, Dirichlet distribution, Out-of-Distribution, Predictive Uncertainty, Deep Learning, Machine Learning
Issue Date: 18-Aug-2020
Citation: JAY NANDY (2020-08-18). ROBUSTNESS AND UNCERTAINTY ESTIMATION FOR DEEP NEURAL NETWORKS. ScholarBank@NUS Repository.
Abstract: Deep neural network (DNN) models are vulnerable to adversarial attacks and produce over-confident predictions even for out-of-distribution (OOD) examples. This thesis focuses on improving the adversarial robustness and the predictive uncertainty estimation of existing DNN-based classification models. We first propose a new generative framework, called RBF-Net, and use it to develop a robust image classification model, called RBF-CNN, that defends against adversarial attacks. We show that our proposed technique mitigates minor perturbations, improving robustness against different perturbation types. Next, we propose a novel framework that maximizes the representation gap between in-domain and OOD examples, allowing a DNN model to robustly distinguish different sources of predictive uncertainty. Our experimental results demonstrate that our proposed technique consistently improves OOD detection performance.
URI: https://scholarbank.nus.edu.sg/handle/10635/188048
Appears in Collections: Ph.D Theses (Open)
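Note: the abstract mentions an RBF-based layer (RBF-Net/RBF-CNN) that mitigates minor input perturbations. The sketch below only illustrates the general idea of absorbing small perturbations by projecting inputs onto RBF centres; it is not the thesis's implementation, and all names (centres, gamma, reconstruct_patches) and values are hypothetical.

```python
# Illustrative sketch only: a generic Gaussian-RBF reconstruction step.
# Small perturbations barely change the RBF similarities, so reconstructions
# of clean and perturbed patches stay close.
import numpy as np

def rbf_similarities(patches, centres, gamma=1.0):
    """Gaussian RBF similarity of each flattened patch (n, d) to each centre (k, d)."""
    d2 = ((patches[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    return np.exp(-gamma * d2)

def reconstruct_patches(patches, centres, gamma=1.0):
    """Replace each patch by a similarity-weighted average of the centres."""
    sims = rbf_similarities(patches, centres, gamma)
    weights = sims / sims.sum(axis=1, keepdims=True)  # normalise per patch
    return weights @ centres                          # (n, d)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centres = rng.normal(size=(16, 9))            # 16 hypothetical 3x3 centres
    clean = rng.normal(size=(4, 9))
    perturbed = clean + 0.01 * rng.normal(size=clean.shape)
    # Reconstructions of clean and slightly perturbed patches remain close.
    print(np.abs(reconstruct_patches(clean, centres)
                 - reconstruct_patches(perturbed, centres)).max())
```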
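Note: the abstract and keywords also refer to Dirichlet distributions for predictive uncertainty and OOD detection. The sketch below shows the standard Dirichlet uncertainty decomposition (total, data, and distributional uncertainty) commonly used in Dirichlet-based models; it is illustrative of the general approach, not the thesis's specific framework, and the concentration values are made up.

```python
# Illustrative sketch only: standard uncertainty measures for a Dirichlet
# over class probabilities, which allow separating data uncertainty from
# distributional uncertainty (the latter is what flags OOD inputs).
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainties(alpha):
    """Uncertainty measures for a Dirichlet with concentration parameters alpha (K,)."""
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()                       # precision (total evidence)
    p_mean = alpha / alpha0                    # expected categorical distribution
    total = -(p_mean * np.log(p_mean)).sum()   # H[E[p]]: total uncertainty
    # E[H[p]]: expected data uncertainty under the Dirichlet
    expected_data = -(p_mean * (digamma(alpha + 1) - digamma(alpha0 + 1))).sum()
    mutual_info = total - expected_data        # distributional uncertainty
    return {"precision": alpha0, "total": total,
            "data": expected_data, "distributional": mutual_info}

if __name__ == "__main__":
    # Confident in-domain prediction: sharp, high-precision Dirichlet.
    print(dirichlet_uncertainties([50.0, 1.0, 1.0]))
    # Flat, low-precision Dirichlet: higher distributional uncertainty,
    # the kind of signature used to detect OOD examples.
    print(dirichlet_uncertainties([1.0, 1.0, 1.0]))
```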
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
thesis.pdf | | 7.77 MB | Adobe PDF | OPEN | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.