Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/218215
Title: EXPLAINING AND IMPROVING DEEP NEURAL NETWORKS VIA CONCEPT-BASED EXPLANATIONS
Authors: SANDAREKA KUMUDU KUMARI WICKRAMANAYAKE
ORCID iD:   orcid.org/0000-0003-0314-5988
Keywords: Interpretable Artificial Intelligence, Deep Neural Networks, Convolutional Neural Networks, Interpretability, Concept-based Explanations
Issue Date: 9-Nov-2021
Citation: SANDAREKA KUMUDU KUMARI WICKRAMANAYAKE (2021-11-09). EXPLAINING AND IMPROVING DEEP NEURAL NETWORKS VIA CONCEPT-BASED EXPLANATIONS. ScholarBank@NUS Repository.
Abstract: This thesis explores using concept-based explanations to explain and improve Deep Neural Networks (DNNs) in computer vision, especially Convolutional Neural Networks (CNNs). Concept-based explanations are easily understandable to end-users. However, we argue that, to secure public trust, the explanations should also be descriptive and faithfully explain why a model makes its decisions. Hence, we propose two approaches to generate such explanations. The first is a post-hoc linguistic explanation framework that explains a model's decision in terms of the features that are truly responsible for it. The second is an inherently interpretable CNN that learns features corresponding to concepts consistent with human perception, thereby explaining its decisions in word phrases. Finally, we investigate using concept-based explanations to automatically augment the training dataset with new images that cover under-represented regions of the dataset, improving the prediction accuracy of the underlying model.
URI: https://scholarbank.nus.edu.sg/handle/10635/218215
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: Sandareka.pdf (17.48 MB, Adobe PDF, Open Access)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.