Title: LAYERED EXPLANATIONS - INTERPRETING NEURAL NETWORKS WITH NUMERICAL INFLUENCE MEASURES
Authors: HO XUAN VINH
Keywords: algorithmic transparency, interpretable explanation, model-specific explanation, numerical influence measures, neural networks, deep learning
Issue Date: 10-Jan-2019
Citation: HO XUAN VINH (2019-01-10). LAYERED EXPLANATIONS - INTERPRETING NEURAL NETWORKS WITH NUMERICAL INFLUENCE MEASURES. ScholarBank@NUS Repository.
Abstract: Deep learning is currently receiving considerable attention from the machine learning community due to its predictive power. However, its lack of interpretability raises numerous concerns. As neural networks are deployed in high-stakes domains, stakeholders expect to receive acceptable, human-interpretable explanations. We explain the decisions of neural networks using layered explanations: we use influence measures to compute a numerical value for each layer. Using these layerwise influence measures, we identify the layers that carry the most explanatory power and use those layers to generate explanations. We test our methodology on datasets and discuss the merits and issues of our approach.
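The abstract does not name the specific influence measure used in the thesis; as a purely illustrative assumption, the minimal sketch below scores each layer by the summed magnitude of activation times gradient for one target output, which is one common way to attach a single numerical influence value to each layer and rank layers by explanatory power.

    # Minimal sketch of layerwise influence scoring (assumed measure:
    # |activation * gradient| summed per layer; not the thesis's definition).
    import torch
    import torch.nn as nn

    def layerwise_influence(model: nn.Sequential, x: torch.Tensor, target: int):
        """Return one score per layer: summed |activation * grad of target logit|."""
        activations = []
        h = x
        for layer in model:
            h = layer(h)
            h.retain_grad()              # keep gradients of this intermediate activation
            activations.append(h)

        model.zero_grad()
        activations[-1][..., target].sum().backward()   # backpropagate the chosen output

        scores = []
        for a in activations:
            grad = a.grad if a.grad is not None else torch.zeros_like(a)
            scores.append((a.detach() * grad).abs().sum().item())
        return scores

    if __name__ == "__main__":
        # Hypothetical toy network and input, for illustration only.
        net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
        x = torch.randn(1, 8)
        print(layerwise_influence(net, x, target=0))
        # The highest-scoring layers would be the candidates used to generate explanations.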
URI: https://scholarbank.nus.edu.sg/handle/10635/153716
Appears in Collections: Master's Theses (Open)

Files in This Item:
File: HoXV.pdf
Size: 4.71 MB
Format: Adobe PDF
Access Settings: OPEN

