Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/201769
DC Field | Value
dc.title | USER-CENTRIC EXPLANATION OF MACHINE LEARNING MODEL FOR HUMAN-AI COLLABORATION
dc.contributor.author | WANG DANDING
dc.date.accessioned | 2021-10-02T18:00:18Z
dc.date.available | 2021-10-02T18:00:18Z
dc.date.issued | 2021-08-05
dc.identifier.citation | WANG DANDING (2021-08-05). USER-CENTRIC EXPLANATION OF MACHINE LEARNING MODEL FOR HUMAN-AI COLLABORATION. ScholarBank@NUS Repository.
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/201769
dc.description.abstract | Artificial Intelligence (AI) has been deployed in many aspects of human life. While most AI models remain black boxes, there is an increasing demand to explain models to human users, both to moderate users' trust and to support decision-making with the AI, especially for critical decisions. However, humans and machines reason and explain in different ways, and many explainable AI (XAI) techniques are not designed from the human user's point of view. This thesis focuses on user-centric XAI and advocates that machine learning explanations should be designed to fit users' reasoning and improve their decision-making. It tightens the connections between human reasoning and explaining AI. First, we investigated the theoretical underpinnings of the connection between human decision-making and AI explanations by asking what types of explanations can fit users' reasoning models and mitigate users' reasoning errors. Second, we investigated how AI explanations should be built for different human reasoning preferences, in a challenging case where people have different strategies for handling uncertainty in decision-making. Lastly, since users and models reason and explain differently, we studied how to bring model explanations closer to the human reasoning model by explaining deep learning models with concepts and causality and by combining human priors on the reasoning structure. In summary, we strengthened the theoretical connection between human reasoning and AI explanation, provided technical solutions to explain and improve AI models inspired by human reasoning, and empirically evaluated our methods for better human decisions.
dc.language.iso | en
dc.subject | explainable AI, human-computer interaction, trust, decision making, uncertainty, causality
dc.type | Thesis
dc.contributor.department | COMPUTER SCIENCE
dc.contributor.supervisor | Youliang Brian Lim
dc.description.degree | Ph.D
dc.description.degreeconferred | DOCTOR OF PHILOSOPHY (SOC)
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File | Description | Size | Format | Access Settings | Version
Thesis final submitted.pdf | | 10.41 MB | Adobe PDF | OPEN | None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.