Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/237673
Title: WELFARIST MORAL GROUNDING FOR TRANSPARENT ARTIFICIAL INTELLIGENCE
Authors: DEVESH NARAYANAN
ORCID iD: orcid.org/0000-0003-4201-1421
Keywords: AI ethics, Welfare-Consequentialism, Transparency, Explainability, Moral grounding, Applied ethics
Issue Date: 8-Nov-2022
Citation: DEVESH NARAYANAN (2022-11-08). WELFARIST MORAL GROUNDING FOR TRANSPARENT ARTIFICIAL INTELLIGENCE. ScholarBank@NUS Repository.
Abstract: Popular calls to make AI systems ‘transparent’ appeal to a wide range of goals that transparency ostensibly helps us pursue. Yet, as I’ll discuss, there remains considerable ambiguity about why these goals matter morally, and whether transparency is sufficient (or even necessary) for achieving them. Sceptics, moreover, have raised important challenges against the principle, which ought to be better incorporated into our understanding of when and how to pursue transparency. This thesis argues that a satisfactory treatment of these underlying moral considerations may be found by grounding calls for AI transparency in the demands of welfare-consequentialism. By shifting the focus away from the mere technical act of making an AI system transparent towards the benefits and harms that transparency might bring about, welfarism helps us understand transparency as a broader moral and political ideal about how we should relate to powerful technologies that make decisions about us.
URI: https://scholarbank.nus.edu.sg/handle/10635/237673
Appears in Collections: Master's Theses (Open)

Files in This Item:
NarayananD.pdf, 777.95 kB, Adobe PDF, Access: Open, Version: None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.