Purcell, Zoe and Bonnefon, Jean-François (2023) Humans feel too special for machines to score their morals. PNAS Nexus, vol. 2 (n° 6).
This is the latest version of this item.
Abstract
Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organizations to form judgements of others at scale. However, it also poses significant ethical challenges and is, consequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand why people are attracted to, or resist, AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and for this reason resist the introduction of moral scoring by AI.
| Item Type: | Article |
|---|---|
| Language: | English |
| Date: | June 2023 |
| Refereed: | Yes |
| Place of Publication: | Oxford |
| Subjects: | B- ECONOMIE ET FINANCE |
| Divisions: | TSE-R (Toulouse) |
| Site: | UT1 |
| Date Deposited: | 14 Jun 2023 09:47 |
| Last Modified: | 16 Apr 2024 09:35 |
| OAI Identifier: | oai:tse-fr.eu:128114 |
| URI: | https://publications.ut-capitole.fr/id/eprint/47863 |
Available Versions of this Item
- Humans Feel Too Special for Machines to Score Their Morals. (deposited 29 Nov 2022 13:08)
- Humans feel too special for machines to score their morals. (deposited 14 Jun 2023 09:47) [Currently Displayed]