Makovi, Kinga, Bonnefon, Jean-François, Oudah, Mayada, Sargsyan, Anahit and Rahwan, Talal (2025) Rewards and Punishments Help Humans Overcome Biases Against Cooperation Partners Assumed to be Machines. iScience.

Full text not available from this repository.
Identification Number: 10.1016/j.isci.2025.112833

Abstract

High levels of human-machine cooperation are required to combine the strengths of human and artificial intelligence. Here we investigate strategies to overcome the machine penalty, where people are less cooperative with partners they assume to be machines than with partners they assume to be humans. Using a large-scale iterative public goods game with nearly 2000 participants, we find that peer rewards or peer punishments can each promote cooperation with partners assumed to be machines, but do not overcome the machine penalty. Their combination, however, eliminates the machine penalty, because it is uniquely effective for partners assumed to be machines, and inefficient for partners assumed to be humans. These findings provide a nuanced road map for designing a cooperative environment for humans and machines, depending on the exact goals of the designer.
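For readers unfamiliar with the setup, a standard linear public goods game with peer rewards and punishments has a per-round payoff of the following textbook form; the endowment, multiplier, and reward/punishment costs used in the study are not specified in this record, so the symbols below are generic placeholders rather than the authors' actual parameters.

\[
\pi_i \;=\; e - c_i + \frac{r}{N}\sum_{j=1}^{N} c_j
\;+\; \sum_{j \neq i}\bigl(b\,\rho_{ji} - \kappa_\rho\,\rho_{ij}\bigr)
\;-\; \sum_{j \neq i}\bigl(\beta\,\sigma_{ji} + \kappa_\sigma\,\sigma_{ij}\bigr)
\]

Here \(e\) is the endowment, \(c_i\) player \(i\)'s contribution, \(r > 1\) the multiplier shared among the \(N\) group members, \(\rho_{ji}\) and \(\sigma_{ji}\) the reward and punishment points assigned by \(j\) to \(i\), \(b\) and \(\beta\) their impact on the recipient, and \(\kappa_\rho\), \(\kappa_\sigma\) their cost to the sender. The reward-only, punishment-only, and combined conditions in the study correspond, in this sketch, to switching the respective terms on or off.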

Item Type: Article
Language: English
Date: 6 June 2025
Refereed: Yes
Place of Publication: Cambridge
Subjects: C- Management
Divisions: TSM Research (Toulouse)
Site: UT1
Date Deposited: 13 Jun 2025 06:48
Last Modified: 13 Jun 2025 06:49
OAI Identifier: oai:tsm.fr:2912
URI: https://publications.ut-capitole.fr/id/eprint/50927