Moral Blameworthiness and Trustworthiness: The Role of Accounts and Apologies in Perceptions of Human and Machine Agents
- Date Issued: 2017
- Abstract/Description: Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago, and the same will be said for machines that exist 30 years from now. The rise of intelligence in machines has resulted in humans entrusting them with ever-increasing responsibility. With this has arisen the question of whether machines should be given equal responsibility to humans, or if humans will ever perceive machines as being accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness toward machines compare to such assignment to humans who harm others? I answer these questions by exploring differences in moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether the knowledge and type of reason, as well as apology, for the harmful incident affect perceptions of the parties involved. In order to fill the gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research study presented herein.
- Name(s): Stowers, Kimberly (Author); Hancock, Peter (Committee Chair); Jentsch, Florian (Committee Member); Mouloua, Mustapha (Committee Member); Chen, Jessie (Committee Member); Barber, Daniel (Committee Member); University of Central Florida (Degree Grantor)
- Type of Resource: text
- Publisher: University of Central Florida
- Language(s): English
- Identifier: CFE0007134 (IID), ucf:52311 (fedora)
- Note(s): 2017-08-01; Ph.D.; Sciences, Psychology; Doctoral. This record was generated from author-submitted information.
- Subject(s): trust -- morality -- human-machine interaction -- apology -- account
- Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0007134
- Restrictions on Access: campus 2019-02-15
- Host Institution: UCF