Current Search: Human-Machine Interaction
- Title
- MODERATORS OF TRUST AND RELIANCE ACROSS MULTIPLE DECISION AIDS.
- Creator
-
Ross, Jennifer, Szalma, James, University of Central Florida
- Abstract / Description
-
The present work examines whether users' trust in and reliance on automation were affected by manipulations of users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. When users rely heavily on automation, they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator notices that an imperfect system has failed, they may subsequently lose trust in it, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment, poor reliability in one aid would affect trust in and reliance on a more reliable companion aid, but that this relationship would depend on the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance on and trust in automated teammates according to each teammate's actual reliability. However, as hypothesized, there was a biasing effect on trust and reliance among mixed-reliability aids: when operators worked with two agents of mixed reliability, their perception of how reliable an aid was, and the degree to which they relied on it, was affected by the reliability of the companion aid. Additionally, the magnitude and direction of this bias were contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent an operator believed they were working with significantly affected their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate that had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or equal abilities and to errors made by human versus robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems. (A minimal illustrative sketch of this kind of trust calibration appears after this record.)
- Date Issued
- 2008
- Identifier
- CFE0002077, ucf:47579
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002077
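The trust-calibration and biasing effects described in the abstract above lend themselves to a toy simulation. The following is a minimal sketch, not taken from the dissertation: the aid names, the running-average trust update, and the contrast-bias parameter are all invented for illustration.

    # Illustrative sketch (not from the dissertation): a toy model of how an
    # operator's trust in each of two decision aids might be updated from
    # observed successes and failures, with a hypothetical "contrast bias"
    # term that lets a companion aid's reliability color judgments of the other.
    import random

    def simulate(trials=200, p_reliable=0.9, p_poor=0.6, bias=0.2, seed=1):
        rng = random.Random(seed)
        trust = {"aid_A": 0.5, "aid_B": 0.5}   # initial, uncalibrated trust
        p_correct = {"aid_A": p_reliable, "aid_B": p_poor}
        alpha = 0.05                            # learning rate for trust updates
        for _ in range(trials):
            for aid in trust:
                outcome = 1.0 if rng.random() < p_correct[aid] else 0.0
                trust[aid] += alpha * (outcome - trust[aid])  # move toward outcome
        # hypothetical biasing step: perception of each aid is pulled toward the
        # companion aid's trust level (the mixed-reliability effect, assumed linear)
        perceived = {
            "aid_A": (1 - bias) * trust["aid_A"] + bias * trust["aid_B"],
            "aid_B": (1 - bias) * trust["aid_B"] + bias * trust["aid_A"],
        }
        return trust, perceived

    if __name__ == "__main__":
        trust, perceived = simulate()
        for aid in ("aid_A", "aid_B"):
            print(f"{aid}: calibrated trust {trust[aid]:.2f}, "
                  f"biased perception {perceived[aid]:.2f}")

Run as-is, the reliable aid's perceived trustworthiness is dragged down by its poor companion, and vice versa, mirroring the biasing effect the study reports without claiming anything about its actual magnitude or functional form.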
- Title
- INDIVIDUAL PREFERENCES IN THE USE OF AUTOMATION.
- Creator
-
Thropp, Jennifer, Hancock, Peter, University of Central Florida
- Abstract / Description
-
As system automation increases and evolves, the intervention of the supervising operator becomes ever less frequent but ever more crucial. The adaptive automation approach is one in which control of tasks dynamically shifts between humans and machines; it is an alternative to traditional static allocation, in which task control is assigned during system design and subsequently remains unchanged during operations. It is proposed that adaptive allocation should adjust to individual operators' characteristics in order to improve performance, avoid errors, and enhance safety. The roles of three individual-difference variables relevant to adaptive automation are described: attentional control, desirability of control, and trait anxiety. It was hypothesized that these traits contribute to performance on target-detection tasks at different levels of difficulty, as well as to preferences for different levels of automation. Operators' level of attentional control was inversely related to their automation-level preferences, although few objective performance changes were observed. The effects of sensory modality were also assessed; auditory signal detection was superior to visual signal detection. As a result, the following implications are proposed: operators generally preferred either low or high automation while neglecting the intermediate level; preferences and needs for automation may not be congruent; and there may be a conservative response bias associated with high attentional control, notably in the auditory modality. (A toy sketch of trait-sensitive task allocation appears after this record.)
- Date Issued
- 2006
- Identifier
- CFE0001096, ucf:46771
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001096
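The adaptive-allocation idea in the abstract above can be made concrete with a small sketch. Everything here is assumed for illustration, not drawn from the study: the trait fields, weights, and thresholds merely encode the abstract's reported findings (inverse relation to attentional control; preference for the extremes over the intermediate level) in executable form.

    # Illustrative sketch (not from the dissertation): a toy adaptive-allocation
    # rule in which the automation level offered to an operator adjusts to an
    # invented individual-difference profile.
    from dataclasses import dataclass

    @dataclass
    class OperatorProfile:
        attentional_control: float  # 0..1; higher = better attention management
        trait_anxiety: float        # 0..1; higher = more anxious

    def preferred_level(profile: OperatorProfile, task_difficulty: float) -> str:
        """Pick an automation level for this operator and task.

        Encodes two findings reported in the abstract: preferred automation is
        inversely related to attentional control, and operators favored the low
        or high extremes over the intermediate level.
        """
        # inverse relation: strong attentional control -> less automation demanded
        demand = (1.0 - profile.attentional_control) * 0.7 + task_difficulty * 0.3
        if profile.trait_anxiety > 0.8:
            demand = max(demand, 0.7)  # assumed: highly anxious operators offload more
        # polarize toward the extremes, skipping the neglected intermediate level
        return "high" if demand >= 0.5 else "low"

    if __name__ == "__main__":
        focused = OperatorProfile(attentional_control=0.9, trait_anxiety=0.2)
        distracted = OperatorProfile(attentional_control=0.2, trait_anxiety=0.6)
        for name, op in (("focused", focused), ("distracted", distracted)):
            print(name, "->", preferred_level(op, task_difficulty=0.5))

On the same mid-difficulty task, the high-attentional-control operator is offered low automation and the low-attentional-control operator high automation; a fielded system would, of course, need empirically validated weights rather than these placeholders.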
- Title
- Moral Blameworthiness and Trustworthiness: The Role of Accounts and Apologies in Perceptions of Human and Machine Agents.
- Creator
-
Stowers, Kimberly, Hancock, Peter, Jentsch, Florian, Mouloua, Mustapha, Chen, Jessie, Barber, Daniel, University of Central Florida
- Abstract / Description
-
Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago, and the same will be said for machines that exist 30 years from now. The rise of intelligence in machines has resulted in humans entrusting them with ever-increasing responsibility. With this has arisen the question of whether machines should be given equal responsibility to humans, or whether humans will ever perceive machines as being accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness toward machines compare to such assignment toward humans who harm others? I answer these questions by exploring differences in moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether the knowledge and type of reason, as well as apology, for the harmful incident affect perceptions of the parties involved. In order to fill gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research study presented herein.
- Date Issued
- 2017
- Identifier
- CFE0007134, ucf:52311
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007134
- Title
- THE KIOSK CULTURE: RECONCILING THE PERFORMANCE SUPPORT PARADOX IN THE POSTMODERN AGE OF MACHINES.
- Creator
-
Cavanagh, Thomas, Kitalong, Karla, University of Central Florida
- Abstract / Description
-
Do you remember the first time you used an Automated Teller Machine (ATM)? Or a pay-at-the-pump gas station? Or an airline e-ticket kiosk? How did you know what to do? Although you never received any formal instruction in how to interact with the self-service technology, you were likely able to accomplish your task (e.g., withdrawing or depositing money) as successfully as an experienced user. However, not so long ago, to accomplish that same task you needed the direct mediation of a service professional who had been trained to use the required complex technology. What has changed? In short, the technology is now able to compensate for the average consumer's lack of experience with the transactional system. The technology itself bridges the performance gap, allowing a novice to accomplish the same task as an experienced professional. This shift to a self-service paradigm is completely changing the dynamics of the consumer's relationship with the capitalist enterprise, resulting in what is rapidly becoming the default consumer interface of the postmodern era. The recognition that the entire performance support apparatus now revolves around the end user/consumer rather than the employee represents a tectonic shift in the workforce training industry. What emerges is a homogenized consumer culture enabled by self-service technologies: a kiosk culture. No longer is the ability to interact with complex technology confined to a privileged workforce minority with access to expensive and time-consuming training. The growth of the kiosk culture is driven equally by business financial pressures, consumer demand for more efficient transactions, and the improved sophistication of compensatory technology that allows a novice to perform a task with the same competence as an expert. "The Kiosk Culture" examines all aspects of self-service technology and its ascendancy. Beyond the milieu of business, the kiosk culture is also infiltrating all corners of society, including medicine, athletics, and the arts, forcing us to re-examine our definitions of knowledge, skills, performance, and even humanity. The current ubiquity of self-service technology has already impacted our society and will continue to do so as we ride the rising tide of the kiosk culture.
- Date Issued
- 2006
- Identifier
- CFE0001348, ucf:46989
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001348