Current Search: Human-Robot Interaction -- Transparency -- Human-Agent Teaming
- Title
- Transparency and Communication Patterns in Human-Robot Teaming.
- Creator
-
Lakhmani, Shan, Barber, Daniel, Jentsch, Florian, Reinerman, Lauren, Guznov, Svyatoslav, University of Central Florida
- Abstract / Description
-
In anticipation of the complex, dynamic battlefields of the future, military operations increasingly demand robots with greater autonomous capabilities to support soldiers. Effective communication is necessary to establish the common ground on which human-robot teamwork can be built across the continuum of military operations. However, the types and formats of communication for mixed-initiative collaboration are still not fully understood. This study explores two approaches to communication in human-robot interaction, transparency and communication pattern, and examines how manipulating these elements in a robot teammate affects its human counterpart in a collaborative exercise. Participants were paired with a computer-simulated robot to perform a cordon-and-search-like task. A human-robot interface provided different transparency types (information about the robot's decision-making process alone, or about both the robot's decision-making process and its prediction of the human teammate's decision-making process) and different communication patterns (either conveying information to the participant, or both conveying information to and soliciting information from the participant). This experiment revealed that participants found robots that both conveyed and solicited information to be more animate, likeable, and intelligent than their less interactive counterparts, but working with those robots led to more misses in a target classification task. Furthermore, the act of responding to the robot led to a reduction in the number of correct identifications made, but only when the robot was solely providing information about its own decision-making process. Findings from this effort inform the design of next-generation visual displays supporting human-robot teaming.
- Date Issued
- 2019
- Identifier
- CFE0007481, ucf:52674
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007481
- Title
- Transparency in human-agent teaming and its effect on complacent behavior.
- Creator
-
Wright, Julia, Hancock, Peter, Szalma, James, Jentsch, Florian, Chen, Jessie, University of Central Florida
- Abstract / Description
-
This study examined how the transparency of an intelligent agent's reasoning affected complacent behavior in a route selection task in a simulated environment. Also examined was how the information available to the operator affected those results. In two experiments, participants supervised a three-vehicle convoy as it traversed a simulated environment and re-routed the convoy when needed with the assistance of an intelligent agent, RoboLeader. Participants were randomly assigned to an Agent Reasoning Transparency condition. Participants received communications from a commander confirming either the presence or absence of activity in the area. They also received information regarding potential events along their route via icons that appeared on a map displaying the convoy route and surrounding area. Participants in Experiment 1 (low information setting) received information about their current route only; they did not receive any information about the suggested alternate route. Participants in Experiment 2 (high information setting) received information about both their current route and the agent-recommended alternate route. In the first experiment, access to agent reasoning was found to be an effective deterrent to complacent behavior when the operator had limited information about the task environment. However, the addition of information that created ambiguity for the operator encouraged complacency, resulting in reduced performance and poorer trust calibration. Agent reasoning transparency did not increase response time or workload, and appeared to improve performance on the secondary task.
These findings align with studies showing that ambiguous information can increase workload and encourage complacency; as such, caution should be exercised when considering how transparent to make agent reasoning and what information should be included. In the second experiment, access to agent reasoning was found to have little effect on complacent behavior when the operator had complete information about the task environment. However, the addition of information that created ambiguity for the operator appeared to encourage complacency, as indicated by reduced performance and shorter decision times. Agent reasoning transparency did not increase overall workload, and operators reported higher satisfaction with their performance and reduced mental demand. Access to agent reasoning did not improve operators' secondary task performance, situation awareness, or trust. However, when agent reasoning transparency included ambiguous information, complacent behavior was again encouraged. Unlike the first experiment, there were notable differences in complacent behavior, performance, operator trust, and situation awareness due to individual difference factors. As such, these findings suggest that when the operator has complete information regarding the task environment, access to agent reasoning may be beneficial, but not dramatically so; individual difference factors, however, will greatly influence performance outcomes. The amount of information the operator has regarding the task environment has a profound effect on the proper use of the agent. Increased environmental information resulted in more rejections of the agent's recommendation regardless of the transparency of agent reasoning. Agent reasoning transparency appeared to be effective at keeping the operator engaged, while complacent behavior appeared to be encouraged when agent reasoning was either not transparent or so transparent as to become ambiguous.
Even so, operators reported lower trust in, and lower perceived usability of, the agent than when environmental information was limited. Situation awareness (SA2) scores were also higher in the high information environment when agent reasoning was either not transparent or so transparent as to become ambiguous, compared to the low information environment. However, when a moderate amount of agent reasoning was available to the operator, the amount of information available had no effect on the operators' complacent behavior, subjective trust, or SA. These findings indicate that some negative outcomes resulting from the incongruous transparency of agent reasoning may be mitigated by increasing the information the operator has regarding the task environment.
- Date Issued
- 2016
- Identifier
- CFE0006422, ucf:51469
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006422