Current Search: Human Robot Interaction
- Title
- Modeling and Compensation for Efficient Human Robot Interaction.
- Creator
- Paperno, Nicholas, Behal, Aman, Haralambous, Michael, Boloni, Ladislau, University of Central Florida
- Abstract / Description
- The purpose of this research is, first, to identify the human factors important to performance when operating an assistive robotic manipulator; second, to develop a predictive model that can determine a user's performance based on their known human factors; and third, to develop compensators, based on the important human factors identified, that will help improve user performance and satisfaction. An extensive literature search led to the selection of ten potential human factors to be analyzed, including reaction time, spatial abilities (orientation and visualization), working memory, visual perception, dexterity (gross and fine), depth perception, and visual acuity of both eyes (classified as strongest and weakest). Ninety-three participants were recruited to perform six different pick-and-place and retrieval tasks using an assistive robotic device. During this time, each participant's Time-on-Task, Number-of-Moves, and Number-of-Moves per minute were recorded. From this it was determined that all of the human factors except visual perception were important to at least one aspect of a user's performance. Predictive models were then developed using random forest, linear models, and polynomial models. To compensate for deficiencies in certain human factors, the GUI was redesigned based on a heuristic analysis and user feedback. Multimodal feedback, as well as adjustments in the sensitivity of the input device and a reduction in the robot's speed of movement, were also implemented. A user study with 15 participants found that certain compensators did improve user satisfaction, particularly the multimodal feedback and the sensitivity adjustment. The reduction of speed was met with mixed reviews from the participants.
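The abstract names random forest, linear, and polynomial models for predicting user performance but gives no detail on their form. As a rough illustration only, the sketch below fits a random-forest regressor on synthetic data standing in for human-factor scores and a Time-on-Task outcome; the feature set, values, and hyperparameters are assumptions, not the dissertation's model or data.

```python
# Rough illustration: predicting Time-on-Task from human-factor scores.
# Feature names and values are synthetic placeholders, not study data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 93  # the study recruited 93 participants; the data here is synthetic

# Hypothetical columns: reaction time, spatial orientation, spatial visualization,
# working memory, gross dexterity, fine dexterity, depth perception
X = rng.normal(size=(n, 7))
# Synthetic Time-on-Task (seconds), loosely dependent on two of the factors
y = 120 + 15 * X[:, 0] - 10 * X[:, 2] + rng.normal(scale=5.0, size=n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

model.fit(X, y)
print("feature importances:", model.feature_importances_.round(3))
```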
- Date Issued
- 2016
- Identifier
- CFE0006370, ucf:51504
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006370
- Title
- The Perception and Measurement of Human-Robot Trust.
- Creator
- Schaefer, Kristin, Hancock, Peter, Jentsch, Florian, Kincaid, John, Reinerman, Lauren, Billings, Deborah, Lee, John, University of Central Florida
- Abstract / Description
- As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale that could measure changes in an individual's trust in a robot. Assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments subsumed the development of the 40-item trust scale. Scale development included the creation of a 172-item pool. Two experiments identified the robot features and perceived functional characteristics that were related to the classification of a machine as a robot for this item pool. Item pool reduction techniques and subject matter expert (SME) content validation were used to reduce the scale to 40 items. The two final experiments were then conducted to validate the scale. The finalized 40-item pre-post interaction trust scale was designed to measure trust perceptions specific to HRI. The scale measured trust on a 0-100% rating scale and provided a percentage trust score. A 14-item sub-scale of this final version of the test recommended by SMEs may be sufficient for some HRI tasks, and the implications of this proposition were discussed.
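The abstract describes a 40-item scale rated on a 0-100% scale that yields a percentage trust score and is administered pre- and post-interaction. The snippet below is a minimal sketch of that kind of scoring under the assumption that the score is simply the mean of the item ratings; the scale's actual published scoring procedure may differ.

```python
# Minimal sketch: a percentage trust score from 0-100% item ratings.
# The simple averaging rule is an assumption, not the scale's published scoring.
from statistics import mean

def trust_score(item_ratings):
    """Return the mean of 0-100 item ratings as a percentage trust score."""
    if not all(0 <= r <= 100 for r in item_ratings):
        raise ValueError("item ratings must lie on the 0-100% scale")
    return mean(item_ratings)

pre_items = [70, 65, 80, 75]   # hypothetical pre-interaction ratings (item subset)
post_items = [85, 80, 90, 88]  # hypothetical post-interaction ratings

delta = trust_score(post_items) - trust_score(pre_items)
print(f"pre={trust_score(pre_items):.1f}%  post={trust_score(post_items):.1f}%  change={delta:+.1f}")
```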
- Date Issued
- 2013
- Identifier
- CFE0004931, ucf:49634
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004931
- Title
- THE IMPACT OF MENTAL TRANSFORMATION TRAINING ACROSS LEVELS OF AUTOMATION ON SPATIAL AWARENESS IN HUMAN-ROBOT INTERACTION.
- Creator
- Rehfeld, Sherri, Jentsch, Florian, University of Central Florida
- Abstract / Description
- One of the problems affecting robot operators' spatial awareness involves their ability to infer a robot's location based on the views from on-board cameras and other electro-optic systems. To understand the vehicle's location, operators typically need to translate images from a vehicle's camera into some other coordinates, such as a location on a map. This translation requires operators to relate the view by mentally rotating it along a number of axes, a task that is both attention-demanding and workload-intensive, and one that is likely affected by individual differences in operator spatial abilities. Because building and maintaining spatial awareness is attention-demanding and workload-intensive, any variable that changes operator workload and attention should be investigated for its effects on operator spatial awareness. One of these variables is the use of automation (i.e., assigning functions to the robot). According to Malleable Attentional Resource Theory (MART), variation in workload across levels of automation affects an operator's attentional capacity to process critical cues like those that enable an operator to understand the robot's past, current, and future location. The study reported here focused on performance aspects of human-robot interaction involving ground robots (i.e., unmanned ground vehicles, or UGVs) during reconnaissance tasks. In particular, this study examined how differences in operator spatial ability and in operator workload and attention interacted to affect spatial awareness during human-robot interaction (HRI). Operator spatial abilities were systematically manipulated through the use of mental transformation training. Additionally, operator workload and attention were manipulated via the use of three different levels of automation (i.e., manual control, decision support, and full automation). Operator spatial awareness was measured by the size of the errors operators made when tasked to infer the robot's location from on-board camera views at three different points in a sequence of robot movements through a simulated military operations in urban terrain (MOUT) environment. The results showed that mental transformation training increased two areas of spatial ability, namely mental rotation and spatial visualization. Further, spatial ability in these two areas predicted performance in vehicle localization during the reconnaissance task. Finally, assistive automation showed a benefit with respect to operator workload, situation awareness, and, subsequently, performance. Together, the results of the study have implications with respect to the design of robots, function allocation between robots and operators, and training for spatial ability. Future research should investigate the interactive effects on operator spatial awareness of spatial ability, spatial ability training, and other variables affecting operator workload and attention.
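The translation described above, relating an on-board camera view to a map location, amounts to a coordinate-frame transformation. The planar form below is a generic illustration (the symbols are not taken from the dissertation): a point seen in the robot's camera frame is rotated through the robot's heading and offset by the robot's map position.

```latex
% A point p_cam = (x_c, y_c) in the robot's camera frame, expressed in map
% coordinates given the robot's map pose (x_r, y_r, \psi):
\begin{equation*}
\begin{pmatrix} x_{map} \\ y_{map} \end{pmatrix}
=
\begin{pmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{pmatrix}
\begin{pmatrix} x_c \\ y_c \end{pmatrix}
+
\begin{pmatrix} x_r \\ y_r \end{pmatrix}
\end{equation*}
```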
- Date Issued
- 2006
- Identifier
- CFE0001468, ucf:47102
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001468
- Title
- MODERATORS OF TRUST AND RELIANCE ACROSS MULTIPLE DECISION AIDS.
- Creator
- Ross, Jennifer, Szalma, James, University of Central Florida
- Abstract / Description
- The present work examined whether users' trust in and reliance on automation were affected by manipulations of users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, by either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. However, when users rely heavily on automation they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator realizes a system is imperfect and fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would impact trust and reliance levels in a companion better aid, but that this relationship is dependent upon the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance and trust on automated teammates depending on the teammates' actual reliability levels. However, as hypothesized, there was a biasing effect among mixed-reliability aids for trust and reliance. That is, when operators worked with two agents of mixed reliability, their perception of how reliable an aid was, and the degree to which they relied on it, was affected by the reliability of the current aid. Additionally, the magnitude and direction of how trust and reliance were biased was contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robot agents). Finally, the type of agent an operator believed they were operating with significantly impacted their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate after that teammate had made an obvious error than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or same abilities and to errors by fellow human observers or robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation, so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, thus leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
- Date Issued
- 2008
- Identifier
- CFE0002077, ucf:47579
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002077
- Title
- EFFECT OF A HUMAN-TEACHER VS. A ROBOT-TEACHER ON HUMAN LEARNING: A PILOT STUDY.
- Creator
- Smith, Melissa, Sims, Valerie, University of Central Florida
- Abstract / Description
- Studies about the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has been focused on methods that would allow robots to learn from humans, and very little has been done on how and what, if possible, humans could learn from programmed robots. A between-subjects experiment was conducted in which two groups were compared: a group in which the participants learned a simple pick-and-place block task via video of a human-teacher, and a group in which the participants learned the same pick-and-place block task via video of a robotic-teacher. After being taught the task, the participants performed a 15-minute distracter task and then were timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed that there was no significant difference in the rebuild scores of the two groups, but there was a marginally significant difference in the rebuild times of the two groups. Exit survey results, research implications, and future work are discussed.
- Date Issued
- 2011
- Identifier
- CFH0004068, ucf:44809
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004068
- Title
- Nonlinear Control Synthesis for Facilitation of Human-Robot Interaction.
- Creator
- Ding, Zhangchi, Behal, Aman, Pourmohammadi Fallah, Yaser, Haralambous, Michael, Boloni, Ladislau, Xu, Yunjun, University of Central Florida
- Abstract / Description
- Human-robot interaction is an area of interest that is becoming increasingly important in robotics research. Nonlinear control design techniques allow researchers to guarantee stability, performance, as well as safety, especially in cases involving physical human-robot interaction (PHRI). In this dissertation, we propose two different nonlinear controllers and detail the design of an assistive robotic system to facilitate human-robot interaction. In Chapter 2, to facilitate physical human-robot interaction, the problem of making a safe compliant contact between a human and an assistive robot is considered. Users with disabilities have a need to utilize their assistive robots for physical interaction during activities such as hair-grooming, scratching, face-sponging, etc. Specifically, we propose a hybrid force/velocity/attitude control for our physical human-robot interaction system which is based on measurements from a force/torque sensor mounted on the robot wrist. While automatically aligning the end-effector surface with the unknown environmental (human) surface, a desired commanded force is applied in the normal direction while following desired velocity commands in the tangential directions. A Lyapunov-based stability analysis is provided to prove both convergence as well as passivity of the interaction to ensure both performance and safety. Simulation as well as experimental results verify the performance and robustness of the proposed hybrid force/velocity/attitude controller in the presence of dynamic uncertainties as well as safety compliance of human-robot interactions for a redundant robot manipulator. Chapter 3 presents the design, analysis, and experimental implementation of an adaptive-control-enabled intelligent algorithm to facilitate 1-click grasping of novel objects by a robotic gripper, since one of the most common types of tasks for an assistive robot is pick-and-place/object retrieval tasks. But there are a variety of objects in our daily life, all of which need a different optimal force to grasp them. This algorithm facilitates automated grasping force adjustment. The use of object-geometry-free modeling coupled with utilization of interaction force and slip velocity measurements allows for the design of an adaptive backstepping controller that is shown to be asymptotically stable via a Lyapunov-based analysis. Experiments with multiple objects using a prototype gripper with embedded sensing show that the proposed scheme is able to effectively immobilize novel objects within the gripper fingers. Furthermore, it is seen that the adaptation allows for close estimation of the minimum grasp force required for safe grasping, which results in minimal deformation of the grasped object. In Chapter 4, we present the design and implementation of the motion controller and adaptive interface for the second generation of the UCF-MANUS intelligent assistive robotic manipulator system. Based on usability testing for the system, several features were implemented in the interface that could reduce the complexity of the human-robot interaction while also compensating for deficits in different human factors, such as Working Memory, Response Inhibition, Processing Speed, Depth Perception, Spatial Ability, and Contrast Sensitivity. For the controller, we designed several new features to provide the user with a less complex and safer interaction with the robot, such as 'One-click mode', 'Move suggestion mode' and 'Gripper Control Assistant'. For the adaptive interface, we designed and implemented compensators such as 'Contrast Enhancement', 'Object Proximity Velocity Reduction' and 'Orientation Indicator'.
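The hybrid force/velocity control described for Chapter 2 splits the command at the contact: force is regulated along the surface normal while velocity is tracked in the tangent plane. The equation below is a generic, textbook-style sketch of such a command; the symbols and the simple proportional force loop are illustrative assumptions, not the dissertation's controller or its Lyapunov analysis.

```latex
% Generic hybrid force/velocity command at the end-effector (illustration only).
% n: estimated unit normal of the contact surface; f_n = n^T F_meas is the measured
% normal force from the wrist force/torque sensor; f_d and v_d are the desired
% contact force and tangential velocity; k_f > 0 is a force-loop gain.
\begin{equation*}
v_{cmd} \;=\;
\underbrace{n \, k_f \, (f_d - f_n)}_{\text{force regulation along the normal}}
\;+\;
\underbrace{(I - n n^{T}) \, v_d}_{\text{velocity tracking in the tangent plane}}
\end{equation*}
```

A separate attitude loop would keep the end-effector surface aligned with the estimated normal n, which is what allows the commanded force to remain normal to the unknown (human) surface.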
- Date Issued
- 2019
- Identifier
- CFE0007798, ucf:52360
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007798
- Title
- ATTRIBUTIONS OF BLAME IN A HUMAN-ROBOT INTERACTION SCENARIO.
- Creator
- Scholcover, Federico, Sims, Valerie, University of Central Florida
- Abstract / Description
- This thesis worked towards answering the following question: Where, if at all, do the beliefs and behaviors associated with interacting with a nonhuman agent deviate from how we treat a human? This was done by exploring the inter-related fields of Human-Computer and Human-Robot Interaction in the literature review, viewing them through the theoretical lens of anthropomorphism. A study was performed which looked at how 104 participants would attribute blame in a robotic surgery scenario, as detailed in a vignette. A majority of results were statistically non-significant; however, some results emerged which may imply a diffusion of responsibility in human-robot collaboration scenarios.
- Date Issued
- 2014
- Identifier
- CFH0004587, ucf:45224
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004587
- Title
- Transparency and Communication Patterns in Human-Robot Teaming.
- Creator
- Lakhmani, Shan, Barber, Daniel, Jentsch, Florian, Reinerman, Lauren, Guznov, Svyatoslav, University of Central Florida
- Abstract / Description
- In anticipation of the complex, dynamic battlefields of the future, military operations are increasingly demanding robots with increased autonomous capabilities to support soldiers. Effective communication is necessary to establish a common ground on which human-robot teamwork can be established across the continuum of military operations. However, the types and format of communication for mixed-initiative collaboration are still not fully understood. This study explores two approaches to communication in human-robot interaction, transparency and communication pattern, and examines how manipulating these elements with a robot teammate affects its human counterpart in a collaborative exercise. Participants were coupled with a computer-simulated robot to perform a cordon-and-search-like task. A human-robot interface provided different transparency types (information about the robot's decision-making process alone, or about the robot's decision-making process and its prediction of the human teammate's decision-making process) and different communication patterns (either conveying information to the participant, or both conveying information to and soliciting information from the participant). This experiment revealed that participants found robots that both conveyed and solicited information to be more animate, likeable, and intelligent than their less interactive counterparts, but working with those robots led to more misses in a target classification task. Furthermore, the act of responding to the robot led to a reduction in the number of correct identifications made, but only when the robot was solely providing information about its own decision-making process. Findings from this effort inform the design of next-generation visual displays supporting human-robot teaming.
- Date Issued
- 2019
- Identifier
- CFE0007481, ucf:52674
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007481
- Title
- Human-Robot Interaction For Multi-Robot Systems.
- Creator
- Lewis, Bennie, Sukthankar, Gita, Hughes, Charles, Laviola II, Joseph, Boloni, Ladislau, Hancock, Peter, University of Central Florida
- Abstract / Description
- Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary through increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
- Date Issued
- 2014
- Identifier
- CFE0005198, ucf:50613
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005198
- Title
- Influence of Task-Role Mental Models on Human Interpretation of Robot Motion Behavior.
- Creator
- Ososky, Scott, Jentsch, Florian, Shumaker, Randall, Fiore, Stephen, Lackey, Stephanie, University of Central Florida
- Abstract / Description
- The transition in robotics from tools to teammates has begun. However, the benefit autonomous robots provide will be diminished if human teammates misinterpret robot behaviors. Applying mental model theory as the organizing framework for human understanding of robots, the current empirical study examined the influence of task-role mental models of robots on the interpretation of robot motion behaviors, and the resulting impact on subjective ratings of robots. Observers (N = 120) were exposed to robot behaviors that were either congruent or incongruent with their task-role mental model, through experimental manipulation of preparatory robot task-role information intended to influence mental models (i.e., security guard, groundskeeper, or no information), the robot's actual task-role behaviors (i.e., security guard or groundskeeper), and the order in which these robot behaviors were presented. The results of the research supported the hypothesis that observers with congruent mental models were significantly more accurate in interpreting the motion behaviors of the robot than observers without a specific mental model. Additionally, an incongruent mental model, under certain circumstances, significantly hindered an observer's interpretation accuracy, resulting in subjective sureness of inaccurate interpretations. The strength of the effects that mental models had on the interpretation and assessment of robot behaviors was thought to have been moderated by the ease with which a particular mental model could reasonably explain the robot's behavior, termed mental model applicability. Finally, positive associations were found between differences in observers' interpretation accuracy and differences in subjective ratings of robot intelligence, safety, and trustworthiness. The current research offers implications for the relationships between mental model components, as well as implications for designing robot behaviors to appear more transparent, or opaque, to humans.
- Date Issued
- 2013
- Identifier
- CFE0005391, ucf:50457
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005391
- Title
- Evaluating Human-Robot Implicit Communication through Human-Human Implicit Communication.
- Creator
- Richardson, Andrew, Karwowski, Waldemar, Hancock, Peter, Shumaker, Randall, Reinerman, Lauren, University of Central Florida
- Abstract / Description
- Human-Robot Interaction (HRI) research is examining ways to make human-robot (HR) communication more natural. Incorporating natural communication techniques is expected to make HR communication seamless and more natural for humans. Humans naturally incorporate implicit levels of communication, and including implicit communication in HR communication should provide tremendous benefit. The aim of this work was to evaluate a model for human-robot implicit communication. Specifically, the primary goal for this research was to determine whether humans can assign meanings to implicit cues received from autonomous robots as they do for identical implicit cues received from humans. An experiment was designed to allow participants to assign meanings to identical, implicit cues (pursuing, retreating, investigating, hiding, patrolling) received from humans and robots. Participants were tasked to view random video clips of both entity types, label the implicit cue, and assign a level of confidence in their chosen answer. Physiological data was tracked during the experiment using an electroencephalogram and eye-tracker. Participants answered workload and stress measure questionnaires following each scenario. Results revealed that participants were significantly more accurate with human cues (84%) than with robot cues (82%); however, participants were highly accurate, above 80%, for both entity types. Despite the high accuracy for both types, participants remained significantly more confident in answers for humans (6.1) than for robots (5.9) on a confidence scale of 1-7. Subjective measures showed no significant differences for stress or mental workload across entities. Physiological measures were not significant for the engagement index across entities, but robots resulted in significantly higher levels of cognitive workload for participants via the index of cognitive activity. The results of this study revealed that participants are more confident interpreting human implicit cues than identical cues received from a robot. However, the accuracy of interpreting both entities remained high. Participants also showed no significant difference in interpreting different cues across entities. Therefore, much of the ability to interpret an implicit cue resides in the actual cue rather than the entity. Proper training should boost confidence as humans begin to work alongside autonomous robots as teammates, and it is possible to train humans to recognize cues based on the movement, regardless of the entity demonstrating the movement.
- Date Issued
- 2012
- Identifier
- CFE0004429, ucf:49352
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004429
- Title
- APPLYING THE APPRAISAL THEORY OF EMOTION TO HUMAN-AGENT INTERACTION.
- Creator
- Pepe, Aaron, Sims, Valerie, University of Central Florida
- Abstract / Description
- Autonomous robots are increasingly being used in everyday life: cleaning our floors, entertaining us, and supplementing soldiers on the battlefield. As emotion is a key ingredient in how we interact with others, it is important that our emotional interaction with these new entities be understood. This dissertation proposes using the appraisal theory of emotion (Roseman, Scherer, Schorr, & Johnstone, 2001) to investigate how we understand and evaluate situations involving this new breed of robot. This research involves two studies; in the first study an experimental method was used in which participants interacted with a live dog, a robotic dog, or a non-anthropomorphic robot to attempt to accomplish a set of tasks. The appraisals of motive-consistent/motive-inconsistent (the task was performed correctly/incorrectly) and high/low perceived control (the teammate was well trained/not well trained) were manipulated to show the practicality of using appraisal theory as a basis for human-robot interaction studies. Robot form was investigated for its influence on emotions experienced. Finally, the influence of high and low control on the experience of positive emotions caused by another was investigated. Results show that a live human-robot interaction test bed is a valid way to influence participants' appraisals. Manipulation checks of motive-consistent/motive-inconsistent, high/low perceived control, and the proper appraisal of cause were significant. Form was shown to influence both the positive and negative emotions experienced: the more lifelike agents were rated higher in positive emotions and lower in negative emotions. The emotion gratitude was shown to be greater during conditions of low control when the entities performed correctly, suggesting that more experiments should be conducted investigating agent-caused motive-conducive events. A second study was performed with participants evaluating their reaction to a hypothetical story. In this story they were interacting with either a human, a robotic dog, or a robot to complete a task. These three agent types and high/low perceived control were manipulated, with all stories ending successfully. Results indicated that gratitude and appreciation are sensitive to the manipulation of agent type. It is suggested that, based on the results of these studies, the emotion gratitude should be added to Roseman et al.'s (2001) appraisal theory to describe the emotion felt during low-control, motive-consistent, other-caused events. These studies have also shown that the appraisal theory of emotion is useful in the study of human-robot and human-animal interactions.
- Date Issued
- 2007
- Identifier
- CFE0001819, ucf:47351
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001819
- Title
- Investigation of Tactile Displays for Robot to Human Communication.
- Creator
- Barber, Daniel, Reinerman, Lauren, Jentsch, Florian, Lackey, Stephanie, Leonessa, Alexander, University of Central Florida
- Abstract / Description
- Improvements in autonomous systems technology and a growing demand within military operations are spurring a revolution in Human-Robot Interaction (HRI). These mixed-initiative human-robot teams are enabled by Multi-Modal Communication (MMC), which supports redundancy and levels of communication that are more robust than single-mode interaction (Bischoff & Graefe, 2002; Partan & Marler, 1999). Tactile communication via vibrotactile displays is an emerging technology, potentially beneficial to advancing HRI. Incorporation of tactile displays within MMC requires developing messages equivalent in communication power to the speech and visual signals used in the military. Toward that end, two experiments were performed to investigate the feasibility of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence structure for robot-to-human communication of messages. Experiment one evaluated tactons from the literature with standardized parameters, grouped into categories (directional, dynamic, and static) based on the nature and meaning of the patterns, to inform design of a tactile syntax. Findings of this experiment revealed that directional tactons showed better performance than non-directional tactons; therefore, a syntax for experiment two composed of a non-directional and a directional tacton was more likely to show performance better than chance. Experiment two tested the syntax structure of equally performing tactons identified from experiment one, revealing participants' ability to interpret tactile sentences better than chance with or without the presence of an independent work imperative task. This finding advanced the state of the art in tactile displays from one- to two-word phrases, facilitating inclusion of the tactile modality within MMC for HRI.
- Date Issued
- 2012
- Identifier
- CFE0004778, ucf:49800
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004778
- Title
- THE EFFECTS ON OPERATOR PERFORMANCE AND WORKLOAD WHEN GUNNERY AND ROBOTIC CONTROL TASKS ARE PERFORMED CONCURRENTLY.
- Creator
- Joyner, Carla, McCauley-Bell, Pamela, University of Central Florida
- Abstract / Description
- The purpose of this research was to examine operator workload and performance in a high-risk, multi-task environment. Specifically, the research examined if a gunner of a Future Combat System, such as a Mounted Combat System, could effectively detect targets in the immediate environment while concurrently operating robotic assets in a remote environment. It also analyzed possible effects of individual difference factors, such as spatial ability and attentional control, on operator performance and workload. The experimental conditions included a gunner baseline and concurrent task conditions in which participants simultaneously performed gunnery tasks and one of the following tasks: monitor an unmanned ground vehicle (UGV) via a video feed (Monitor), manage a semi-autonomous UGV, or teleoperate a UGV (Teleop). The analysis showed that the asset condition significantly impacted gunnery performance, with the gunner baseline having the highest number of targets detected (M = 13.600, SD = 2.353) and the concurrent Teleop condition the lowest (M = 9.325, SD = 2.424). The research also found that high spatial ability participants tended to detect more targets than low spatial ability participants. Robotic task performance was also affected by the asset condition. The results showed that the robotic target detection rate was lower for the concurrent task conditions. A significant difference was seen between the UGV-baseline condition (80.1%), when participants performed UGV tasks only, and the UGV-concurrent conditions (67.5%), when participants performed UGV tasks concurrently with gunnery tasks. Overall, this study revealed that there were performance decrements for the gunnery tasks as well as the robotic tasks when the tasks were performed concurrently.
- Date Issued
- 2006
- Identifier
- CFE0000979, ucf:46704
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000979
- Title
- Individual Differences in Trust Toward Robotic Assistants.
- Creator
- Sanders, Tracy, Hancock, Peter, Mouloua, Mustapha, Szalma, James, Behal, Aman, University of Central Florida
- Abstract / Description
- This work on trust in human-robot interaction describes a series of three experiments from which predictive models are developed. Previous work in trust and robotics has examined HRI components related to robots extensively, but there has been little research to quantify the influence of individual differences in trust on HRI. The present work seeks to fill that void by measuring individual differences across a variety of conditions, including differences in robot characteristics and environments. The models produced indicate that the main individual factors predicting trust in robotics include pre-existing attitudes towards robots, interpersonal trust, and personality traits.
- Date Issued
- 2016
- Identifier
- CFE0006843, ucf:51776
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006843
- Title
- Towards Improving Human-Robot Interaction For Social Robots.
- Creator
- Khan, Saad, Boloni, Ladislau, Behal, Aman, Sukthankar, Gita, Garibay, Ivan, Fiore, Stephen, University of Central Florida
- Abstract / Description
- Autonomous robots interacting with humans in a social setting must consider the social-cultural environment when pursuing their objectives. Thus the social robot must perceive and understand the social-cultural environment in order to be able to explain and predict the actions of its human interaction partners. This dissertation contributes to the emerging field of human-robot interaction for social robots in the following ways: 1. We used the social calculus technique based on culture-sanctioned social metrics (CSSMs) to quantify, analyze, and predict the behavior of the robot, human soldiers, and the public perception in the Market Patrol peacekeeping scenario. 2. We validated the results of the Market Patrol scenario by comparing the predicted values with the judgment of a large group of human observers cognizant of the modeled culture. 3. We modeled the movement of a socially aware mobile robot in dense crowds, using the concept of a micro-conflict to represent the challenge of giving or not giving way to pedestrians. 4. We developed an approach for the robot behavior in micro-conflicts based on the psychological observation that human opponents will use a consistent strategy. For this, the mobile robot classifies the opponent strategy reflected by the personality and social status of the person and chooses an appropriate counter-strategy that takes into account the urgency of the robot's mission. 5. We developed an alternative approach for the resolution of micro-conflicts based on the imitation of the behavior of the human agent. This approach aims to make the behavior of an autonomous robot closely resemble that of a remotely operated one.
- Date Issued
- 2015
- Identifier
- CFE0005965, ucf:50819
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005965
- Title
- THE EFFECTS OF VIDEO FRAME DELAY AND SPATIAL ABILITY ON THE OPERATION OF MULTIPLE SEMIAUTONOMOUS AND TELE-OPERATED ROBOTS.
- Creator
- Sloan, Jared, Stanney, Kay, University of Central Florida
- Abstract / Description
- The United States Army has moved into the 21st century with the intent of redesigning not only the force structure but also the methods by which we will fight and win our nation's wars. Fundamental in this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize exposure of front-line soldiers, the future Army will utilize unmanned assets for both information gathering and, when necessary, engagements. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while only allocating resources to the semi-autonomous robot during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results cannot confirm that frame delay has a positive effect on operator performance, though power may have been an issue, but they clearly show that spatial ability is a strong predictor of performance on robotic asset control, particularly with aerial vehicles. In operating the UAV, the high spatial group was, on average, 30% faster, lazed 12% more targets, and made 43% more location reports than the low spatial group. The implications of this study indicate that system design should judiciously manage workload and capitalize on individual ability to improve performance, and are relevant to system designers, especially in the military community.
- Date Issued
- 2005
- Identifier
- CFE0000430, ucf:46379
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000430
- Title
- IS PERCEIVED INTENTIONALITY OF A VIRTUAL ROBOT INFLUENCED BY THE KINEMATICS?.
- Creator
- Sasser, Jordan, McConnell, Daniel S., University of Central Florida
- Abstract / Description
- Research has shown that in human-human interactions, kinematic information reveals that competitive and cooperative intentions are perceivable, and suggests the existence of a cooperation bias. The present study raises the same question for human-robot interaction by investigating the relationship between the acceleration of a virtual robot within a virtual reality environment and the participant's perception of the situation as cooperative or competitive, attempting to identify the social cues used for those perceptions. Five trial types (mirrored acceleration, faster acceleration, slower acceleration, varied acceleration with a loss, and varied acceleration with a win) were experienced by each participant, randomized within two groups of five for a total of ten events. Results suggest that when the virtual robot's acceleration pattern was faster than the participant's acceleration, the situation was perceived as more competitive. Additionally, results suggest that while the slower acceleration was perceived as more cooperative, the condition was not significantly different from mirrored acceleration. These results may indicate that there is some kinematic information in the faster accelerations that invokes stronger competitive perceptions, whereas slower accelerations and mirrored acceleration may blend together during perception; furthermore, the models used in the slower-acceleration and mirrored-acceleration conditions provide no single identifiable contributor toward perceived cooperativeness, possibly due to a similar cooperation bias. These findings are used as a baseline for understanding movements that can be utilized in the design of better social robotic movements. These movements would improve the interactions between humans and these robots, ultimately improving the robot's ability to help in such situations.
- Date Issued
- 2019
- Identifier
- CFH2000524, ucf:45668
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000524
- Title
- Quantitative Framework For Social Cultural Interactions.
- Creator
- Bhatia, Taranjeet, Boloni, Ladislau, Turgut, Damla, Sukthankar, Gita, Fiore, Stephen, University of Central Florida
- Abstract / Description
- For an autonomous robot or software agent to participate in the social life of humans, it must have a way to perform a calculus of social behavior. Such a calculus must have explanatory power (it must provide a coherent theory for why the humans act the way they do) and predictive power (it must provide some plausible events from the predicted future actions of the humans). This dissertation describes a series of contributions that would allow agents observing or interacting with humans to perform a calculus of social behavior taking into account cultural conventions and socially acceptable behavior models. We discuss the formal components of the model: culture-sanctioned social metrics (CSSMs), concrete beliefs (CBs), and action impact functions. Through a detailed case study of a crooked seller who relies on the manipulation of public perception, we show that the model explains how the exploitation of social conventions allows the seller to finalize transactions, despite the fact that the clients know that they are being cheated. In a separate study, we show how the crooked seller can find an optimal strategy with the use of reinforcement learning. We extend the CSSM model for modeling the propagation of public perception across multiple social interactions. We model the evolution of the public perception both over a single interaction and during a series of interactions over an extended period of time. An important aspect of modeling the public perception is its propagation: how it is affected by the spatio-temporal context of the interaction, and how the short-term and long-term memory of humans affects the overall public perception. We validated the CSSM model through a user study in which participants cognizant of the modeled culture had to evaluate the impact on the social values. The scenarios used in the experiments modeled emotionally charged social situations in a cross-cultural setting and with the presence of a robot. The scenarios modeled conflicts of cross-cultural communication as well as ethical, social, and financial choices. This study allowed us to examine whether people sharing the same culture evaluate CSSMs in the same way (the inter-cultural uniformity conjecture). By presenting a wide range of possible metrics, the study also allowed us to determine whether any given metric can be considered a CSSM in a given culture or not.
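The abstract states that the crooked seller's optimal strategy is found with reinforcement learning but gives no algorithmic detail. The sketch below is a generic tabular Q-learning loop over a toy, invented state/action space; the states, actions, and rewards are placeholders for illustration only and are not the dissertation's CSSM-based model.

```python
# Generic tabular Q-learning sketch (toy stand-in, not the dissertation's model).
# States, actions, transitions, and rewards below are invented placeholders.
import random
from collections import defaultdict

states = ["client_trusting", "client_suspicious"]
actions = ["honest_pitch", "invoke_social_convention"]

def step(state, action):
    """Toy transition/reward function standing in for the seller scenario."""
    if state == "client_trusting":
        reward = 1.0 if action == "invoke_social_convention" else 0.5
        next_state = "client_suspicious" if action == "invoke_social_convention" else "client_trusting"
    else:
        reward = 0.2 if action == "honest_pitch" else -0.5
        next_state = "client_trusting" if action == "honest_pitch" else "client_suspicious"
    return next_state, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)

state = random.choice(states)
for _ in range(10_000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # standard Q-learning update
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

for s in states:
    print(s, "->", max(actions, key=lambda a: Q[(s, a)]))
```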
- Date Issued
- 2016
- Identifier
- CFE0006262, ucf:51047
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006262
- Title
- Supporting situation awareness through robot-to-human information exchanges under conditions of visuospatial perspective taking.
- Creator
- Phillips, Elizabeth, Jentsch, Florian, Sims, Valerie, Bowers, Clint, Shumaker, Randall, University of Central Florida
- Abstract / Description
- The future vision of military Soldier-robot teams is one in which Soldiers and robots work together to complete separate, but interdependent, tasks that advance the goals of the mission. However, in the near term, robots will be limited in their ability to successfully perform tasks without at least occasional assistance from their human teammates. A need exists to design, in robots, mechanisms that can support human situation awareness (SA) regarding the operations of the robot, which humans can use to provide interventions in robot tasks. The purpose of the current study was to test the effects of information exchanges provided by a robot on the development of SA in a human partner, under differing levels of visual perspective taking, and the consequential effects on the quality of human assistance provided to a robot. After data screening, fifty-six male participants ranging in age from 18 to 29 (M = 18.89, SD = 3.412) were included in the analysis of the results. Hierarchical multiple regression and a series of ANOVAs with comparisons between individual within-subjects study conditions were conducted to analyze five hypotheses. The results of this study revealed that if robots, through robot-to-human information exchanges, can increasingly support a human's understanding of when assistance is needed, humans will be better able to provide that assistance. Contrary to what was originally hypothesized, this study also showed that fewer instances in which robots share status information with their human counterparts may be more beneficial for supporting awareness, assistance, and dual-task performance than more information sharing, by guarding against performance decrements that could result from receiving too many robot-to-human information exchanges. It was also thought that anchoring robot-to-human information sharing with spatial information in reference to the human's view of the environment would be most beneficial for supporting awareness regarding the robot and assistance provided to the robot. This notion was not supported. Instead, results suggested that if extra spatial information is added to robot-to-human information exchanges, representing that spatial information in reference to a cardinal, global-relative perspective of the environment may be better for supporting awareness and assistance than representing that information in reference to the human's view of the environment.
- Date Issued
- 2016
- Identifier
- CFE0006162, ucf:51143
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006162