- Title
- THE IMPACT OF MENTAL TRANSFORMATION TRAINING ACROSS LEVELS OF AUTOMATION ON SPATIAL AWARENESS IN HUMAN-ROBOT INTERACTION.
- Creator
- Rehfeld, Sherri; Jentsch, Florian; University of Central Florida
- Abstract / Description
- One of the problems affecting robot operators' spatial awareness involves their ability to infer a robot's location based on the views from on-board cameras and other electro-optic systems. To understand the vehicle's location, operators typically need to translate images from a vehicle's camera into some other coordinates, such as a location on a map. This translation requires operators to relate the view by mentally rotating it about a number of axes, a task that is both attention-demanding and workload-intensive, and one that is likely affected by individual differences in operator spatial abilities. Because building and maintaining spatial awareness is attention-demanding and workload-intensive, any variable that changes operator workload and attention should be investigated for its effects on operator spatial awareness. One of these variables is the use of automation (i.e., assigning functions to the robot). According to Malleable Attentional Resource Theory (MART), variation in workload across levels of automation affects an operator's attentional capacity to process critical cues, such as those that enable an operator to understand the robot's past, current, and future location. The study reported here focused on performance aspects of human-robot interaction (HRI) involving ground robots (i.e., unmanned ground vehicles, or UGVs) during reconnaissance tasks. In particular, it examined how differences in operator spatial ability and in operator workload and attention interacted to affect spatial awareness during HRI. Operator spatial abilities were systematically manipulated through mental transformation training. Additionally, operator workload and attention were manipulated via three different levels of automation (i.e., manual control, decision support, and full automation). Operator spatial awareness was measured by the size of the errors operators made when tasked to infer the robot's location from on-board camera views at three different points in a sequence of robot movements through a simulated military operations in urban terrain (MOUT) environment. The results showed that mental transformation training improved two areas of spatial ability, namely mental rotation and spatial visualization. Further, spatial ability in these two areas predicted vehicle localization performance during the reconnaissance task. Finally, assistive automation showed a benefit with respect to operator workload, situation awareness, and, subsequently, performance. Together, the results of the study have implications for the design of robots, function allocation between robots and operators, and training for spatial ability. Future research should investigate the interactive effects on operator spatial awareness of spatial ability, spatial-ability training, and other variables affecting operator workload and attention.
- Date Issued
- 2006
- Identifier
- CFE0001468, ucf:47102
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001468
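The camera-to-map translation described in the abstract above reduces, in the planar case, to rotating a camera-frame offset by the robot's heading and translating by the robot's map position, and spatial awareness was scored by the size of localization errors. Below is a minimal Python sketch of both; the function and variable names are hypothetical and not drawn from the thesis.

    import math

    def camera_to_map(robot_pose, offset_cam):
        """Convert an offset seen in the robot's camera frame into map coordinates.

        robot_pose: (x, y, heading_rad) of the robot on the map.
        offset_cam: (forward, left) displacement of a landmark in the camera frame.
        This is the frame rotation that operators otherwise perform mentally.
        """
        x, y, theta = robot_pose
        fwd, left = offset_cam
        # Rotate the camera-frame offset by the robot's heading, then translate.
        mx = x + fwd * math.cos(theta) - left * math.sin(theta)
        my = y + fwd * math.sin(theta) + left * math.cos(theta)
        return (mx, my)

    def localization_error(inferred, actual):
        """Euclidean distance between an operator's inferred robot position and
        the true position, i.e., the kind of error measure used to score
        spatial awareness."""
        return math.hypot(inferred[0] - actual[0], inferred[1] - actual[1])

    # Example: robot at (10, 5) facing 90 degrees; a landmark 3 m ahead, 1 m left.
    print(camera_to_map((10.0, 5.0, math.pi / 2), (3.0, 1.0)))  # -> (9.0, 8.0)
    print(localization_error((9.5, 7.5), (9.0, 8.0)))           # -> ~0.71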
- Title
- Guided Autonomy for Quadcopter Photography.
- Creator
- Alabachi, Saif; Sukthankar, Gita; Behal, Aman; Lin, Mingjie; Boloni, Ladislau; Laviola II, Joseph; University of Central Florida
- Abstract / Description
- Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an unmanned aerial vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-robot interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better convolutional neural network (CNN) object detection models to ensure higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; and 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
- Date Issued
- 2019
- Identifier
- CFE0007774, ucf:52369
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007774
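The guided-autonomy loop in the record above couples a CNN detector with flight control. A proportional visual-servoing step is one simple way to turn detector output into flight commands; the sketch below illustrates that general idea only, with a normalized bounding-box interface and gains that are assumptions, not the dissertation's implementation (which uses learned RL policies).

    from dataclasses import dataclass

    @dataclass
    class BBox:
        cx: float    # box center x, normalized to [0, 1]
        cy: float    # box center y, normalized to [0, 1]
        area: float  # box area as a fraction of the image

    def servo_step(box, target_area=0.10, k_yaw=1.2, k_alt=0.8, k_fwd=0.5):
        """One proportional visual-servoing step toward a detected subject.

        Returns (yaw_rate, climb_rate, forward_speed) commands that steer the
        quadcopter so the subject is centered and fills `target_area` of the
        frame. The gains are illustrative, not tuned values.
        """
        yaw_rate = k_yaw * (box.cx - 0.5)            # turn toward the subject
        climb_rate = k_alt * (0.5 - box.cy)          # image y grows downward
        forward = k_fwd * (target_area - box.area)   # approach until large enough
        return yaw_rate, climb_rate, forward

    # Example: subject detected right of center, slightly high, still small.
    print(servo_step(BBox(cx=0.7, cy=0.4, area=0.03)))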
- Title
- THE ROLE OF THEORY OF MIND IN HUMAN-ROBOT INTERACTION.
- Creator
- Jaramillo, Isabella; McConnell, Daniel; University of Central Florida
- Abstract / Description
- Theory of Mind (ToM) has repeatedly been defined as the ability to understand that others hold beliefs based on their own subjective interpretations and experiences, and that their thoughts are formed independently of one's own. In this study, we wanted to see whether individual differences in ToM produce different perceptions of interactions with human-like robotics, whether such differences account for different levels of what is called the "Uncanny Valley" phenomenon, and whether a fully developed theory of mind is essential to the perception of the interaction. This was assessed by asking whether individuals with Autism Spectrum Disorder (ASD) perceive robotics and artificially intelligent technology in the same ways that typically developed individuals do; we focused on the growing use of social robotics in ASD therapies. Studies have indicated that differences in ToM exist between individuals with ASD and those who are typically developed. Similarly, we were curious to see whether differences in empathy levels also accounted for differences in ToM, and thus for differences in the perception of human-like robotics. A robot image rating survey was administered to a group of University of Central Florida students, along with two further surveys - the Autism Spectrum Quotient (ASQ) and the Basic Empathy Scale (BES) - which helped provide a measure of theory of mind. Although the results of this study did not support the claim that individuals with ASD experience the uncanny valley differently than typically developed individuals, the results were significant enough to conclude that different levels of empathy may account for individual differences in the uncanny valley. Participants with low empathy seemed to experience less of an uncanny valley response, while participants with higher recorded empathy appeared to experience greater uncanny valley sensitivity.
- Date Issued
- 2015
- Identifier
- CFH0004858, ucf:45457
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004858
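The empathy finding in the record above amounts to an association between empathy scores and uncanny valley sensitivity. A minimal sketch of such an analysis follows; the numbers are invented for illustration and are not the study's data.

    from statistics import correlation  # Python 3.10+

    # Hypothetical per-participant Basic Empathy Scale (BES) scores and mean
    # eeriness ratings for the most human-like robot images (higher rating =
    # stronger uncanny response). Illustrative values only.
    empathy = [42, 55, 61, 48, 70, 65, 39, 58]
    eeriness = [2.1, 3.4, 3.9, 2.8, 4.5, 4.1, 1.9, 3.6]

    # A positive coefficient would mirror the reported pattern: higher empathy,
    # greater uncanny valley sensitivity.
    print(f"r = {correlation(empathy, eeriness):.2f}")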
- Title
- TOWARD BUILDING A SOCIAL ROBOT WITH AN EMOTION-BASED INTERNAL CONTROL AND EXTERNAL COMMUNICATION TO ENHANCE HUMAN-ROBOT INTERACTION.
- Creator
- Marpaung, Andreas; Lisetti, Christine; University of Central Florida
- Abstract / Description
- In this thesis, we aim to model some aspects of the functional role of emotions in an autonomous embodied agent. We begin by describing our robotic prototype, Cherry, a robot tasked with being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation for internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, in which we found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height, more robust robot, Petra the PeopleBot, where we began to build a formal emotion mechanism and representation for internal states to correspond to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose the design of the sensory-motor level (the first layer of the three-layered architecture), inspired by the Multilevel Process Theory of Emotion on the one hand and hybrid robotic architectures on the other. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work in modeling the three-layered architecture of the Emotion State Generator.
- Date Issued
- 2004
- Identifier
- CFE0000286, ucf:46228
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000286
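The sensory-motor level described in the record above fuzzifies incoming stimuli and maps them directly to emotion-like states, with no planning or learning. The sketch below shows that general pattern; the membership functions, rules, and state names are invented for illustration and do not reproduce the thesis's design.

    def tri(x, a, b, c):
        """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def emotion_like_state(obstacle_dist_m, face_confidence):
        """Fuzzify two stimuli (sonar range, face-detector confidence) into
        emotion-like activations, in the spirit of a fuzzy sensory-motor level."""
        near = tri(obstacle_dist_m, 0.0, 0.3, 1.0)     # obstacle very close
        social = tri(face_confidence, 0.4, 1.0, 1.6)   # confident face detection
        return {
            "anxious": near,                  # close obstacles raise anxiety
            "happy": social * (1.0 - near),   # faces please when the path is clear
            "neutral": max(0.0, 1.0 - near - social),
        }

    # Example: an obstacle at 0.5 m while a face is detected with 0.9 confidence.
    print(emotion_like_state(obstacle_dist_m=0.5, face_confidence=0.9))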