Current Search: Robotics
-
-
Title
-
An Exploration of Unmanned Aerial Vehicle Direct Manipulation Through 3D Spatial Interaction.
-
Creator
-
Pfeil, Kevin, Laviola II, Joseph, Hughes, Charles, Sukthankar, Gita, University of Central Florida
-
Abstract / Description
-
We present an exploration that surveys the strengths and weaknesses of various 3D spatial interaction techniques, in the context of directly manipulating an Unmanned Aerial Vehicle (UAV). In particular, a study of touch- and device-free interfaces in this domain is provided. 3D spatial interaction can be achieved using hand-held motion control devices such as the Nintendo Wiimote, but computer vision systems offer a different and perhaps more natural method. In general, 3D user interfaces (3DUI) enable a user to interact with a system on a more robust and potentially more meaningful scale. We discuss the design and development of various 3D interaction techniques using commercially available computer vision systems, and provide an exploration of the effects that these techniques have on the overall user experience in the UAV domain. Specific qualities of the user experience are targeted, including perceived intuitiveness, ease of use, comfort, and others. We present a complete user study for upper-body gestures, and preliminary reactions towards 3DUI using hand-and-finger gestures are also discussed. The results provide evidence that supports the use of 3DUI in this domain, as well as the use of certain styles of techniques over others.
-
Date Issued
-
2013
-
Identifier
-
CFE0004910, ucf:49612
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004910
-
-
Title
-
THE EFFECTS ON OPERATOR PERFORMANCE AND WORKLOAD WHEN GUNNERY AND ROBOTIC CONTROL TASKS ARE PERFORMED CONCURRENTLY.
-
Creator
-
Joyner, Carla, McCauley-Bell, Pamela, University of Central Florida
-
Abstract / Description
-
The purpose of this research was to examine operator workload and performance in a high-risk, multi-task environment. Specifically, the research examined whether a gunner of a Future Combat System, such as a Mounted Combat System, could effectively detect targets in the immediate environment while concurrently operating robotic assets in a remote environment. It also analyzed possible effects of individual difference factors, such as spatial ability and attentional control, on operator performance and workload. The experimental conditions included a gunner baseline and concurrent task conditions where participants simultaneously performed gunnery tasks and one of the following tasks: monitor an unmanned ground vehicle (UGV) via a video feed (Monitor), manage a semi-autonomous UGV, or teleoperate a UGV (Teleop). The analysis showed that the asset condition significantly impacted gunnery performance, with the gunner baseline having the highest number of targets detected (M = 13.600, SD = 2.353) and the concurrent Teleop condition the lowest (M = 9.325, SD = 2.424). The research also found that high spatial ability participants tended to detect more targets than low spatial ability participants. Robotic task performance was also affected by the asset condition. The results showed that the robotic target detection rate was lower for the concurrent task conditions. A significant difference was seen between the UGV-baseline condition (80.1%), when participants performed UGV tasks only, and the UGV-concurrent conditions (67.5%), when participants performed UGV tasks concurrently with gunnery tasks. Overall, this study revealed that there were performance decrements for the gunnery tasks as well as the robotic tasks when the tasks were performed concurrently.
-
Date Issued
-
2006
-
Identifier
-
CFE0000979, ucf:46704
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000979
-
-
Title
-
EFFECT OF A HUMAN-TEACHER VS. A ROBOT-TEACHER ON HUMAN LEARNING: A PILOT STUDY.
-
Creator
-
Smith, Melissa, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
Studies about the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has been focused on methods that would allow robots to learn from humans, and very little has been done on how and what, if possible, humans could learn from programmed robots. A between-subjects experiment was conducted in which two groups were compared: a group where the participants learned a simple pick-and-place block task via video of a human-teacher, and a group where the participants learned the same pick-and-place block task via video from a robotic-teacher. After being taught the task, the participants performed a 15-minute distracter task and then were timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed that there was no significant difference in the rebuild scores of the two groups, but there was a marginally significant difference in the rebuild times of the two groups. Exit survey results, research implications, and future work are discussed.
-
Date Issued
-
2011
-
Identifier
-
CFH0004068, ucf:44809
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004068
-
-
Title
-
ATTRIBUTIONS OF BLAME IN A HUMAN-ROBOT INTERACTION SCENARIO.
-
Creator
-
Scholcover, Federico, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
This thesis worked towards answering the following question: Where, if at all, do the beliefs and behaviors associated with interacting with a nonhuman agent deviate from how we treat a human? This was done by exploring the inter-related fields of Human-Computer and Human-Robot Interaction in the literature review, viewing them through the theoretical lens of anthropomorphism. A study was performed which looked at how 104 participants would attribute blame in a robotic surgery scenario, as detailed in a vignette. The majority of results were statistically non-significant; however, some results emerged which may imply a diffusion of responsibility in human-robot collaboration scenarios.
-
Date Issued
-
2014
-
Identifier
-
CFH0004587, ucf:45224
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004587
-
-
Title
-
AUTONOMOUS ROBOTIC GRASPING IN UNSTRUCTURED ENVIRONMENTS.
-
Creator
-
Jabalameli, Amirhossein, Behal, Aman, Haralambous, Michael, Pourmohammadi Fallah, Yaser, Boloni, Ladislau, Xu, Yunjun, University of Central Florida
-
Abstract / Description
-
A crucial problem in robotics is interacting with known or novel objects in unstructured environments. While the convergence of a multitude of research advances is required to address this problem, our goal is to describe a framework that employs the robot's visual perception to identify and execute an appropriate grasp to pick and place novel objects. Analytical approaches search for solutions through kinematic and dynamic formulations. On the other hand, data-driven methods retrieve grasps according to their prior knowledge of either the target object, human experience, or information obtained from acquired data. In this dissertation, we propose a framework based on the supporting principle that potential contacting regions for a stable grasp can be found by searching for (i) sharp discontinuities and (ii) regions of locally maximal principal curvature in the depth map. In addition to suggestions from empirical evidence, we discuss this principle by applying the concepts of force closure and wrench convexes. The key point is that no prior knowledge of objects is utilized in the grasp planning process; however, the obtained results show that the approach is capable of dealing successfully with objects of different shapes and sizes. We believe that the proposed work is novel because describing the visible portion of objects by the aforementioned edges appearing in the depth map facilitates the process of grasp set-point extraction, in the same way as image processing methods focused on small 2D image areas rather than clustering and analyzing huge sets of 3D point-cloud coordinates. In fact, this approach dispenses with reconstruction of objects. These features result in low computational costs and make it possible to run the proposed algorithm in real time. Finally, the performance of the approach is successfully validated by applying it to scenes with both single and multiple objects, in both simulation and real-world experiment setups.
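The supporting principle above (searching the depth map for sharp discontinuities and for regions of locally maximal principal curvature) can be illustrated with a minimal sketch. This is not the dissertation's implementation; the thresholds, the Hessian-based curvature approximation, and the function name find_candidate_regions are assumptions made for illustration only.

    import numpy as np

    def find_candidate_regions(depth, edge_thresh=0.02, curv_thresh=0.5):
        """Flag pixels that are candidate grasp contact regions in a depth map.

        Two cues, following the abstract: (i) sharp depth discontinuities and
        (ii) locally high principal curvature, approximated here from the
        Hessian of the depth image. Both thresholds are illustrative guesses.
        """
        # (i) depth discontinuities: large first-order gradient magnitude
        gy, gx = np.gradient(depth)
        discontinuity = np.hypot(gx, gy) > edge_thresh

        # (ii) curvature proxy: larger-magnitude eigenvalue of the 2x2 Hessian
        gxy, gxx = np.gradient(gx)
        gyy, _ = np.gradient(gy)
        tr = gxx + gyy
        det = gxx * gyy - gxy ** 2
        disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
        principal = np.maximum(np.abs(tr / 2.0 + disc), np.abs(tr / 2.0 - disc))
        high_curvature = principal > curv_thresh

        return discontinuity | high_curvature

    # usage with a synthetic depth map: a box standing on a flat table
    depth = np.full((120, 160), 1.0)
    depth[40:80, 60:100] = 0.8          # the raised object creates depth edges
    mask = find_candidate_regions(depth)
    print("candidate pixels:", int(mask.sum()))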
-
Date Issued
-
2019
-
Identifier
-
CFE0007892, ucf:52757
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007892
-
-
Title
-
Evaluating Human-Robot Implicit Communication through Human-Human Implicit Communication.
-
Creator
-
Richardson, Andrew, Karwowski, Waldemar, Hancock, Peter, Shumaker, Randall, Reinerman, Lauren, University of Central Florida
-
Abstract / Description
-
Human-Robot Interaction (HRI) research is examining ways to make human-robot (HR) communication more natural. Incorporating natural communication techniques is expected to make HR communication seamless and more natural for humans. Humans naturally incorporate implicit levels of communication, and including implicit communication in HR communication should provide tremendous benefit. The aim of this work was to evaluate a model for human-robot implicit communication. Specifically, the primary goal for this research was to determine whether humans can assign meanings to implicit cues received from autonomous robots as they do for identical implicit cues received from humans. An experiment was designed to allow participants to assign meanings to identical, implicit cues (pursuing, retreating, investigating, hiding, patrolling) received from humans and robots. Participants were tasked to view random video clips of both entity types, label the implicit cue, and assign a level of confidence in their chosen answer. Physiological data were tracked during the experiment using an electroencephalogram and an eye-tracker. Participants answered workload and stress measure questionnaires following each scenario. Results revealed that participants were significantly more accurate with human cues (84%) than with robot cues (82%); however, participants were highly accurate, above 80%, for both entity types. Despite the high accuracy for both types, participants remained significantly more confident in answers for humans (6.1) than for robots (5.9) on a confidence scale of 1 to 7. Subjective measures showed no significant differences in stress or mental workload across entities. Physiological measures were not significant for the engagement index across entities, but robots resulted in significantly higher levels of cognitive workload for participants via the index of cognitive activity. The results of this study revealed that participants are more confident interpreting human implicit cues than identical cues received from a robot. However, the accuracy of interpreting both entities remained high. Participants also showed no significant difference in interpreting different cues across entities. Therefore, much of the ability to interpret an implicit cue resides in the actual cue rather than the entity. Proper training should boost confidence as humans begin to work alongside autonomous robots as teammates, and it is possible to train humans to recognize cues based on the movement, regardless of the entity demonstrating the movement.
-
Date Issued
-
2012
-
Identifier
-
CFE0004429, ucf:49352
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004429
-
-
Title
-
Smart Grasping using Laser and Tactile Array Sensors for UCF-MANUS- An Intelligent Assistive Robotic Manipulator.
-
Creator
-
Prakash, Kiran, Behal, Aman, Boloni, Ladislau, Haralambous, Michael, University of Central Florida
-
Abstract / Description
-
This thesis presents three improvements in the UCF MANUS Assistive Robotic Manipulator's grasping abilities. First, the robot can now grasp objects that are deformable, heavy, or have uneven contact surfaces without slippage during robotic operations, e.g., a paper cup or a filled water bottle. This is achieved by installing a high-precision non-contacting laser sensor that runs with an algorithm that processes raw input data from the sensor and registers the smallest variation in the relative position of the object with respect to the gripper. Second, the robot can grasp objects that are as light and small as a single cereal grain without deforming them. To achieve this, a MEMS barometer-based tactile sensor array device that can measure forces as small as the equivalent of 1 gram is embedded into the gripper to enhance pressure sensing capabilities. Third, the robot gripper gloves are designed aesthetically and conveniently to accommodate existing and newly added sensors, using 3D printing technology with lightweight ABS plastic as the fabrication material. The newly designed system was evaluated experimentally, and it was found that a high degree of adaptability to different kinds of objects can be attained, with better performance than the previous system.
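A minimal sketch of the slip-detection idea described above: laser range readings are monitored for relative motion of the object with respect to the gripper, and a grip adjustment is triggered when the drift exceeds a threshold. The sensor interface, the tolerance value, and the class name SlipMonitor are assumptions for illustration, not the thesis implementation.

    from collections import deque

    class SlipMonitor:
        """Detect object slip from a stream of laser range readings (meters).

        Drift of the object relative to the gripper larger than `tolerance`
        within the sliding window is treated as the onset of slip.
        """
        def __init__(self, tolerance=0.002, window=10):
            self.tolerance = tolerance
            self.readings = deque(maxlen=window)

        def update(self, distance):
            self.readings.append(distance)
            drift = max(self.readings) - min(self.readings)
            return drift > self.tolerance   # True => command more grip force

    # usage: feed simulated readings; the jump at the end triggers slip handling
    monitor = SlipMonitor()
    for d in [0.0500, 0.0501, 0.0500, 0.0502, 0.0501, 0.0530]:
        if monitor.update(d):
            print("slip detected at reading", d, "- increasing grip force")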
-
Date Issued
-
2016
-
Identifier
-
CFE0006164, ucf:51119
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006164
-
-
Title
-
Individual Differences in Trust Toward Robotic Assistants.
-
Creator
-
Sanders, Tracy, Hancock, Peter, Mouloua, Mustapha, Szalma, James, Behal, Aman, University of Central Florida
-
Abstract / Description
-
This work on trust in human-robot interaction describes three experiments from which a set of predictive models is developed. Previous work in trust and robotics has examined HRI components related to robots extensively, but there has been little research quantifying the influence of individual differences in trust on HRI. The present work seeks to fill that void by measuring individual differences across a variety of conditions, including differences in robot characteristics and environments. The models produced indicate that the main individual factors predicting trust in robotics include pre-existing attitudes towards robots, interpersonal trust, and personality traits.
-
Date Issued
-
2016
-
Identifier
-
CFE0006843, ucf:51776
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006843
-
-
Title
-
Towards Improving Human-Robot Interaction For Social Robots.
-
Creator
-
Khan, Saad, Boloni, Ladislau, Behal, Aman, Sukthankar, Gita, Garibay, Ivan, Fiore, Stephen, University of Central Florida
-
Abstract / Description
-
Autonomous robots interacting with humans in a social setting must consider the social-cultural environment when pursuing their objectives. Thus the social robot must perceive and understand the social-cultural environment in order to be able to explain and predict the actions of its human interaction partners. This dissertation contributes to the emerging field of human-robot interaction for social robots in the following ways: 1. We used the social calculus technique based on culture-sanctioned social metrics (CSSMs) to quantify, analyze, and predict the behavior of the robot, human soldiers, and the public perception in the Market Patrol peacekeeping scenario. 2. We validated the results of the Market Patrol scenario by comparing the predicted values with the judgment of a large group of human observers cognizant of the modeled culture. 3. We modeled the movement of a socially aware mobile robot in dense crowds, using the concept of a micro-conflict to represent the challenge of giving or not giving way to pedestrians. 4. We developed an approach for the robot behavior in micro-conflicts based on the psychological observation that human opponents will use a consistent strategy. For this, the mobile robot classifies the opponent strategy reflected by the personality and social status of the person and chooses an appropriate counter-strategy that takes into account the urgency of the robot's mission. 5. We developed an alternative approach for the resolution of micro-conflicts based on imitation of the behavior of the human agent. This approach aims to make the behavior of an autonomous robot closely resemble that of a remotely operated one.
-
Date Issued
-
2015
-
Identifier
-
CFE0005965, ucf:50819
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005965
-
-
Title
-
A VIRTUAL REALITY VISUALIZATION OF AN ANALYTICAL SOLUTION TO MOBILE ROBOT TRAJECTORY GENERATION IN THE PRESENCE OF MOVING OBSTACLES.
-
Creator
-
Elias, Ricardo, Qu, Zhihua, University of Central Florida
-
Abstract / Description
-
Virtual visualization of mobile robot analytical trajectories while avoiding moving obstacles is presented in this thesis as a very helpful technique to properly display and communicate simulation results. Analytical solutions to the path planning problem of mobile robots in the presence of obstacles and a dynamically changing environment have been presented in the current robotics and controls literature. These techniques have been demonstrated using two-dimensional graphical representation of simulation results. In this thesis, the analytical solution published by Dr. Zhihua Qu in December 2004 is used and simulated using a virtual visualization tool called VRML.
-
Date Issued
-
2007
-
Identifier
-
CFE0001575, ucf:47118
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001575
-
-
Title
-
APPLYING THE APPRAISAL THEORY OF EMOTION TO HUMAN-AGENT INTERACTION.
-
Creator
-
Pepe, Aaron, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
Autonomous robots are increasingly being used in everyday life: cleaning our floors, entertaining us, and supplementing soldiers on the battlefield. As emotion is a key ingredient in how we interact with others, it is important that our emotional interaction with these new entities be understood. This dissertation proposes using the appraisal theory of emotion (Roseman, Scherer, Schorr, & Johnstone, 2001) to investigate how we understand and evaluate situations involving this new breed of robot. This research involves two studies; in the first study an experimental method was used in which participants interacted with a live dog, a robotic dog, or a non-anthropomorphic robot to attempt to accomplish a set of tasks. The appraisals of motive consistent / motive inconsistent (the task was performed correctly/incorrectly) and high / low perceived control (the teammate was well trained/not well trained) were manipulated to show the practicality of using appraisal theory as a basis for human-robot interaction studies. Robot form was investigated for its influence on the emotions experienced. Finally, the influence of high and low control on the experience of positive emotions caused by another was investigated. Results show that a human-robot live interaction test bed is a valid way to influence participants' appraisals. Manipulation checks of motive consistent / motive inconsistent, high / low perceived control, and the proper appraisal of cause were significant. Form was shown to influence both the positive and negative emotions experienced: the more lifelike agents were rated higher in positive emotions and lower in negative emotions. The emotion gratitude was shown to be greater during conditions of low control when the entities performed correctly, suggesting that more experiments should be conducted investigating agent-caused motive-conducive events. A second study was performed with participants evaluating their reaction to a hypothetical story. In this story they were interacting with either a human, a robotic dog, or a robot to complete a task. These three agent types and high/low perceived control were manipulated, with all stories ending successfully. Results indicated that gratitude and appreciation are sensitive to the manipulation of agent type. It is suggested that, based on the results of these studies, the emotion gratitude should be added to Roseman et al.'s (2001) appraisal theory to describe the emotion felt during low-control, motive-consistent, other-caused events. These studies have also shown that the appraisal theory of emotion is useful in the study of human-robot and human-animal interactions.
-
Date Issued
-
2007
-
Identifier
-
CFE0001819, ucf:47351
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001819
-
-
Title
-
A MULTI-OBJECTIVE NO-REGRET DECISION MAKING MODEL WITH BAYESIAN LEARNING FOR AUTONOMOUS UNMANNED SYSTEMS.
-
Creator
-
Howard, Matthew, Qu, Zhihua, University of Central Florida
-
Abstract / Description
-
The development of a multi-objective decision making and learning model for use in unmanned systems is the focus of this project. Starting with traditional game theory and psychological learning theories developed in the past, a new model for machine learning is developed. This model incorporates a no-regret decision making model with a Bayesian learning process that has the ability to adapt to errors found in preconceived costs associated with each objective. This learning ability is what sets this model apart from many others. By creating a model based on previously developed human learning models, hundreds of years of experience in these fields can be applied to the recently developing field of machine learning. This also allows operators to more comfortably adapt to the machine's learning process in order to better understand how to take advantage of its features. One of the main purposes of this system is to incorporate multiple objectives into a decision making process. This feature allows users to clearly define and prioritize objectives, letting the system calculate the best approach for completing the mission. For instance, if an operator is given objectives such as obstacle avoidance, safety, and limiting resource usage, the operator would traditionally be required to decide how to meet all of these objectives. The use of a multi-objective decision making process such as the one designed in this project allows the operator to input the objectives and their priorities and receive as output the calculated optimal compromise.
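The pairing described above (a no-regret decision rule over weighted objectives whose cost estimates are corrected by Bayesian-style learning) can be sketched roughly as follows. This is an illustrative regret-matching loop with a running-mean cost update, not the model developed in the thesis; the objective weights, cost matrix, and noise model are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Three candidate actions scored against three objectives (obstacle
    # avoidance, safety, resource usage). True per-objective costs are
    # unknown; the agent starts from rough prior estimates and refines them.
    ACTIONS, OBJECTIVES = 3, 3
    true_cost = np.array([[0.2, 0.5, 0.3],
                          [0.6, 0.1, 0.4],
                          [0.4, 0.4, 0.1]])
    estimates = np.full((ACTIONS, OBJECTIVES), 0.5)  # prior mean cost estimates
    counts = np.ones((ACTIONS, OBJECTIVES))          # pseudo-observations behind the prior
    weights = np.array([0.5, 0.3, 0.2])              # operator-assigned priorities
    cum_regret = np.zeros(ACTIONS)

    for t in range(500):
        # regret matching: play actions in proportion to positive cumulative regret
        pos = np.maximum(cum_regret, 0.0)
        probs = pos / pos.sum() if pos.sum() > 0 else np.full(ACTIONS, 1.0 / ACTIONS)
        a = rng.choice(ACTIONS, p=probs)

        # observe noisy per-objective costs for the chosen action
        obs = true_cost[a] + rng.normal(0, 0.05, OBJECTIVES)

        # Bayesian-style update: posterior (running) mean of the cost estimate
        counts[a] += 1
        estimates[a] += (obs - estimates[a]) / counts[a]

        # weighted scalar loss of every action under the current estimates
        losses = estimates @ weights
        cum_regret += losses[a] - losses     # regret of not having played each action

    print("estimated best action:", int(np.argmin(estimates @ weights)))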
-
Date Issued
-
2008
-
Identifier
-
CFE0002453, ucf:47711
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002453
-
-
Title
-
INDOOR GEO-LOCATION AND TRACKING OF MOBILE AUTONOMOUS ROBOT.
-
Creator
-
Ramamurthy, Mahesh, Schiavone, Guy, University of Central Florida
-
Abstract / Description
-
The field of robotics has always been one of fascination, ever since the days of the Terminator. Even though we still do not have robots that can actually replicate human action and intelligence, progress is being made in the right direction. Robotic applications range from defense to civilian, in public safety and fire fighting. With the increase in urban warfare, robot tracking inside buildings and in cities forms a very important application. The numerous applications range from munitions tracking to replacing soldiers for reconnaissance. Fire fighters use robots to survey the affected area. Tracking robots has been limited to the local area under consideration. Decision making is inhibited due to limited local knowledge, and approximations have to be made. Effective decision making would involve tracking the robot in earth coordinates such as latitude and longitude. The GPS signal provides sufficient and reliable data for such decision making. The main drawback of using GPS is that it is unavailable indoors and that there is signal attenuation outdoors. Indoor geolocation forms the basis of tracking robots inside buildings and in other places where GPS signals are unavailable. Indoor geolocation has traditionally been the field of wireless networks, using techniques such as low-frequency RF signals and ultra-wideband antennas. In this thesis we propose a novel method for achieving geolocation and enabling tracking. Geolocation and tracking are achieved by a combination of a gyroscope and encoders, together referred to as the Inertial Navigation System (INS). Gyroscopes have been widely used in aerospace applications for stabilizing aircraft. In our case we use the gyroscope as a means of determining the heading of the robot. Further, commands can be sent to the robot when it is off balance or off track. Sensors are inherently error prone; hence the process of geolocation is complicated and limited by the imperfect mathematical modeling of input noise. We make use of a Kalman filter for processing the erroneous sensor data, as it provides a robust and stable algorithm. The error characteristics of the sensors are input to the Kalman filter and filtered data are obtained. We have performed a large set of experiments, both indoors and outdoors, to test the reliability of the system. Outdoors, we used the GPS signal to aid the INS measurements. Indoors, we utilize the last known position and extrapolate to obtain the GPS coordinates.
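A minimal sketch of the kind of INS/GPS fusion described above: dead reckoning from the encoders and gyroscope in the Kalman filter's predict step, corrected by a GPS position fix in the update step. The state layout, noise covariances, and function names are assumptions for illustration, not the thesis implementation.

    import numpy as np

    # State: planar position [x, y]. Prediction uses dead reckoning from the
    # encoders (distance traveled) and the gyroscope (heading); the update
    # step corrects the estimate whenever a GPS fix is available.
    x = np.zeros(2)                 # position estimate
    P = np.eye(2) * 1.0             # estimate covariance
    Q = np.eye(2) * 0.05            # process noise (encoder/gyro drift), assumed
    R = np.eye(2) * 4.0             # GPS measurement noise, assumed
    H = np.eye(2)                   # GPS measures position directly

    def predict(distance, heading):
        """Dead-reckoning prediction from encoder distance and gyro heading."""
        global x, P
        x = x + distance * np.array([np.cos(heading), np.sin(heading)])
        P = P + Q

    def gps_update(z):
        """Fuse a GPS fix z = [x_gps, y_gps] into the estimate."""
        global x, P
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

    # usage: drive north-east for five steps, then receive one GPS fix
    for _ in range(5):
        predict(distance=1.0, heading=np.pi / 4)
    gps_update(np.array([3.2, 3.9]))
    print("fused position estimate:", x)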
-
Date Issued
-
2005
-
Identifier
-
CFE0000506, ucf:46451
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000506
-
-
Title
-
Transparency and Communication Patterns in Human-Robot Teaming.
-
Creator
-
Lakhmani, Shan, Barber, Daniel, Jentsch, Florian, Reinerman, Lauren, Guznov, Svyatoslav, University of Central Florida
-
Abstract / Description
-
In anticipation of the complex, dynamic battlefields of the future, military operations are increasingly demanding robots with increased autonomous capabilities to support soldiers. Effective communication is necessary to establish a common ground on which human-robot teamwork can be established across the continuum of military operations. However, the types and formats of communication for mixed-initiative collaboration are still not fully understood. This study explores two approaches to communication in human-robot interaction, transparency and communication pattern, and examines how manipulating these elements with a robot teammate affects its human counterpart in a collaborative exercise. Participants were coupled with a computer-simulated robot to perform a cordon-and-search-like task. A human-robot interface provided different transparency types (information about the robot's decision making process alone, or about both the robot's decision making process and its prediction of the human teammate's decision making process) and different communication patterns (either conveying information to the participant, or both conveying information to and soliciting information from the participant). This experiment revealed that participants found robots that both conveyed and solicited information to be more animate, likeable, and intelligent than their less interactive counterparts, but working with those robots led to more misses in a target classification task. Furthermore, the act of responding to the robot led to a reduction in the number of correct identifications made, but only when the robot was solely providing information about its own decision making process. Findings from this effort inform the design of next-generation visual displays supporting human-robot teaming.
-
Date Issued
-
2019
-
Identifier
-
CFE0007481, ucf:52674
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007481
-
-
Title
-
An Analysis of Robot-Assisted Social-Communication Instruction for Young Children with Autism Spectrum Disorders.
-
Creator
-
Donehower, Claire, Vasquez, Eleazar, Dieker, Lisa, Marino, Matthew, Correa, Vivian, University of Central Florida
-
Abstract / Description
-
Social and communication deficits are a core feature of Autism Spectrum Disorders (ASD) and impact an individual's ability to be a full participant in their school environment and community. The increase in the number of students with ASD in schools, combined with the use of ineffective interventions, has created a critical need for quality social-communication instruction in schools for this population. Technology-based interventions, like robots, have the potential to greatly impact students with disabilities, including students with ASD, who tend to show increased interest and engagement in technology-based tasks and materials. While research on the use of robots with these learners is limited, these technologies have been successfully used to teach basic social-communication skills. The purpose of this study was to examine the effects of a social-communication intervention for young children with ASD that is rooted in evidence-based practices and utilizes a surrogate interactive robot as the primary interventionist. This study utilized a multiple baseline design across behaviors to determine the impact of the robot-assisted intervention on the manding, tacting, and intraverbal skills of four 3-year-old students with ASD. The researchers found that this intervention was effective in increasing the rate of all three target behaviors.
-
Date Issued
-
2017
-
Identifier
-
CFE0006736, ucf:51852
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006736
-
-
Title
-
Investigation of Tactile Displays for Robot to Human Communication.
-
Creator
-
Barber, Daniel, Reinerman, Lauren, Jentsch, Florian, Lackey, Stephanie, Leonessa, Alexander, University of Central Florida
-
Abstract / Description
-
Improvements in autonomous systems technology and a growing demand within military operations are spurring a revolution in Human-Robot Interaction (HRI). These mixed-initiative human-robot teams are enabled by Multi-Modal Communication (MMC), which supports redundancy and levels of communication that are more robust than single-mode interaction (Bischoff & Graefe, 2002; Partan & Marler, 1999). Tactile communication via vibrotactile displays is an emerging technology, potentially beneficial to advancing HRI. Incorporation of tactile displays within MMC requires developing messages equivalent in communication power to the speech and visual signals used in the military. Toward that end, two experiments were performed to investigate the feasibility of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence structure for robot-to-human communication of messages. Experiment one evaluated tactons from the literature with standardized parameters, grouped into categories (directional, dynamic, and static) based on the nature and meaning of the patterns, to inform the design of a tactile syntax. Findings of this experiment revealed that directional tactons performed better than non-directional tactons; therefore, a syntax for experiment two composed of a non-directional and a directional tacton was more likely to show performance better than chance. Experiment two tested the syntax structure of equally performing tactons identified from experiment one, revealing participants' ability to interpret tactile sentences better than chance, with or without the presence of an independent work imperative task. This finding advanced the state of the art in tactile displays from one-word to two-word phrases, facilitating inclusion of the tactile modality within MMC for HRI.
-
Date Issued
-
2012
-
Identifier
-
CFE0004778, ucf:49800
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004778
-
-
Title
-
Autonomous Quadcopter Videographer.
-
Creator
-
Coaguila Quiquia, Rey, Sukthankar, Gita, Wu, Annie, Hughes, Charles, University of Central Florida
-
Abstract / Description
-
In recent years, interest in quadcopters as a robotics platform for autonomous photography has increased. This is due to their small size and mobility, which allow them to reach places that are difficult or even impossible for humans. This thesis focuses on the design of an autonomous quadcopter videographer, i.e., a quadcopter capable of capturing good footage of a specific subject. In order to obtain this footage, the system needs to choose appropriate vantage points and control the quadcopter. Skilled human videographers can easily spot good filming locations where the subject and its actions can be seen clearly in the resulting video footage, but translating this knowledge to a robot can be complex. We present an autonomous system, implemented on a commercially available quadcopter, that achieves this using only monocular information and an accelerometer. Our system has two vantage point selection strategies: 1) a reactive approach, which moves the robot to a fixed location with respect to the human, and 2) a combination of the reactive approach and a POMDP planner that considers the target's movement intentions. We compare the behavior of these two approaches under different target movement scenarios. The results show that the POMDP planner obtains more stable footage with less quadcopter motion.
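The reactive strategy mentioned above can be illustrated with a small geometric sketch: the desired vantage point is a fixed offset in front of the subject, with the camera yawed back toward the subject. The offset distance, hover height, and function name are assumptions for illustration; the thesis system derives the target pose from onboard monocular tracking.

    import math

    def reactive_vantage_point(subject_xy, subject_heading, distance=3.0, height=1.5):
        """Place the camera at a fixed offset in front of the subject.

        Returns the desired quadcopter position (x, y, z) and the yaw needed
        to keep the subject centered in the frame.
        """
        sx, sy = subject_xy
        cam_x = sx + distance * math.cos(subject_heading)
        cam_y = sy + distance * math.sin(subject_heading)
        yaw = math.atan2(sy - cam_y, sx - cam_x)   # look back at the subject
        return (cam_x, cam_y, height), yaw

    # usage: subject at the origin facing along +x; camera hovers 3 m ahead
    pos, yaw = reactive_vantage_point((0.0, 0.0), 0.0)
    print("goal position:", pos, "camera yaw (rad):", round(yaw, 3))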
-
Date Issued
-
2015
-
Identifier
-
CFE0005592, ucf:50246
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005592
-
-
Title
-
Task Focused Robotic Imitation Learning.
-
Creator
-
Abolghasemi, Pooya, Boloni, Ladislau, Sukthankar, Gita, Shah, Mubarak, Willenberg, Bradley, University of Central Florida
-
Abstract / Description
-
For many years, successful applications of robotics were the domain of controlled environments, such as industrial assembly lines. Such environments are custom designed for the convenience of the robot and separated from human operators. In recent years, advances in artificial intelligence, in particular deep learning and computer vision, have allowed researchers to successfully demonstrate robots that operate in unstructured environments and directly interact with humans. One of the major applications of such robots is in assistive robotics. For instance, a wheelchair-mounted robotic arm can help disabled users in the performance of activities of daily living (ADLs) such as feeding and personal grooming. Early systems relied entirely on the control of the human operator, something that is difficult to accomplish by a user with motor and/or cognitive disabilities. In this dissertation, we describe research results that advance the field of assistive robotics. The overall goal is to improve the ability of the wheelchair / robotic arm assembly to help the user with the performance of ADLs while requiring only high-level commands from the user. Consider an ADL involving the manipulation of an object in the user's home. This task can be naturally decomposed into two components: the movement of the wheelchair such that the manipulator can conveniently grasp the object, and the movement of the manipulator itself. For the first component, this dissertation provides an approach for addressing the challenge of finding a position appropriate for the required manipulation. We introduce the ease-of-reach score (ERS), a metric that quantifies preferences for the positioning of the base while taking into consideration the shape and position of obstacles and clutter in the environment. As the brute-force computation of the ERS is computationally expensive, we propose a machine learning approach to estimate the ERS based on features and characteristics of the obstacles. This dissertation addresses the second component as well: the ability of the robotic arm to manipulate objects. Recent work in end-to-end learning of robotic manipulation has demonstrated that a deep learning-based controller of vision-enabled robotic arms can be taught to manipulate objects from a moderate number of demonstrations. However, current state-of-the-art systems are limited in robustness to physical and visual disturbances and do not generalize well to new objects. We describe new techniques based on task-focused attention that show significant improvement in the robustness of manipulation and performance in clutter.
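The ease-of-reach score estimation described above (learning to predict a reachability preference from obstacle features rather than computing it by brute force) can be sketched with a generic regressor on synthetic data. The feature set, the scoring rule used to generate labels, and the choice of regressor are assumptions made for this sketch; the dissertation's actual features and model may differ.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Synthetic features of a candidate base position: distance to nearest
    # obstacle (m), local clutter density, and free angular span (rad).
    n = 2000
    features = np.column_stack([
        rng.uniform(0.2, 2.0, n),
        rng.uniform(0.0, 1.0, n),
        rng.uniform(0.0, np.pi, n),
    ])
    # Toy "brute-force" ERS labels: easier to reach when the base is far from
    # obstacles, in light clutter, with a wide free span (plus label noise).
    ers = 0.5 * features[:, 0] - 0.8 * features[:, 1] + 0.3 * features[:, 2]
    ers += rng.normal(0, 0.05, n)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(features[:1500], ers[:1500])
    print("held-out R^2:", round(model.score(features[1500:], ers[1500:]), 3))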
-
Date Issued
-
2019
-
Identifier
-
CFE0007771, ucf:52392
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007771
-
-
Title
-
Learning to Grasp Unknown Objects using Weighted Random Forest Algorithm from Selective Image and Point Cloud Feature.
-
Creator
-
Iqbal, Md Shahriar, Behal, Aman, Boloni, Ladislau, Haralambous, Michael, University of Central Florida
-
Abstract / Description
-
This thesis demonstrates an approach to determining the best grasping location on an unknown object using a Weighted Random Forest algorithm. It uses the RGB-D values of an object as input to find a suitable rectangular grasping region as output. To accomplish this task, it uses a subspace of the most important features from a very high-dimensional feature space that contains both image and point cloud features. Using the most important features in the grasping algorithm has enabled the system to be computationally very fast while preserving maximum information gain. In this approach, the Random Forest operates using optimum parameters (e.g., number of trees, number of features at each node, and information gain criterion), which ensures optimized learning with the highest possible accuracy in minimum time in a practical setting. The Weighted Random Forest, chosen over Support Vector Machine (SVM), Decision Tree, and AdaBoost for the implementation of the grasping system, outperforms these machine learning algorithms in both training and testing accuracy and in other performance estimates. The grasping system, utilizing a learned score function, detects the rectangular grasping region by selecting the top rectangle, i.e., the one with the largest score. The system is implemented and tested on a Baxter Research Robot with a parallel-plate gripper.
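A rough sketch of the rectangle-scoring idea above: each candidate grasp rectangle is described by a feature vector, a class-weighted random forest predicts the probability that it is graspable, and the highest-scoring rectangle is chosen. The eight synthetic features, the class weights, and the toy labels are assumptions; scikit-learn's class_weight option stands in for the thesis's Weighted Random Forest.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)

    # Toy training set: 500 labeled rectangles, 8 features each (stand-ins for
    # the selected image and point cloud features).
    X_train = rng.normal(size=(500, 8))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)

    forest = RandomForestClassifier(
        n_estimators=200, class_weight={0: 1.0, 1: 2.0}, random_state=0
    )
    forest.fit(X_train, y_train)

    # Score 20 candidate rectangles and pick the one with the largest score.
    candidates = rng.normal(size=(20, 8))
    scores = forest.predict_proba(candidates)[:, 1]
    best = int(np.argmax(scores))
    print("best rectangle index:", best, "score:", round(float(scores[best]), 3))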
-
Date Issued
-
2014
-
Identifier
-
CFE0005509, ucf:50358
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005509
-
-
Title
-
REAL LONELINESS AND ARTIFICIAL COMPANIONSHIP: LOOKING FOR SOCIAL CONNECTIONS IN TECHNOLOGY.
-
Creator
-
Montalvo, Fernando L, Smither, Janan, University of Central Florida
-
Abstract / Description
-
Loneliness among older adults is a problem with severe consequences to individual health, quality of life, cognitive capacity, and life-expectancy. Although approaches towards improving the quality and quantity of social relationships are the prevailing model of therapy, older adults may not always be able to form these relationships due to either personality factors, decreased mobility, or isolation. Intelligent personal assistants (IPAs), virtual agents, and social robotics offer an opportunity for the development of technology that could potentially serve as social companions to older adults. The present study explored whether an IPA could potentially be used as a social companion to older adults feeling lonely. Additionally, the research explored whether the device has the potential to generate social presence among both young and older adults. Results indicate that while the devices do show some social presence, participants rate the device low on some components of social presence, such as emotional contagion. This adversely affects the possibility of a social relationship between an older adult and the device. Analysis reveals ways to improve social presence in these devices.
-
Date Issued
-
2017
-
Identifier
-
CFH2000186, ucf:46005
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH2000186