Learning robotic manipulation from user demonstrations

Date Issued: 2017
Abstract/Description: Personal robots that help disabled or elderly people in their activities of daily living need to be able to autonomously perform complex manipulation tasks. Traditional approaches to this problem employ task-specific controllers. However, these must be designed by expert programmers, are focused on a single task, and perform the task as programmed rather than according to the preferences of the user. In this dissertation, we investigate methods that enable an assistive robot to learn to execute tasks as demonstrated by the user. First, we describe a learning from demonstration (LfD) method that learns assistive tasks which must be adapted to the position and orientation of the user's head. Then we discuss a recurrent neural network controller that learns to generate movement trajectories for the end-effector of the robot arm to accomplish a task. The input to this controller is the pose of the relevant objects and the current pose of the end-effector itself. Next, we discuss how to extract user preferences from the demonstrations using reinforcement learning. Finally, we extend this controller to one that learns to observe images of the environment and generate joint movements for the robot to accomplish a desired task. We discuss several techniques that improve the performance of the controller and reduce the number of required demonstrations. One of these is multi-task learning: learning multiple tasks simultaneously with the same neural network. Another technique is to make the controller output one joint per time step, thereby conditioning the prediction of each joint on the previously predicted joints. We evaluate these controllers on a set of manipulation tasks and show that they can learn complex tasks, recover from failure, and attempt a task several times until they succeed.
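The per-joint autoregressive scheme mentioned in the abstract (the controller emits one joint per step, so each joint's prediction is conditioned on the joints already predicted) can be illustrated with a minimal sketch. This is a toy with random, untrained weights, not the dissertation's actual network; the class name, dimensions, and single-layer recurrence are all assumptions made for illustration.

```python
import numpy as np

class AutoregressiveController:
    """Toy recurrent controller: at each control step it reads an object
    pose and the current end-effector pose, then predicts the robot's
    joints one at a time, feeding each predicted joint back in so later
    joints are conditioned on earlier ones (illustrative sketch only)."""

    def __init__(self, n_joints=6, pose_dim=6, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        # Input per micro-step: object pose + end-effector pose + last joint value.
        in_dim = 2 * pose_dim + 1
        self.n_joints = n_joints
        self.W_in = rng.normal(0.0, 0.1, (hidden, in_dim))
        self.W_h = rng.normal(0.0, 0.1, (hidden, hidden))
        self.W_out = rng.normal(0.0, 0.1, (1, hidden))
        self.h = np.zeros(hidden)  # recurrent state carried across joints and steps

    def step(self, object_pose, ee_pose):
        """Predict one full joint configuration for the current time step."""
        joints = []
        prev = 0.0  # placeholder fed in before the first joint is predicted
        for _ in range(self.n_joints):
            x = np.concatenate([object_pose, ee_pose, [prev]])
            self.h = np.tanh(self.W_in @ x + self.W_h @ self.h)
            prev = float(self.W_out @ self.h)  # next joint, conditioned on prev ones
            joints.append(prev)
        return np.array(joints)

controller = AutoregressiveController()
# A short trajectory of joint configurations for fixed (dummy) poses.
traj = [controller.step(np.ones(6), np.ones(6)) for _ in range(10)]
```

The point of the factorization is that a joint's value is rarely independent of the others in a coordinated arm movement; predicting joints sequentially lets the network model those dependencies instead of emitting all joints at once.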
Name(s): Rahmatizadeh, Rouhollah, Author
Boloni, Ladislau, Committee Chair
Turgut, Damla, Committee Member
Jha, Sumit Kumar, Committee Member
University of Central Florida, Degree Grantor
Type of Resource: text
Publisher: University of Central Florida
Language(s): English
Identifier: CFE0006908 (IID), ucf:51686 (fedora)
Note(s): 2017-12-01
Ph.D.
Engineering and Computer Science, Computer Science
Doctoral
This record was generated from author submitted information.
Subject(s): Robot Learning -- Learning from Demonstration -- Robot Vision
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0006908
Restrictions on Access: public 2017-12-15
Host Institution: UCF
