Guided Autonomy for Quadcopter Photography

Title: Guided Autonomy for Quadcopter Photography.
Name(s): Alabachi, Saif, Author
Sukthankar, Gita, Committee Chair
Behal, Aman, Committee Member
Lin, Mingjie, Committee Member
Boloni, Ladislau, Committee Member
LaViola II, Joseph, Committee Member
University of Central Florida, Degree Grantor
Type of Resource: text
Date Issued: 2019
Publisher: University of Central Florida
Language(s): English
Abstract/Description: Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to capture elevated perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning Convolutional Neural Network (CNN) object detection models that detect human subjects with higher precision than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; 4) learning robust control policies with deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
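
The subject terms below list reward shaping among the dissertation's techniques. As a rough illustration only, and not the formulation used in the dissertation, a shaped reward for steering a quadcopter toward a requested vantage point might look like the following Python sketch; the function name, arguments, and weights are all assumptions made for this example:

    import numpy as np

    def shaped_reward(drone_pos, target_pos, prev_dist,
                      reached_tol=0.2, progress_weight=1.0,
                      goal_bonus=10.0, step_penalty=0.01):
        # Dense shaped reward: pay out progress toward the target each
        # step instead of only a sparse success signal, which eases
        # deep RL training for goal-reaching tasks.
        dist = float(np.linalg.norm(np.asarray(target_pos) - np.asarray(drone_pos)))
        reward = progress_weight * (prev_dist - dist) - step_penalty
        done = dist < reached_tol      # within tolerance of the vantage point
        if done:
            reward += goal_bonus       # terminal bonus for reaching the goal
        return reward, dist, done

In an episode loop, prev_dist would be initialized from the starting position and updated with the returned dist after each simulator step.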
Identifier: CFE0007774 (IID), ucf:52369 (fedora)
Note(s): 2019-12-01
Ph.D.
Engineering and Computer Science, Electrical and Computer Engineering
Doctoral
This record was generated from author-submitted information.
Subject(s): UAV -- HRI -- Object Detection -- Deep Reinforcement Learning -- Reward Shaping
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0007774
Restrictions on Access: public 2019-12-15
Host Institution: UCF
