
Complementary Layered Learning


Date Issued:
2014
Abstract/Description:
Layered learning is a machine learning paradigm that develops autonomous robotic agents by decomposing a complex task into simpler subtasks and learning each sequentially. Although the paradigm continues to have success in multiple domains, performance can be unexpectedly unsatisfactory. Using Boolean-logic problems and autonomous agent navigation, we show that poor performance is due to the learner either forgetting earlier learned subtasks too quickly (favoring plasticity) or having difficulty learning new things (favoring stability). We demonstrate that this imbalance can hinder learning so that task performance is no better than that of a sub-optimal learning technique, monolithic learning, which does not use decomposition. Through the resulting analyses, we have identified factors that can lead to imbalance and their negative effects, providing a deeper understanding of stability and plasticity in decomposition-based approaches, such as layered learning. To combat the negative effects of this imbalance, a complementary learning system is applied to layered learning. The new technique augments the original learning approach with dual storage region policies that prevent useful information from being removed from an agent's policy prematurely. In multi-agent experiments, the proposed augmentations yield a 28% task performance increase over the original technique.
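As a rough illustration of the "dual storage region" idea the abstract describes, the toy Python sketch below trains subtasks sequentially and consolidates each learned behavior into a protected store. All function and variable names here are assumptions for illustration, not the dissertation's actual implementation.

```python
# Toy sketch, not the dissertation's method: layered learning trains
# subtasks one after another on a shared policy; the complementary
# augmentation adds a second, protected storage region so earlier
# subtask knowledge is not erased prematurely.

def train_subtask(policy, subtask, steps=100):
    # Toy "learner": repeatedly nudge this subtask's weight upward.
    # Nothing here protects earlier entries, so in plain layered
    # learning later training is free to overwrite them (plasticity).
    for _ in range(steps):
        policy[subtask] = policy.get(subtask, 0.0) + 0.01
    return policy

def complementary_layered_learning(subtasks):
    plastic = {}  # fast region: updated freely during learning
    stable = {}   # slow region: consolidated knowledge, never overwritten
    for subtask in subtasks:
        plastic = train_subtask(plastic, subtask)
        # Consolidate the newly learned behavior into the stable region
        # so later layers cannot remove it from the agent's policy.
        stable.setdefault(subtask, plastic[subtask])
    # The agent acts from both regions, preferring consolidated values.
    return {**plastic, **stable}

policy = complementary_layered_learning(
    ["avoid_obstacles", "seek_goal", "navigate"])
```

The key design point is that consolidation happens per layer: each subtask's learned value is copied into the stable region as soon as its layer finishes, which is what guards against the forgetting the abstract attributes to excess plasticity.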
Title: Complementary Layered Learning.
Name(s): Mondesire, Sean, Author
Wu, Annie, Committee Chair
Wiegand, Rudolf, Committee Co-Chair
Sukthankar, Gita, Committee Member
Proctor, Michael, Committee Member
University of Central Florida, Degree Grantor
Type of Resource: text
Publisher: University of Central Florida
Language(s): English
Identifier: CFE0005213 (IID), ucf:50626 (fedora)
Note(s): 2014-05-01
Ph.D.
Engineering and Computer Science, Computer Science
Doctoral
This record was generated from author-submitted information.
Subject(s): Machine Learning -- Reinforcement Learning -- Layered Learning -- Evolutionary Computation -- Q-learning -- Forgetting -- Stability-Plasticity Dilemma
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0005213
Restrictions on Access: public 2014-05-15
Host Institution: UCF
