GAUSS-NEWTON BASED LEARNING FOR FULLY RECURRENT NEURAL NETWORKS
Title: GAUSS-NEWTON BASED LEARNING FOR FULLY RECURRENT NEURAL NETWORKS
Name(s): Vartak, Aniket Arun (Author); Georgiopoulos, Michael (Committee Chair); University of Central Florida (Degree Grantor)
Type of Resource: text
Date Issued: 2004
Publisher: University of Central Florida
Language(s): English
Abstract/Description: This thesis presents a novel off-line and on-line learning approach for Fully Recurrent Neural Networks (FRNNs). The most popular algorithm for training FRNNs, the Real Time Recurrent Learning (RTRL) algorithm, employs gradient descent to find the optimal weight vectors of the recurrent neural network. Within the framework of this research, a new off-line and on-line variation of RTRL is presented that is based on the Gauss-Newton method. The Gauss-Newton method is an approximate Newton's method tailored to the specific optimization problem at hand, non-linear least squares, and it aims to speed up the process of FRNN training. The new approach is a robust and effective compromise between the original gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). By gathering information over time to form Gauss-Newton search vectors, the new learning algorithm, GN-RTRL, converges faster to a better-quality solution than the original algorithm. Experimental results reflect these qualities of GN-RTRL and indicate that, in practice, GN-RTRL may also incur a lower computational cost than the original RTRL. (A sketch of the update rules behind this trade-off follows the record below.)
Identifier: CFE0000091 (IID), ucf:46065 (fedora)
Note(s): 2004-08-01; M.S.; College of Engineering and Computer Science, Department of Electrical and Computer Engineering. This record was generated from author-submitted information.
Subject(s): Recurrent Neural Networks; RTRL; Least Squares Minimization
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0000091
Restrictions on Access: public
Host Institution: UCF
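For context, the compromise described in the abstract can be made concrete by comparing the standard update rules for a non-linear least-squares cost E(w) = ½‖e(w)‖², where e is the residual vector, J = ∂e/∂w its Jacobian, and η a learning rate. This is generic textbook notation, not necessarily the exact formulation used in the thesis:

```latex
\begin{align*}
  \text{Gradient descent (as in RTRL):} \quad
    \Delta w &= -\eta\, J^{\top} e \\
  \text{Newton:} \quad
    \Delta w &= -\Big( J^{\top} J + \sum\nolimits_k e_k \nabla^2 e_k \Big)^{-1} J^{\top} e \\
  \text{Gauss--Newton:} \quad
    \Delta w &= -\big( J^{\top} J \big)^{-1} J^{\top} e
\end{align*}
```

Gauss-Newton drops the second-order term of the Newton Hessian, so each step is cheaper than full Newton while still converging much faster than plain gradient descent near a least-squares solution, which is the trade-off GN-RTRL exploits. A minimal, self-contained sketch of a generic Gauss-Newton iteration on a hypothetical toy fitting problem (illustrative only, not the thesis's GN-RTRL implementation) might look like:

```python
import numpy as np

# Toy non-linear least-squares problem: fit y = exp(a*x) + b to noisy data.
# Generic Gauss-Newton sketch; the problem and all names are hypothetical.

def residuals(w, x, y):
    a, b = w
    return np.exp(a * x) + b - y              # residual vector e(w)

def jacobian(w, x):
    a, _ = w
    # Columns: partial derivatives of the residuals w.r.t. a and b.
    return np.column_stack((x * np.exp(a * x), np.ones_like(x)))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.exp(0.7 * x) + 0.3 + 0.01 * rng.standard_normal(50)

w = np.array([0.0, 0.0])                      # initial weight guess
for _ in range(20):
    e = residuals(w, x, y)
    J = jacobian(w, x)
    # Gauss-Newton step: minimize ||J d + e||, i.e. solve (J^T J) d = -J^T e.
    # Using lstsq avoids forming J^T J explicitly and is better conditioned.
    d, *_ = np.linalg.lstsq(J, -e, rcond=None)
    w = w + d
    if np.linalg.norm(d) < 1e-10:             # stop once the step is tiny
        break

print(w)                                      # approaches [0.7, 0.3]
```

Note that RTRL's contribution in GN-RTRL is computing the Jacobian-type sensitivity information recurrently over time; the sketch above only shows how such information, once gathered, yields a Gauss-Newton search vector.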