Current Search: Neural Networks
- Title
- Effective Task Transfer Through Indirect Encoding.
- Creator
-
Verbancsics, Phillip, Stanley, Kenneth, Sukthankar, Gita, Georgiopoulos, Michael, Garibay, Ivan, University of Central Florida
- Abstract / Description
-
An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird's eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such a representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, is increased in precision and culminates in a HyperNEAT extension that is expressed at effectively infinite resolution. Additionally, scaling to higher resolutions through gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain. (A sketch of the position-based BEV encoding follows this record.)
- Date Issued
- 2011
- Identifier
- CFE0004174, ucf:49071
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004174
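The abstract above turns on one representational idea: in the bird's eye view, state attaches to positions on a fixed two-dimensional map rather than to per-object input slots, so adding players leaves the input shape unchanged. A minimal sketch of that encoding, assuming an invented grid size, field size, and label scheme (none of which come from the dissertation):

```python
import numpy as np

def bev_encode(positions, labels, grid=20, field=25.0):
    """Rasterize object positions onto a fixed 2D map, one channel per label.

    Because state attaches to map cells rather than to object slots,
    3 vs. 2 and 4 vs. 3 Keepaway produce inputs of identical shape.
    """
    channels = sorted(set(labels))
    bev = np.zeros((len(channels), grid, grid))
    for (x, y), lab in zip(positions, labels):
        i = min(int(x / field * grid), grid - 1)
        j = min(int(y / field * grid), grid - 1)
        bev[channels.index(lab), i, j] = 1.0
    return bev

# 3 vs. 2 and 4 vs. 3 yield the same (channels, 20, 20) input shape.
three_v_two = bev_encode([(5, 5), (10, 12), (20, 8), (12, 12), (15, 15)],
                         ["keeper"] * 3 + ["taker"] * 2)
four_v_three = bev_encode([(5, 5), (10, 12), (20, 8), (3, 20),
                           (12, 12), (15, 15), (18, 4)],
                          ["keeper"] * 4 + ["taker"] * 3)
assert three_v_two.shape == four_v_three.shape
```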
- Title
- Worldwide Infrastructure for Neuroevolution: A Modular Library to Turn Any Evolutionary Domain into an Online Interactive Platform.
- Creator
-
Szerlip, Paul, Stanley, Kenneth, Laviola II, Joseph, Wu, Annie, Kim, Joo, University of Central Florida
- Abstract / Description
-
Across many scientific disciplines, there has emerged an open opportunity to utilize the scale and reach of the Internet to collect scientific contributions from scientists and non-scientists alike. This process, called citizen science, has already shown great promise in the fields of biology and astronomy. Within the fields of artificial life (ALife) and evolutionary computation (EC), experiments in collaborative interactive evolution (CIE) have demonstrated the ability to collect thousands of experimental contributions from hundreds of users across the globe. However, such collaborative evolutionary systems can take nearly a year to build with a small team of researchers. This dissertation introduces a new developer framework enabling researchers to easily build fully persistent online collaborative experiments around almost any evolutionary domain, thereby reducing the time to create such systems to weeks for a single researcher. To add collaborative functionality to any potential domain, this framework, called the Worldwide Infrastructure for Neuroevolution (WIN), exploits an important unifying principle among all evolutionary algorithms: regardless of the overall methods and parameters of the evolutionary experiment, every individual created has an explicit parent-child relationship, wherein one individual is considered the direct descendant of another. This principle alone is enough to capture and preserve the relationships and results for a wide variety of evolutionary experiments, while allowing multiple human users to meaningfully contribute. The WIN framework is first validated through two experimental domains, image evolution and a new two-dimensional virtual creature domain, Indirectly Encoded SodaRace (IESoR), which is shown to produce a visually diverse variety of ambulatory creatures. Finally, an Android application built with WIN, #filters, allows users to interactively evolve custom image effects to apply to personalized photographs, thereby introducing the first CIE application available for any mobile device. Together, these collaborative experiments and the new mobile application establish a comprehensive new platform for evolutionary computation that can change how researchers design and conduct citizen science online. (A sketch of the parent-child lineage principle follows this record.)
- Date Issued
- 2015
- Identifier
- CFE0005889, ucf:50892
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005889
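WIN's unifying principle, as the abstract puts it, is that every individual has an explicit parent-child relationship regardless of the evolutionary method. A hedged sketch of the domain-agnostic lineage store that principle suggests; the class, fields, and UUID scheme are illustrative assumptions, not the WIN API:

```python
import uuid

class LineageStore:
    """Records parent-child links for any evolutionary domain.

    Domain-agnostic: genomes are opaque payloads; only ancestry is
    interpreted, which is what lets many users contribute branches
    to one shared, persistent experiment.
    """
    def __init__(self):
        self.individuals = {}

    def add(self, genome, parent_id=None, user="anonymous"):
        iid = str(uuid.uuid4())
        self.individuals[iid] = {"genome": genome,
                                 "parent": parent_id,
                                 "user": user}
        return iid

    def ancestry(self, iid):
        """Walk back through parents to reconstruct a lineage."""
        chain = []
        while iid is not None:
            chain.append(iid)
            iid = self.individuals[iid]["parent"]
        return chain

store = LineageStore()
root = store.add({"weights": [0.1, 0.2]})
child = store.add({"weights": [0.1, 0.25]}, parent_id=root, user="alice")
assert store.ancestry(child) == [child, root]
```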
- Title
- On the design and performance of cognitive packets over wired networks and mobile ad hoc networks.
- Creator
-
Lent, Marino Ricardo, Gelenbe, Erol, Engineering and Computer Science
- Abstract / Description
-
University of Central Florida College of Engineering Thesis; This dissertation studied cognitive packet networks (CPN), which build networked learning systems that support adaptive, quality-of-service-driven routing of packets in wired networks and in wireless mobile ad hoc networks.
- Date Issued
- 2003
- Identifier
- CFR0001374, ucf:52931
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFR0001374
- Title
- HIGH PERFORMANCE DATA MINING TECHNIQUES FOR INTRUSION DETECTION.
- Creator
-
Siddiqui, Muazzam Ahmed, Lee, Joohan, University of Central Florida
- Abstract / Description
-
The rapid growth of computers transformed the way in which information and data were stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed which scrutinize the data for a deviation from the normal behavior of a user or system, or search for a known signature within the data. These systems are termed Intrusion Detection Systems (IDS). These systems employ different techniques varying from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software, or network devices. These sources produce huge datasets with tens of millions of records in them. To analyze this data, data mining is used, a process of digging useful patterns out of a large bulk of information. A major obstacle in the process is that traditional data mining and learning algorithms are overwhelmed by the bulk volume and complexity of available data. This makes these algorithms impractical for time-critical tasks like intrusion detection because of their large execution time. Our approach to this issue makes use of high performance data mining techniques to expedite the process by exploiting the parallelism in the existing data mining algorithms and the underlying hardware. We show how high performance and parallel computing can be used to scale the data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks. We evaluate the performance of the developed models in terms of speedup over traditional algorithms, prediction rate, and false alarm rate. Our results showed that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, which makes them well suited for time-critical tasks like intrusion detection. (A sketch of the data-parallel training idea follows this record.)
- Date Issued
- 2004
- Identifier
- CFE0000056, ucf:46142
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000056
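The parallel backpropagation described above rests on data parallelism: shards of the training set produce local gradients that are combined. A toy single-process sketch with a linear model, where each shard stands in for a cluster node (the model, learning rate, and shard count are assumptions for illustration):

```python
import numpy as np

def shard_gradient(w, X, y):
    """Gradient of MSE for a linear model on one data shard."""
    err = X @ w - y
    return 2.0 * X.T @ err / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))
y = X @ np.arange(8.0) + rng.normal(scale=0.1, size=10_000)

w = np.zeros(8)
shards = np.array_split(np.arange(len(y)), 4)  # one shard per worker/node
for _ in range(200):
    # Each call would run on a separate cluster node; averaging the
    # equal-size shard gradients equals the full-batch gradient here.
    grads = [shard_gradient(w, X[s], y[s]) for s in shards]
    w -= 0.05 * np.mean(grads, axis=0)

assert np.allclose(w, np.arange(8.0), atol=0.05)
```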
- Title
- GENETICALLY ENGINEERED ADAPTIVE RESONANCE THEORY (ART) NEURAL NETWORK ARCHITECTURES.
- Creator
-
Al-Daraiseh, Ahmad, Georgiopoulos, Michael, University of Central Florida
- Abstract / Description
-
Fuzzy ARTMAP (FAM) is currently considered to be one of the premier neural network architectures for solving classification problems. One of the limitations of Fuzzy ARTMAP that has been extensively reported in the literature is the category proliferation problem: Fuzzy ARTMAP has the tendency of increasing its network size as it is confronted with more and more data, especially if the data is of a noisy and/or overlapping nature. To remedy this problem a number of researchers have designed modifications to the training phase of Fuzzy ARTMAP that had the beneficial effect of reducing this phenomenon. In this thesis we propose a new approach to handle the category proliferation problem in Fuzzy ARTMAP by evolving trained FAM architectures. We refer to the resulting FAM architectures as GFAM. We demonstrate through extensive experimentation that an evolved FAM (GFAM) exhibits good (sometimes optimal) generalization, small size (sometimes optimal size), and requires reasonable computational effort to produce an optimal or sub-optimal network. Furthermore, comparisons of the GFAM with other approaches proposed in the literature that address the FAM category proliferation problem illustrate that the GFAM has a number of advantages (i.e., it produces smaller or equal size architectures, of better or as good generalization, with reduced computational complexity). Furthermore, in this dissertation we have extended the approach used with Fuzzy ARTMAP to other ART architectures, such as Ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM), that also suffer from the ART category proliferation problem. Thus, we have designed and experimented with genetically engineered EAM and GAM architectures, named GEAM and GGAM. Comparisons of GEAM and GGAM with other ART architectures that were introduced in the ART literature, addressing the category proliferation problem, illustrate advantages similar to those observed for GFAM (i.e., GEAM and GGAM produce smaller size ART architectures, of better or improved generalization, with reduced computational complexity). Moreover, to optimally cover the input space of a problem, we proposed a genetically engineered ART architecture that combines the category structures of two different ART networks, FAM and EAM. We named this architecture UART (Universal ART). We analyzed the order of search in UART, that is, the order according to which a FAM category or an EAM category is accessed in UART. This analysis allowed us to better understand UART's functionality. Experiments were also conducted to compare UART with other ART architectures, in a similar fashion as GFAM and GEAM were compared. Similar conclusions were drawn from this comparison as in the comparison of GFAM and GEAM with other ART architectures. Finally, we analyzed the computational complexity of the genetically engineered ART architectures and compared it with the computational complexity of other ART architectures introduced into the literature. This analytical comparison verified our claim that the genetically engineered ART architectures produce better generalization and smaller ART structures at reduced computational complexity compared to other ART approaches. In review, a methodology was introduced for combining the answers (categories) of ART architectures using genetic algorithms. This methodology was successfully applied to FAM, EAM, and combined FAM/EAM architectures, resulting in ART neural networks which outperformed other ART architectures previously introduced into the literature, and which quite often attained optimal classification results at reduced computational complexity. (A toy sketch of evolving a trained network's category set follows this record.)
- Date Issued
- 2006
- Identifier
- CFE0000977, ucf:46696
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000977
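GFAM's core move is to evolve an already-trained network so that proliferated categories are pruned while generalization is preserved. A toy genetic loop in that spirit, with a nearest-prototype classifier standing in for a full Fuzzy ARTMAP and an invented accuracy-minus-size fitness (both are assumptions, not the thesis's operators):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for trained FAM categories: prototype vectors with labels.
protos = rng.uniform(size=(30, 2))
proto_lab = (protos[:, 0] > 0.5).astype(int)
X = rng.uniform(size=(300, 2)); y = (X[:, 0] > 0.5).astype(int)

def accuracy(mask):
    if not mask.any():
        return 0.0
    d = ((X[:, None, :] - protos[None, mask, :]) ** 2).sum(-1)
    return (proto_lab[mask][d.argmin(1)] == y).mean()

def fitness(mask):  # reward accuracy, penalize category count
    return accuracy(mask) - 0.004 * mask.sum()

pop = rng.random((20, len(protos))) < 0.8  # chromosomes: category subsets
for gen in range(40):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]          # truncation selection
    kids = parents[rng.integers(0, 10, 10)].copy()
    flip = rng.random(kids.shape) < 0.05          # bit-flip mutation
    kids[flip] = ~kids[flip]
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"{best.sum()} categories kept, accuracy {accuracy(best):.2f}")
```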
- Title
- Predicting Students' Academic Performance with Decision Tree and Neural Network.
- Creator
-
Feng, Junshuai, Jha, Sumit Kumar, Zhang, Wei, Zhang, Shaojie, University of Central Florida
- Abstract / Description
-
Educational Data Mining (EDM) is a developing research field that involves many techniques to explore data relating to educational background. EDM can analyze and resolve educational data with computational methods to address educational questions. Similar to EDM, neural networks have been utilized in widespread and successful data mining applications. In this paper, synthetic datasets are employed since this paper aims to explore methodologies such as decision tree classifiers and neural networks to predict student performance in the context of EDM. First, it introduces EDM and some related work that has been accomplished previously in this field, along with their datasets and computational results. Then, it demonstrates how the synthetic student dataset is generated, analyzes some input attributes from the dataset such as gender and high school GPA, and presents visualization results to determine which classification approaches are the most efficient. After testing the data with decision tree classifiers and neural network methodologies, it assesses the effectiveness of both approaches in terms of model evaluation performance and discusses some of the most promising future work of this research. (A minimal version of this comparison follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007455, ucf:52680
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007455
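As a concrete taste of the methodology, here is a minimal decision-tree-versus-neural-network comparison on synthetic student records; the two attributes and the pass-probability rule are invented stand-ins for the paper's synthetic data generator:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
hs_gpa = rng.uniform(2.0, 4.0, n)
gender = rng.integers(0, 2, n)
# Hypothetical rule: pass probability rises with high-school GPA.
passed = (rng.random(n) < (hs_gpa - 1.5) / 3.0).astype(int)

X = np.column_stack([hs_gpa, gender])
Xtr, Xte, ytr, yte = train_test_split(X, passed, random_state=0)

tree = DecisionTreeClassifier(max_depth=4).fit(Xtr, ytr)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(Xtr, ytr)
print("tree:", tree.score(Xte, yte), "net:", net.score(Xte, yte))
```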
- Title
- Visionary Ophthalmics: Confluence of Computer Vision and Deep Learning for Ophthalmology.
- Creator
-
Morley, Dustin, Foroosh, Hassan, Bagci, Ulas, Gong, Boqing, Mohapatra, Ram, University of Central Florida
- Abstract / Description
-
Ophthalmology is a medical field ripe with opportunities for meaningful application of computer vision algorithms. The field utilizes data from multiple disparate imaging techniques, ranging from conventional cameras to tomography, comprising a diverse set of computer vision challenges. Computer vision has a rich history of techniques that can adequately meet many of these challenges. However, the field has undergone something of a revolution in recent times as deep learning techniques have sprung into the forefront following advances in GPU hardware. This development raises important questions regarding how to best leverage insights from both modern deep learning approaches and more classical computer vision approaches for a given problem. In this dissertation, we tackle challenging computer vision problems in ophthalmology using methods all across this spectrum. Perhaps our most significant work is a highly successful iris registration algorithm for use in laser eye surgery. This algorithm relies on matching features extracted from the structure tensor and a Gabor wavelet, a classically driven approach that does not utilize modern machine learning. However, drawing on insight from the deep learning revolution, we demonstrate successful application of backpropagation to optimize the registration significantly faster than the alternative of relying on finite differences. Towards the other end of the spectrum, we also present a novel framework for improving RANSAC segmentation algorithms by utilizing a convolutional neural network (CNN) trained on a RANSAC-based loss function. Finally, we apply state-of-the-art deep learning methods to solve the problem of pathological fluid detection in optical coherence tomography images of the human retina, using a novel retina-specific data augmentation technique to greatly expand the data set. Altogether, our work demonstrates benefits of applying a holistic view of computer vision, which leverages deep learning and associated insights without neglecting techniques and insights from the previous era. (A toy contrast of analytic versus finite-difference gradients follows this record.)
- Date Issued
- 2018
- Identifier
- CFE0007058, ucf:52001
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007058
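The registration speedup claimed above comes from replacing finite-difference gradients with an analytic, backpropagated gradient: one gradient costs about one objective evaluation instead of two per parameter. A toy contrast on an invented two-parameter alignment objective (the dissertation's objective is feature-based, not this quadratic):

```python
import numpy as np

target = np.array([1.3, -0.7])  # "true" translation to recover

def objective(t):
    return float(((t - target) ** 2).sum())

def analytic_grad(t):
    # One evaluation's worth of work, as with backpropagation.
    return 2.0 * (t - target)

def fd_grad(t, eps=1e-5):
    # Costs 2 * dim extra objective evaluations per step.
    g = np.zeros_like(t)
    for i in range(len(t)):
        e = np.zeros_like(t); e[i] = eps
        g[i] = (objective(t + e) - objective(t - e)) / (2 * eps)
    return g

t = np.zeros(2)
for _ in range(50):
    t -= 0.1 * analytic_grad(t)   # swap in fd_grad(t) to compare cost
print(t)  # converges to `target` with far fewer evaluations
```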
- Title
- REMOTE SENSING WITH COMPUTATIONAL INTELLIGENCE MODELLING FOR MONITORING THE ECOSYSTEM STATE AND HYDRAULIC PATTERN IN A CONSTRUCTED WETLAND.
- Creator
-
Mohiuddin, Golam, Chang, Ni-bin, Lee, Woo Hyoung, Wanielista, Martin, University of Central Florida
- Abstract / Description
-
Monitoring a heterogeneous aquatic environment such as the Stormwater Treatment Areas (STAs) located at the northeast of the Everglades is extremely important in understanding the land processes of the constructed wetland in its capacity to remove nutrients. Direct monitoring and measurement of ecosystem evolution and changing velocities at every single part of the STA are not always feasible. An integrated remote sensing, monitoring, and modeling technique can be a state-of-the-art tool to estimate the spatial and temporal distributions of flow velocity regimes and ecological functioning in such dynamic aquatic environments. In this work, a comparison of computational intelligence models, including Extreme Learning Machine (ELM), Genetic Programming (GP), and Artificial Neural Network (ANN) models, was organized to holistically assess the flow velocity and direction as well as ecosystem states within a vegetative wetland area. First, the local sensor network was established using Acoustic Doppler Velocimeters (ADV). Utilizing the local sensor data along with external driving-force parameters, trained ELM, GP, and ANN models were developed, calibrated, validated, and compared to select the best computational capacity for velocity prediction over time. In addition, seasonal images collected by the French satellite Pleiades have been analyzed to address the seasonality effect of plant species evolution and biomass changes in the constructed wetland. The key finding of this research is to characterize the interactions between geophysical and geochemical processes in this wetland system based on ground-based monitoring sensors and satellite images, to gain insight into hydraulic residence time, plant species variation, and water quality, and to improve the overall understanding of possible nutrient removal in this constructed wetland.
- Date Issued
- 2014
- Identifier
- CFE0005533, ucf:52864
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005533
- Title
- Imaging through Glass-air Anderson Localizing Optical Fiber.
- Creator
-
Zhao, Jian, Schulzgen, Axel, Amezcua Correa, Rodrigo, Pang, Sean, Delfyett, Peter, Mafi, Arash, University of Central Florida
- Abstract / Description
-
The fiber-optic imaging system enables imaging deeply into hollow tissue tracts or organs of biological objects that are inaccessible to conventional microscopy, in a minimally invasive way. It is the key technology to visualize biological objects in biomedical research and clinical applications. The fiber-optic imaging system should be able to deliver a high-quality image to resolve the details of cell morphology in vivo and in real time with a miniaturized imaging unit. It also has to be insensitive to environmental perturbations, such as mechanical bending or temperature variations. Besides, both coherent and incoherent light sources should be compatible with the imaging system. It is extremely challenging for current technologies to address all these issues simultaneously. The limitation mainly lies in the deficient stability and imaging capability of fiber-optic devices and the limited image reconstruction capability of algorithms. To address these limitations, we first develop a randomly disordered glass-air optical fiber featuring a high air-filling fraction (~28.5%) and low loss (~1 dB per meter) at visible wavelengths. Due to the transverse Anderson localization effect, the randomly disordered structure can support thousands of modes, most of which demonstrate single-mode properties. By making use of these modes, the randomly disordered optical fiber provides a robust and low-loss imaging system which can transport images with higher quality than the best commercially available imaging fiber. We further demonstrate that a deep-learning algorithm can be applied to the randomly disordered optical fiber to overcome the physical limitations of the fiber itself. At the initial stage, a laser-illuminated system is built by integrating a deep convolutional neural network with the randomly disordered optical fiber. Binary sparse objects, such as handwritten numbers and English letters, are collected, transported, and reconstructed using this system. It is proved that this first deep-learning-based fiber imaging system can perform artifact-free, lensless, and bending-independent imaging at variable working distances. In real-world applications, gray-scale biological subjects have much more complicated features. To image biological tissues, we re-design the architecture of the deep convolutional neural network and apply it to a newly designed system using incoherent illumination. The improved fiber imaging system has much higher resolution and faster reconstruction speed. We show that this new system can perform video-rate, artifact-free, lensless cell imaging. The cell imaging process is also remarkably robust with regard to mechanical bending and temperature variations. In addition, this system demonstrates stronger transfer-learning capability than existing deep-learning-based fiber imaging systems.
- Date Issued
- 2019
- Identifier
- CFE0007746, ucf:52405
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007746
- Title
- Physics-Guided Deep Learning for Power System State Estimation.
- Creator
-
Wang, Lei, Zhou, Qun, Li, Qifeng, Qi, Junjian, Dimitrovski, Aleksandar, University of Central Florida
- Abstract / Description
-
Conventionally, physics-based models are used for power system state estimation, including Weighted Least Square (WLS) or Weighted Absolute Value (WLAV). These models typically consider a single snapshot of the system without capturing temporal correlations of system states. In this thesis, a Physics-Guided Deep Learning (PGDL) method that incorporates the physical power system model into deep learning is proposed to improve the performance of power system state estimation. Specifically, inspired by autoencoders, deep neural networks (DNNs) are utilized to learn the temporal correlations of power system states. The estimated system states are checked against the laws of physics through a set of power flow equations. Hence, the proposed PGDL approach is both data-driven and physics-based. The proposed method is compared with the traditional methods on the basis of accuracy and robustness in IEEE standard cases. The results indicate that the PGDL framework provides more accurate and robust estimation for power system state estimation. (A sketch of a power-flow residual term follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007871, ucf:52787
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007871
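The physics check described above can be read as a residual built from the AC power flow equations, P_i = V_i Σ_j V_j (G_ij cos θ_ij + B_ij sin θ_ij) and Q_i = V_i Σ_j V_j (G_ij sin θ_ij − B_ij cos θ_ij). A hedged numpy sketch of such a residual on an invented two-bus example (the thesis's exact loss formulation may differ):

```python
import numpy as np

def power_flow_residual(V, theta, G, B, P_meas, Q_meas):
    """Physics term: mismatch between measured injections and the
    AC power flow equations evaluated at the estimated state."""
    dth = theta[:, None] - theta[None, :]
    P = V * ((G * np.cos(dth) + B * np.sin(dth)) @ V)
    Q = V * ((G * np.sin(dth) - B * np.cos(dth)) @ V)
    return np.sum((P - P_meas) ** 2 + (Q - Q_meas) ** 2)

# Toy 2-bus line: series admittance y = 1/(0.01 + 0.1j).
y = 1.0 / complex(0.01, 0.1)
Y = np.array([[y, -y], [-y, y]])
G, B = Y.real, Y.imag

V_est = np.array([1.00, 0.98])
th_est = np.array([0.0, -0.02])
P_meas = np.array([0.196, -0.194])   # illustrative measurements
Q_meas = np.array([0.043, -0.004])

# In PGDL this residual would be added to the DNN training loss.
print(power_flow_residual(V_est, th_est, G, B, P_meas, Q_meas))
```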
- Title
- Life Long Learning in Sparse Learning Environments.
- Creator
-
Reeder, John, Georgiopoulos, Michael, Gonzalez, Avelino, Sukthankar, Gita, Anagnostopoulos, Georgios, University of Central Florida
- Abstract / Description
-
Life long learning is a machine learning technique that deals with learning sequential tasks over time. It seeks to transfer knowledge from previous learning tasks to new learning tasks in order to increase generalization performance and learning speed. Real-time learning environments in which many agents are participating may provide learning opportunities, but these are spread out in time and space outside of the geographical scope of a single learning agent. This research seeks to provide an algorithm and framework for life long learning among a network of agents in a sparse real-time learning environment. This work will utilize the robust knowledge representation of neural networks, and make use of both functional and representational knowledge transfer to accomplish this task. A new generative life long learning algorithm utilizing cascade correlation and reverberating pseudo-rehearsal, and incorporating a method for merging divergent life long learning paths, will be implemented. (A minimal sketch of pseudo-rehearsal follows this record.)
- Date Issued
- 2013
- Identifier
- CFE0004917, ucf:49601
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004917
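Reverberating pseudo-rehearsal, named in the abstract, guards old knowledge by replaying the old network's own input-output behavior alongside new-task data. A minimal linear-model sketch of the idea; the tiny "network", the probe distribution, and the mixing scheme are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, Y, epochs=500, lr=0.1, W=None):
    """Least-squares 'network' trained by gradient descent."""
    W = np.zeros((X.shape[1], Y.shape[1])) if W is None else W.copy()
    for _ in range(epochs):
        W -= lr * X.T @ (X @ W - Y) / len(X)
    return W

# Task A is learned first; task B arrives later.
Xa = rng.normal(size=(200, 5)); Ya = Xa @ rng.normal(size=(5, 1))
Xb = rng.normal(size=(200, 5)); Yb = Xb @ rng.normal(size=(5, 1))
W_old = train(Xa, Ya)

# Pseudo-rehearsal: probe the old net with random inputs and keep
# its answers as surrogate task-A data, mixed with real task-B data.
Xp = rng.normal(size=(200, 5))
Yp = Xp @ W_old

W_naive = train(Xb, Yb, W=W_old)  # sequential training, no rehearsal
W_rehearse = train(np.vstack([Xb, Xp]), np.vstack([Yb, Yp]), W=W_old)

err = lambda W: float(np.mean((Xa @ W - Ya) ** 2))
# Rehearsal retains task A noticeably better than naive retraining.
print("task A error:", err(W_naive), "vs", err(W_rehearse))
```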
- Title
- Towards Evolving More Brain-Like Artificial Neural Networks.
- Creator
-
Risi, Sebastian, Stanley, Kenneth, Hughes, Charles, Sukthankar, Gita, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
An ambitious long-term goal for neuroevolution, which studies how artificial evolutionary processes can be driven to produce brain-like structures, is to evolve neurocontrollers with a high density of neurons and connections that can adapt and learn from past experience. Yet while neuroevolution has produced successful results in a variety of domains, the scale of natural brains remains far beyond reach. In this dissertation, two extensions to the recently introduced Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach are presented that are a step towards more brain-like artificial neural networks (ANNs). First, HyperNEAT is extended to evolve plastic ANNs that can learn from past experience. This new approach, called adaptive HyperNEAT, allows not only patterns of weights across the connectivity of an ANN to be generated by a function of its geometry, but also patterns of arbitrary local learning rules. Second, evolvable-substrate HyperNEAT (ES-HyperNEAT) is introduced, which relieves the user from deciding where the hidden nodes should be placed in a geometry that is potentially infinitely dense. This approach not only can evolve the location of every neuron in the network, but also can represent regions of varying density, which means resolution can increase holistically over evolution. The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space. The dissertation culminates in a major application domain that takes a step towards the general goal of adaptive neurocontrollers for legged locomotion. (A miniature of geometry-queried connectivity follows this record.)
- Date Issued
- 2012
- Identifier
- CFE0004287, ucf:49477
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004287
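In HyperNEAT-style indirect encoding, the weight between two substrate neurons is obtained by querying a CPPN with their coordinates, which is why node density (and hence resolution) can grow without re-evolving anything. A miniature with a fixed function standing in for an evolved CPPN (the geometry and function are invented):

```python
import numpy as np

def cppn(x1, y1, x2, y2):
    """Stand-in for an evolved CPPN: weight as a function of geometry."""
    return np.sin(3 * (x1 - x2)) * np.exp(-((y1 - y2) ** 2))

def substrate_weights(n):
    """Query weights for n input neurons at x=-1 and n outputs at x=1.

    Because weights come from a function of coordinates, n can grow
    (higher resolution) without re-evolving the encoding.
    """
    coords = np.linspace(-1, 1, n)
    W = np.zeros((n, n))
    for i, yi in enumerate(coords):      # source neuron positions
        for j, yj in enumerate(coords):  # target neuron positions
            W[i, j] = cppn(-1.0, yi, 1.0, yj)
    return W

print(substrate_weights(4).shape, substrate_weights(16).shape)
```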
- Title
- Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics.
- Creator
-
Alawad, Mohammed, Lin, Mingjie, DeMara, Ronald, Mikhael, Wasfy, Wang, Jun, Das, Tuhin, University of Central Florida
- Abstract / Description
-
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency if the traditional "deterministic and precise" computing model still dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from an uncertain, imperfect real-world environment. As such, the traditional computing means of first-principle modeling or explicit statistical modeling will very likely be ineffective to achieve flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing (that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra) has to be reexamined, and transformative changes in the foundation of modern computing must be made. This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simplified circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded with a large ensemble of random samples. As such, local perturbations of its computing accuracy will be dissipated globally, thus becoming inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable-precision systems, which provides an elegant way to trade off between computing accuracy and computing performance/hardware efficiency for many real-world applications. To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented with FPGA based on a novel stochastic-based and scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation, but can also solve several challenges in conventional CNNs, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well-suited for a modular vision engine with the goal of performing real-time detection, recognition and segmentation of mega-pixel images, especially those perception-based computing tasks that are inherently fault-tolerant. (A sketch of the probability-encoded computing principle follows this record.)
- Date Issued
- 2016
- Identifier
- CFE0006828, ucf:51768
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006828
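PDT's principle, encoding values as random ensembles so that simple units compute and density estimation decodes, echoes the classic stochastic-computing primitive in which a Bernoulli bitstream makes multiplication a bitwise AND. A sketch of that classic primitive (PDT itself generalizes to probability density functions; this is not the dissertation's circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # ensemble length; accuracy scales like 1/sqrt(N)

def encode(p, n=N):
    """A value in [0, 1] becomes a Bernoulli bitstream with mean p."""
    return rng.random(n) < p

def decode(bits):
    """Estimate the encoded value from the sample ensemble."""
    return bits.mean()

a, b = 0.6, 0.3
# Multiplication reduces to a bitwise AND of independent streams:
# a single gate replaces a full multiplier circuit.
product = decode(encode(a) & encode(b))
print(product)  # ~0.18; local bit errors dissipate in the ensemble
assert abs(product - a * b) < 0.01
```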
- Title
- Improving Efficiency in Deep Learning for Large Scale Visual Recognition.
- Creator
-
Liu, Baoyuan, Foroosh, Hassan, Qi, GuoJun, Welch, Gregory, Sukthankar, Rahul, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
The emerging recent large scale visual recognition methods, and in particular deep Convolutional Neural Networks (CNN), promise to revolutionize many computer vision based artificial intelligence applications, such as autonomous driving and online image retrieval systems. One of the main challenges in large scale visual recognition is the complexity of the corresponding algorithms. This is further exacerbated by the fact that in most real-world scenarios they need to run in real time and on platforms that have limited computational resources. This dissertation focuses on improving the efficiency of such large scale visual recognition algorithms from several perspectives. First, to reduce the complexity of large scale classification to sub-linear with the number of classes, a probabilistic label tree framework is proposed. A test sample is classified by traversing the label tree from the root node. Each node in the tree is associated with a probabilistic estimation of all the labels. The tree is learned recursively with iterative maximum likelihood optimization. Compared to the hard label partition proposed previously, the probabilistic framework performs classification more accurately with similar efficiency. Second, we explore the redundancy of parameters in Convolutional Neural Networks (CNN) and employ sparse decomposition to significantly reduce both the amount of parameters and the computational complexity. Both inter-channel and inner-channel redundancy are exploited to achieve more than 90% sparsity with approximately a 1% drop of classification accuracy. We also propose a CPU based efficient sparse matrix multiplication algorithm to reduce the actual running time of CNN models with sparse convolutional kernels. Third, we propose a multi-stage framework based on CNNs to achieve better efficiency than a single traditional CNN model. With a combination of the cascade model and the label tree framework, the proposed method divides the input images in both the image space and the label space, and processes each image with the CNN models that are most suitable and efficient. The average complexity of the framework is significantly reduced, while the overall accuracy remains the same as in the single complex model. (A toy label-tree traversal follows this record.)
- Date Issued
- 2016
- Identifier
- CFE0006472, ucf:51436
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006472
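The label tree above makes classification sub-linear in the number of classes: a sample descends from the root, with each internal node routing it probabilistically toward a subset of labels. A toy traversal with invented linear routers (the dissertation learns these nodes by iterative maximum likelihood, which is omitted here):

```python
import numpy as np

class Node:
    def __init__(self, labels, children=(), w=None):
        self.labels, self.children, self.w = labels, children, w

def classify(node, x):
    """Descend the tree: O(depth) scorer evaluations instead of
    one evaluation per class."""
    while node.children:
        scores = [c.w @ x for c in node.children]   # linear routers
        node = node.children[int(np.argmax(scores))]
    return node.labels[0]

rng = np.random.default_rng(0)
leaves = [Node([k]) for k in range(4)]
mid = [Node([0, 1], leaves[:2]), Node([2, 3], leaves[2:])]
for n in mid + leaves:
    n.w = rng.normal(size=3)
root = Node([0, 1, 2, 3], mid)

print(classify(root, rng.normal(size=3)))
```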
- Title
- Integrated Remote Sensing and Forecasting of Regional Terrestrial Precipitation with Global Nonlinear and Nonstationary Teleconnection Signals Using Wavelet Analysis.
- Creator
-
Mullon, Lee, Chang, Ni-bin, Wang, Dingbao, Wanielista, Martin, University of Central Florida
- Abstract / Description
-
Global sea surface temperature (SST) anomalies have a demonstrable effect on terrestrial climate dynamics throughout the continental U.S. SST variations have been correlated with greenness (vegetation densities) and precipitation via ocean-atmospheric interactions known as climate teleconnections. Prior research has demonstrated that teleconnections can be used for climate prediction across a wide region at sub-continental scales. Yet these studies tend to have large uncertainties in their estimates because they utilize simple linear analyses to examine chaotic teleconnection relationships. Still, non-stationary signals exist, making teleconnection identification difficult at the local scale. Part 1 of this research establishes short-term (10-year), linear and non-stationary teleconnection signals between SST at the North Atlantic and North Pacific oceans and terrestrial responses of greenness and precipitation along multiple pristine sites in the northeastern U.S., including (1) White Mountain National Forest (Pemigewasset Wilderness), (2) Green Mountain National Forest (Lye Brook Wilderness), and (3) Adirondack State Park (Siamese Ponds Wilderness). Each site was selected to avoid anthropogenic influences that may otherwise mask climate teleconnection signals. Lagged pixel-wise linear teleconnection patterns across anomalous datasets revealed significant correlation regions between SST and the terrestrial sites. Non-stationary signals also exhibit salient co-variations at biennial and triennial frequencies between terrestrial responses and SST anomalies across oceanic regions, in agreement with the El Niño Southern Oscillation (ENSO) and North Atlantic Oscillation (NAO) signals. Multiple regression analysis of the combined ocean indices explained up to 50% of the greenness and 42% of the precipitation in the study sites. The identified short-term teleconnection signals improve the understanding and projection of climate change impacts at local scales, and harness the interannual periodicity information for future climate projections. Part 2 of this research builds upon the earlier short-term study by exploring long-term (30-year) teleconnection signals between SST at the North Atlantic and Pacific oceans and the precipitation within Adirondack State Park in upstate New York. Non-traditional teleconnection signals are identified using wavelet decomposition and teleconnection mapping specific to the Adirondack region. Unique SST indices are extracted and used as input variables in an artificial neural network (ANN) prediction model. The results show the importance of considering non-leading teleconnection patterns as well as the known teleconnection patterns. Additionally, the effects of the Pacific Ocean SST and the Atlantic Ocean SST on terrestrial precipitation in the study region were compared with each other to deepen the insight into sea-land interactions. Results demonstrate reasonable prediction skill at forecasting precipitation trends with a lead time of one month, with r values of 0.6. The results are compared against a statistical downscaling approach using HadCM3 global circulation model output data and the SDSM statistical downscaling software, which demonstrates less predictive skill at forecasting precipitation within the Adirondacks.
- Date Issued
- 2014
- Identifier
- CFE0005535, ucf:50319
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005535
- Title
- FALCONET: FORCE-FEEDBACK APPROACH FOR LEARNING FROM COACHING AND OBSERVATION USING NATURAL AND EXPERIENTIAL TRAINING.
- Creator
-
Stein, Gary, Gonzalez, Avelino, University of Central Florida
- Abstract / Description
-
Building an intelligent agent model from scratch is a difficult task; thus, it would be preferable to have an automated process perform it. There have been many manual and automatic techniques; however, each of these has various issues with obtaining, organizing, or making use of the data. Additionally, it can be difficult to get perfect data or, once the data is obtained, impractical to get a human subject to explain why some action was performed. Because of these problems, machine learning from observation emerged to produce agent models based on observational data. Learning from observation uses unobtrusive and purely observable information to construct an agent that behaves similarly to the observed human. Typically, an observational system builds an agent based only on prerecorded observations. This type of system works well with respect to agent creation, but lacks the ability to be trained and updated on-line. To overcome these deficiencies, the proposed system adds an augmented force-feedback system of training that senses the agent's intentions haptically. Furthermore, because not all possible situations can be observed or directly trained, a third stage of learning from practice is added for the agent to gain additional knowledge for a particular mission. These stages of learning mimic the natural way a human might learn a task: first watching the task being performed, then being coached to improve, and finally practicing to self-improve. The hypothesis is that a system that is initially trained using human recorded data (Observational), then tuned and adjusted using force-feedback (Instructional), and then allowed to perform the task in different situations (Experiential) will be better than any individual step or combination of steps.
- Date Issued
- 2009
- Identifier
- CFE0002746, ucf:48157
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002746
- Title
- Quality Diversity: Harnessing Evolution to Generate a Diversity of High-Performing Solutions.
- Creator
-
Pugh, Justin, Stanley, Kenneth, Wu, Annie, Sukthankar, Gita, Garibay, Ivan, University of Central Florida
- Abstract / Description
-
Evolution in nature has designed countless solutions to innumerable interconnected problems, giving birth to the impressive array of complex modern life observed today. Inspired by this success, the practice of evolutionary computation (EC) abstracts evolution artificially as a search operator to find solutions to problems of interest, primarily through the adaptive mechanism of survival of the fittest, where stronger candidates are pursued at the expense of weaker ones until a solution of satisfying quality emerges. At the same time, research in open-ended evolution (OEE) draws different lessons from nature, seeking to identify and recreate processes that lead to the type of perpetual innovation and indefinitely increasing complexity observed in natural evolution. New algorithms in EC such as MAP-Elites and Novelty Search with Local Competition harness the toolkit of evolution for a related purpose: finding as many types of good solutions as possible (rather than merely the single best solution). With the field in its infancy, no empirical studies previously existed comparing these so-called quality diversity (QD) algorithms. This dissertation (1) contains the first extensive and methodical effort to compare different approaches to QD (including both existing published approaches and some new methods presented for the first time here) and to understand how they operate, to help inform better approaches in the future. It also (2) introduces a new technique for encoding neural networks for evolution with indirect encoding that contain multiple sensory or output modalities. Further, it (3) explores the idea that QD can act as an engine of open-ended discovery by introducing an expressive platform called Voxelbuild, where QD algorithms continually evolve robots that stack blocks in new ways. A culminating experiment (4) investigates evolution in Voxelbuild over a very long timescale. This research thus stands to advance the OEE community's desire to create and understand open-ended systems while also laying the groundwork for QD to realize its potential within EC as a means to automatically generate an endless progression of new content in real-world applications. (A compact MAP-Elites sketch follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007513, ucf:52638
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007513
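MAP-Elites, one of the QD algorithms compared in the dissertation, maintains an archive holding the best solution found in each cell of a behavior space instead of one global champion. A compact sketch with an invented fitness and behavior descriptor:

```python
import numpy as np

rng = np.random.default_rng(0)
BINS = 10
archive = {}  # behavior bin -> (fitness, genome)

def fitness(g):      # quality: prefer genomes near the origin
    return -float(np.abs(g).sum())

def behavior(g):     # diversity: where the genome's mean falls
    return int(np.clip((g.mean() + 1) / 2 * BINS, 0, BINS - 1))

for _ in range(5000):
    if archive:                       # mutate a random stored elite
        parent = archive[rng.choice(list(archive))][1]
        g = parent + rng.normal(scale=0.1, size=2)
    else:
        g = rng.uniform(-1, 1, 2)
    b, f = behavior(g), fitness(g)
    if b not in archive or f > archive[b][0]:
        archive[b] = (f, g)           # keep one elite per cell

print(len(archive), "cells filled with diverse, high-performing elites")
```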
- Title
- Describing Images by Semantic Modeling using Attributes and Tags.
- Creator
-
Mahmoudkalayeh, Mahdi, Shah, Mubarak, Sukthankar, Gita, Rahnavard, Nazanin, Zhang, Teng, University of Central Florida
- Abstract / Description
-
This dissertation addresses the problem of describing images using visual attributes and textual tags, a fundamental task that narrows down the semantic gap between the visual reasoning of humans and machines. Automatic image annotation assigns relevant textual tags to images. In this dissertation, we propose a query-specific formulation based on Weighted Multi-view Non-negative Matrix Factorization to perform automatic image annotation. Our proposed technique seamlessly adapts to changes in training data, naturally solves the problem of feature fusion, and handles the challenge of rare tags. Unlike tags, attributes are category-agnostic, hence their combination models an exponential number of semantic labels. Motivated by the fact that most attributes describe local properties, we propose exploiting localization cues, through semantic parsing of the human face and body, to improve person-related attribute prediction. We also demonstrate that image-level attribute labels can be effectively used as weak supervision for the task of semantic segmentation. Next, we analyze selfie images by utilizing tags and attributes. We collect the first large-scale selfie dataset and annotate it with different attributes covering characteristics such as gender, age, race, facial gestures, and hairstyle. We then study the popularity and sentiments of the selfies given an estimated appearance of various semantic concepts. In brief, we automatically infer what makes a good selfie. Despite its extensive usage, the deep learning literature falls short in understanding the characteristics and behavior of Batch Normalization. We conclude this dissertation by providing a fresh view, in light of information geometry and Fisher kernels, of why batch normalization works. We propose Mixture Normalization, which disentangles modes of variation in the underlying distribution of the layer outputs, and confirm that it effectively accelerates training of different batch-normalized architectures, including Inception-V3, Densely Connected Networks, and Deep Convolutional Generative Adversarial Networks, while achieving better generalization error.
- Date Issued
- 2019
- Identifier
- CFE0007493, ucf:52640
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007493
- Title
- High Performance Techniques for Face Recognition.
- Creator
-
Aldhahab, Ahmed, Mikhael, Wasfy, Atia, George, Jones, W Linwood, Wei, Lei, Elshennawy, Ahmad, University of Central Florida
- Abstract / Description
-
The identification of individuals using face recognition techniques is a challenging task. This is due to the variations resulting from facial expressions, makeup, rotations, illuminations, gestures, etc. Also, facial images contain a great deal of redundant information, which negatively affects the performance of the recognition system. The dimensionality and the redundancy of the facial features have a direct effect on face recognition accuracy. Not all the features in the feature vector space are useful; for example, non-discriminating features in the feature vector space not only degrade the recognition accuracy but also increase the computational complexity. In the fields of computer vision, pattern recognition, and image processing, face recognition has become a popular research topic. This is due to its widespread applications in security and control, which allow an identified individual to access secure areas, personal information, etc. The performance of any recognition system depends on three factors: 1) the storage requirements, 2) the computational complexity, and 3) the recognition rates. Two different recognition system families are presented and developed in this dissertation. Each family consists of several face recognition systems. Each system contains three main steps, namely, preprocessing, feature extraction, and classification. Several preprocessing steps, such as cropping, facial detection, dividing the facial image into sub-images, etc., are applied to the facial images. This reduces the effect of the irrelevant information (background) and improves system performance. In this dissertation, either a Neural Network (NN) based classifier or Euclidean distance is used for classification purposes. Five widely used databases, namely, ORL, YALE, FERET, FEI, and LFW, each containing different facial variations such as lighting conditions, rotations, facial expressions, facial details, etc., are used to evaluate the proposed systems. The experimental results of the proposed systems are analyzed using K-fold Cross Validation (CV). In family-1, several systems are proposed for face recognition. Each system employs different integrated tools in the feature extraction step. These tools, the Two Dimensional Discrete Multiwavelet Transform (2D DMWT), the 2D Radon Transform (2D RT), the 2D or 3D DWT, and Fast Independent Component Analysis (FastICA), are applied to the processed facial images to reduce the dimensionality and to obtain discriminating features. Each proposed system produces a unique representation, and achieves lower storage requirements and better performance than the existing methods. For further facial compression, there are three face recognition systems in the second family. Each system uses different integrated tools to obtain a better facial representation. The integrated tools, Vector Quantization (VQ), the Discrete Cosine Transform (DCT), and the 2D DWT, are applied to the facial images for further facial compression and better facial representation. In the systems using the tools VQ/2D DCT and VQ/2D DWT, each pose in the databases is represented by one centroid with 4*4*16 dimensions. In the third system, VQ/Facial Part Detection (VQ/FPD), each person in the databases is represented by four centroids with 4*Centroids (4*4*16) dimensions. The systems in family-2 are proposed to further reduce the dimensions of the data compared to the systems in family-1 while attaining comparable results. For example, in family-1, the integrated tools FastICA/2D DMWT, applied to different combinations of sub-images in the FERET database with K-fold=5 (9 different poses used in the training mode), reduce the dimensions of the database by 97.22% and achieve 99% accuracy. In contrast, the integrated tools VQ/FPD in family-2 reduce the dimensions of the data by 99.31% and achieve 97.98% accuracy. In this example, VQ/FPD accomplished further data compression at the cost of some accuracy compared to FastICA/2D DMWT. Various experiments and simulations using MATLAB were performed. The experimental results of both families confirm the improvements in the storage requirements as well as the recognition rates, compared to some recently reported methods. (A small VQ/2D DCT sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006709, ucf:51878
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006709
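In the family-2 systems, a whole pose is compressed to a single centroid of transformed features. A hedged VQ/2D DCT sketch with scipy and scikit-learn; the block size, coefficient count, and random "faces" are placeholders rather than the thesis's 4*4*16 configuration:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
faces = rng.random((12, 32, 32))  # 12 images of one pose (placeholder)

def dct_features(img, block=8, keep=16):
    """2D DCT per block; keep a few leading coefficients, which carry
    most of the energy (energy compaction)."""
    feats = []
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            c = dctn(img[i:i + block, j:j + block], norm="ortho")
            feats.append(c.flatten()[:keep])
    return np.concatenate(feats)

X = np.array([dct_features(f) for f in faces])
# Vector quantization: the whole pose collapses to one centroid.
km = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X)
centroid = km.cluster_centers_[0]
print(centroid.shape)  # one compact signature per pose
```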
- Title
- DESIGN AND OPERATION OF STATIONARY DISTRIBUTED BATTERY MICRO-STORAGE SYSTEMS.
- Creator
-
Al-Haj Hussein, Ala, Batarseh, Issa, University of Central Florida
- Abstract / Description
-
Due to technical and environmental constraints, expanding the current electric power generation and transmission system is being challenged by the ever-increasing deployment of distributed renewable generation and storage systems. Energy storage can be used to store energy from the utility during low-demand (off-peak) hours and deliver this energy back to the utility during high-demand (on-peak) hours. Furthermore, energy storage can be used with renewable sources to overcome some of their limitations, such as their strong dependence on weather conditions, which cannot be perfectly predicted, and their generation peaks being unmatched or out of synchronization with the demand peaks. Generally, energy storage enhances the performance of distributed renewable sources and increases the efficiency of the entire power system. Moreover, energy storage allows for leveling the load, shaving peak demands, and transacting power with the utility grid. This research proposes an energy management system (EMS) to manage the operation of distributed grid-tied battery micro-storage systems for stationary applications when operated with and without renewable sources. The term "micro" refers to the capacity of the energy storage compared to the grid capacity. The proposed management system employs four dynamic models: an economic model, a battery model, and load and weather forecasting models. These models, which are the main contribution of this research, are used to optimally control the operation of the micro-storage system (MSS) to maximize the economic return for the end-user when operated in an electricity spot market system. (A toy price-threshold dispatch follows this record.)
- Date Issued
- 2011
- Identifier
- CFE0003964, ucf:48712
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003964
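The EMS behavior described above reduces, in its simplest form, to buying energy at off-peak prices and selling it back on-peak. A toy price-threshold dispatch over an invented day-ahead price curve (the thesis optimizes with its four dynamic models rather than fixed thresholds):

```python
import numpy as np

price = np.array([3, 3, 2, 2, 2, 3, 5, 8, 9, 8, 7, 6,
                  6, 6, 7, 8, 10, 12, 11, 9, 7, 5, 4, 3], float)
cap, rate, eff = 10.0, 2.0, 0.9   # kWh, kW, round-trip efficiency
lo, hi = np.quantile(price, [0.3, 0.7])

soc, profit = 0.0, 0.0
for p in price:                      # one step per hour
    if p <= lo and soc < cap:        # off-peak: buy and store
        e = min(rate, cap - soc)
        soc += e
        profit -= p * e
    elif p >= hi and soc > 0:        # on-peak: sell back
        e = min(rate, soc)
        soc -= e
        profit += p * e * eff        # losses charged to discharge
print(f"end-of-day profit: {profit:.1f} (state of charge {soc:.1f} kWh)")
```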