Current Search: Wu, Annie
Title: THE PROTEOMICS APPROACH TO EVOLUTIONARY COMPUTATION: AN ANALYSIS OF PROTEOME-BASED LOCATION INDEPENDENT REPRESENTATIONS BASED ON THE PROPORTIONAL GENETIC ALGORITHM.
Creator: Garibay, Ivan; Wu, Annie; University of Central Florida
Abstract / Description: As the complexity of our society and computational resources increases, so does the complexity of the problems that we approach using evolutionary search techniques. There are recent approaches to deal with the problem of scaling evolutionary methods to cope with highly complex, difficult problems. Many of these approaches are biologically inspired and share an underlying principle: a problem representation based on basic representational building blocks that interact and self-organize into complex functions or designs. The observation from the central dogma of molecular biology that proteins are the basic building blocks of life, and the recent advances in proteomics on analysis of the structure, function, and interaction of entire protein complements, lead us to propose a unifying framework of thought for these approaches: the proteomics approach. This thesis proposes to investigate whether the self-organization of protein-analogous structures at the representation level can increase the degree of complexity and "novelty" of solutions obtainable using evolutionary search techniques. In order to do so, we identify two fundamental aspects of this transition: (1) proteins interact in a three-dimensional medium analogous to a multiset; and (2) proteins are functional structures. The first aspect is foundational for understanding the second. This thesis analyzes the first aspect. It investigates the effects of using a genome-to-proteome mapping on evolutionary computation. This analysis is based on a genetic algorithm (GA) with a string-to-multiset mapping that we call the proportional genetic algorithm (PGA), and it focuses on the feasibility and effectiveness of this mapping. This mapping leads to a fundamental departure from typical EC methods: using a multiset of proteins as an intermediate mapping results in a completely location independent problem representation, where the location of the genes in a genome has no effect on the fitness of the solutions. Completely location independent representations, by definition, do not suffer from traditional EC hurdles associated with the location of the genes, or positional effect, in a genome. Such representations have the ability to self-organize into a genomic structure that appears to favor positive correlations between form and quality of represented solutions. Completely location independent representations also introduce new problems of their own, such as the need for large alphabets of symbols and the theoretical need for larger representation spaces than traditional approaches. Overall, these representations perform as well as or better than traditional representations, and they appear to be particularly good for the class of problems involving proportions or multisets. This thesis concludes that the use of protein-analogous structures as an intermediate representation in evolutionary computation is not only feasible but in some cases advantageous. In addition, it lays the groundwork for further research on proteins as functional self-organizing structures capable of building increasingly complex functionality, and as basic units of problem representation for evolutionary computation.
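
A minimal sketch of the string-to-multiset idea described above (the toy proportion objective and all names are our own illustration, not the thesis's benchmarks): fitness is computed from symbol counts alone, so any permutation of the genome scores identically.

```python
from collections import Counter

def decode_to_multiset(genome: str) -> Counter:
    """Genome-to-proteome mapping: the string becomes a multiset of
    symbols, discarding all positional information."""
    return Counter(genome)

def fitness(genome: str, target: dict) -> float:
    """Score how closely decoded symbol proportions match a target
    proportion vector; any permutation of the genome scores the same."""
    counts = decode_to_multiset(genome)
    n = max(len(genome), 1)
    return -sum(abs(counts[s] / n - p) for s, p in target.items())

target = {"A": 0.5, "B": 0.3, "C": 0.2}
print(fitness("AABABCABCA", target))   # location independence:
print(fitness("CABACABABA", target))   # same multiset, same fitness
```
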
Date Issued: 2004
Identifier: CFE0000311, ucf:46307
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0000311

Title: BEHAVIOR OF VARIABLE-LENGTH GENETIC ALGORITHMS UNDER RANDOM SELECTION.
Creator: Stringer, Harold; Wu, Annie; University of Central Florida
Abstract / Description: In this work, we show how a variable-length genetic algorithm naturally evolves populations whose mean chromosome length grows shorter over time. A reduction in chromosome length occurs when selection is absent from the GA. Specifically, we divide the mating space into five distinct areas and provide a probabilistic and empirical analysis of the ability of matings in each area to produce children whose size is shorter than the parent generation's average size. Diversity of size within a GA's population is shown to be a necessary condition for a reduction in mean chromosome length to take place. We show how a finite variable-length GA under random selection pressure uses 1) diversity of size within the population, 2) over-production of shorter than average individuals, and 3) the imperfect nature of random sampling during selection to naturally reduce the average size of individuals within a population from one generation to the next. In addition to our findings, this work provides GA researchers and practitioners with 1) a number of mathematical tools for analyzing possible size reductions for various matings and 2) new ideas to explore in the area of bloat control.
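
The drift in mean length can be watched with a toy experiment. This sketch assumes one particular variable-length crossover (independent cut points in each parent) and purely random selection; it is only an instrument for observing the effect, not a reproduction of the thesis's five-area analysis.

```python
import random

def vlga_mean_lengths(pop_size=200, init_len=100, gens=50, seed=1):
    """Toy variable-length GA with *random* selection (no fitness):
    a child is a prefix of one parent plus a suffix of the other."""
    random.seed(seed)
    pop = [[0] * init_len for _ in range(pop_size)]
    means = []
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            a, b = random.choice(pop), random.choice(pop)
            i, j = random.randint(0, len(a)), random.randint(0, len(b))
            child = a[:i] + b[j:]
            if child:                    # keep non-empty chromosomes only
                nxt.append(child)
        pop = nxt
        means.append(sum(map(len, pop)) / pop_size)
    return means

print(vlga_mean_lengths()[::10])   # mean chromosome length per 10 generations
```
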
Date Issued: 2007
Identifier: CFE0001652, ucf:47249
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0001652

Title: A NEAT APPROACH TO GENETIC PROGRAMMING.
Creator: Rodriguez, Adelein; Wu, Annie; University of Central Florida
Abstract / Description: The evolution of explicitly represented topologies such as graphs involves devising methods for mutating, comparing, and combining structures in meaningful ways and identifying and maintaining the necessary topological diversity. Research has been conducted in the area of the evolution of trees in genetic programming and of neural networks, and some of these problems have been addressed independently by the different research communities. In the domain of neural networks, NEAT (Neuroevolution of Augmenting Topologies) has been shown to be a successful method for evolving increasingly complex networks. This system's success is based on three interrelated elements: speciation, marking of historical information in topologies, and initializing search in a space of small structures. This provides the dynamics necessary for the exploration of diverse solution spaces at once and a way to discriminate between different structures. Although different representations have emerged in the area of genetic programming, the study of the tree representation has remained of interest in great part because of its mapping to programming languages and also because of the observed phenomenon of unnecessary code growth, or bloat, which hinders performance. The structural similarity between trees and neural networks poses an interesting question: is it possible to apply the techniques from NEAT to the evolution of trees, and if so, how does it affect performance and the dynamics of code growth? In this work we address these questions and present techniques analogous to those in NEAT for genetic programming.
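
A hedged sketch of what NEAT-style historical markings might look like on GP trees (our own simplification, not the thesis's method): each node is stamped with a global innovation number at creation, and a compatibility distance over shared markings supports speciation.

```python
import itertools

_innovations = itertools.count()          # global historical-marking counter

class Node:
    """GP tree node stamped with an innovation number at creation time,
    mirroring NEAT's historical markings."""
    def __init__(self, op, children=()):
        self.op, self.children = op, list(children)
        self.mark = next(_innovations)

def marks(t):
    yield t.mark
    for c in t.children:
        yield from marks(c)

def compatibility(t1, t2):
    """NEAT-style distance: share of markings the trees do NOT have in
    common; trees within a threshold would form one species."""
    m1, m2 = set(marks(t1)), set(marks(t2))
    return len(m1 ^ m2) / max(len(m1 | m2), 1)

# a parent tree and a "mutated" variant that reuses two of its nodes
# (real NEAT-style mutation would copy nodes while preserving marks)
x, one = Node("x"), Node("1")
parent = Node("+", [x, one])
child = Node("+", [x, Node("*", [one, Node("x")])])
print(compatibility(parent, child))
```
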
Date Issued: 2007
Identifier: CFE0001971, ucf:47451
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0001971

Title: ANALYZING THE EFFECTS OF MODULARITY ON SEARCH SPACES.
Creator: Garibay, Ozlem; Wu, Annie; University of Central Florida
Abstract / Description: We are continuously challenged by ever-increasing problem complexity and the need to develop algorithms that can solve complex problems and solve them within a reasonable amount of time. Modularity is thought to reduce problem complexity by decomposing large problems into smaller and less complex subproblems. In practice, introducing modularity into evolutionary algorithm representations appears to improve search performance; however, how and why modularity improves performance is not well understood. In this thesis, we seek to better understand the effects of modularity on search. In particular, what are the effects of module creation on the search space structure, and how do these structural changes affect performance? We define a theoretical and empirical framework to study modularity in evolutionary algorithms. Using this framework, we provide evidence of the following. First, not all types of modularity have an effect on search. We can have highly modular spaces that are, in essence, equivalent to simpler non-modular spaces. This is the case because these spaces achieve a higher degree of modularity without changing the fundamental structure of the search space. Second, in the cases when modularity actually has an effect on the fundamental structure of the search space, modularity left without guidance only crowds and complicates the space structure, resulting in a harder space for most search algorithms. Finally, we have the case when modularity not only has an effect on the search space structure but, most importantly, module creation can be guided by problem domain knowledge. When this knowledge can be used to estimate the value of a module in terms of its contribution toward building the solution, modularity is extremely effective. It is in this last case that creating high-value modules or low-value modules has a direct and decisive impact on performance. The results presented in this thesis help to better understand, in a principled way, the effects of modularity on search. Better understanding the effects of modularity on search is a step forward in the larger issue of evolutionary search applied to increasingly complex problems.
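
A toy illustration of the point that module creation reshapes the search space (our own construction, not the thesis's framework): a module symbol expands to a fixed body, changing which phenotypes genomes of a given length can reach.

```python
from itertools import product

def expand(genome, modules):
    """Genotype-to-phenotype map: module symbols expand to their bodies."""
    return "".join(modules.get(g, g) for g in genome)

def reachable(alphabet, length, modules):
    """All phenotypes reachable by genomes of a fixed length."""
    return {expand(g, modules) for g in product(alphabet, repeat=length)}

plain = reachable("01", 4, {})                 # no modules
modular = reachable("01M", 3, {"M": "01"})     # module M encodes "01"
print(len(plain), "plain phenotypes of length 4")
print(len(modular & plain), "of them reachable by the shorter modular genomes")
```
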
Date Issued: 2008
Identifier: CFE0002490, ucf:47680
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0002490

Title: Enhancing Cognitive Algorithms for Optimal Performance of Adaptive Networks.
Creator: Lugo-Cordero, Hector; Guha, Ratan; Wu, Annie; Stanley, Kenneth; University of Central Florida
Abstract / Description: This research proposes to enhance some evolutionary algorithms in order to obtain optimal and adaptive network configurations. Due to their richness in technologies, low cost, and application usages, we consider Heterogeneous Wireless Mesh Networks. In particular, we evaluate the domains of Network Deployment, Smart Grids/Homes, and Intrusion Detection Systems. With an adaptive network as one of the goals, we consider a robust, noise-tolerant methodology that can quickly react to changes in the environment. Furthermore, the diversity of the performance objectives considered (e.g., power, coverage, anonymity) makes the objective function non-continuous and therefore non-differentiable. For these reasons, we enhance the Particle Swarm Optimization (PSO) algorithm with elements that aid in exploring for better configurations, to obtain optimal and sub-optimal configurations. According to our results, the enhanced PSO promotes population diversity, leading to more unique optimal configurations for adapting to dynamic environments. The gradual complexification process produced simpler optimal solutions than those obtained via trial and error without the enhancements. Configurations obtained by the modified PSO are further tuned in real time upon environment changes. Such tuning occurs with a Fuzzy Logic Controller (FLC), which models human decision making by monitoring certain events in the algorithm. Examples of such events include diversity and quality of solutions in the environment. The FLC is able to adapt the enhanced PSO to changes in the environment, causing more exploration or exploitation as needed. By adding a Probabilistic Neural Network (PNN) classifier, the enhanced PSO is again used as a filter to aid in intrusion detection classification. This approach reduces misclassifications by consulting neighbors for classification in case of ambiguous samples. The performance of ambiguous votes via PSO filtering shows an improvement in classification, causing the simple classifier to perform better than commonly used classifiers.
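
A sketch of the underlying idea, with a crude diversity-driven inertia rule standing in for the Fuzzy Logic Controller (the sphere objective and all constants are illustrative assumptions, not the dissertation's setup):

```python
import math, random

def sphere(x):
    return sum(v * v for v in x)

def diversity(swarm):
    """Mean distance to the swarm centroid; the FLC in the dissertation
    monitors statistics like this to balance exploration/exploitation."""
    n, d = len(swarm), len(swarm[0])
    c = [sum(p[i] for p in swarm) / n for i in range(d)]
    return sum(math.dist(p, c) for p in swarm) / n

def adaptive_pso(f=sphere, dim=5, n=30, iters=200, seed=0):
    random.seed(seed)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        # crude stand-in for the FLC: low diversity -> more inertia (explore)
        w = 0.7 if diversity(pos) < 0.5 else 0.4
        for k in range(n):
            for i in range(dim):
                r1, r2 = random.random(), random.random()
                vel[k][i] = (w * vel[k][i]
                             + 1.5 * r1 * (pbest[k][i] - pos[k][i])
                             + 1.5 * r2 * (gbest[i] - pos[k][i]))
                pos[k][i] += vel[k][i]
            if f(pos[k]) < f(pbest[k]):
                pbest[k] = pos[k][:]
        gbest = min(pbest, key=f)[:]
    return f(gbest)

print(adaptive_pso())
```
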
Date Issued: 2018
Identifier: CFE0007046, ucf:52003
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007046

Title: Training Neural Networks Through the Integration of Evolution and Gradient Descent.
Creator: Morse, Gregory; Stanley, Kenneth; Wu, Annie; Shah, Mubarak; Wiegand, Rudolf; University of Central Florida
Abstract / Description: Neural networks have achieved widespread adoption due to both their applicability to a wide range of problems and their success relative to other machine learning algorithms. The training of neural networks is achieved through any of several paradigms, most prominently gradient-based approaches (including deep learning), but also through up-and-coming approaches like neuroevolution. However, while both of these neural network training paradigms have seen major improvements over the past decade, little work has been invested in developing algorithms that incorporate the advances from both deep learning and neuroevolution. This dissertation introduces two new algorithms that are steps towards the integration of gradient descent and neuroevolution for training neural networks. The first is (1) the Limited Evaluation Evolutionary Algorithm (LEEA), which implements a novel form of evolution where individuals are partially evaluated, allowing rapid learning and enabling the evolutionary algorithm to behave more like gradient descent. This conception provides a critical stepping stone to future algorithms that more tightly couple evolutionary and gradient descent components. The second major algorithm (2) is Divergent Discriminative Feature Accumulation (DDFA), which combines a neuroevolution phase, where features are collected in an unsupervised manner, with a gradient descent phase for fine tuning of the neural network weights. The neuroevolution phase of DDFA utilizes an indirect encoding and novelty search, which are sophisticated neuroevolution components rarely incorporated into gradient descent-based systems. Further contributions of this work that build on DDFA include (3) an empirical analysis to identify an effective distance function for novelty search in high dimensions and (4) the extension of DDFA for the purpose of discovering convolutional features. The results of these DDFA experiments together show that DDFA discovers features that are effective as a starting point for gradient descent, with significant improvement over gradient descent alone. Additionally, the method of collecting features in an unsupervised manner allows DDFA to be applied to domains with abundant unlabeled data and relatively sparse labeled data. This ability is highlighted in the STL-10 domain, where DDFA is shown to make effective use of unlabeled data.
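
The LEEA's partial-evaluation core can be sketched as follows (the decay constant, batch size, and the toy regression task are illustrative assumptions; selection and variation around this loop are omitted for brevity):

```python
import random

def leea_fitness(pop, cases, evaluate, gens=60, batch=4, decay=0.8):
    """Limited-evaluation idea: each generation, score individuals on a
    small random batch of cases and accumulate a decayed fitness, so a
    generation costs a small fraction of a full evaluation."""
    fit = [0.0] * len(pop)
    for _ in range(gens):
        sample = random.sample(cases, batch)
        for i, ind in enumerate(pop):
            partial = sum(evaluate(ind, c) for c in sample) / batch
            fit[i] = decay * fit[i] + (1 - decay) * partial
    return fit

random.seed(0)
cases = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]    # fit y = x^2
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
err = lambda w, c: -abs(w[0] + w[1] * c[0] + w[2] * c[0] ** 2 - c[1])
print(max(leea_fitness(pop, cases, err)))
```
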
Date Issued: 2019
Identifier: CFE0007840, ucf:52819
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007840

Title: From Excited Charge Dynamics to Cluster Diffusion: Development and Application of Techniques Beyond DFT and KMC.
Creator: Acharya, Shree Ram; Rahman, Talat; Chow, Lee; Stolbov, Sergey; Wu, Annie; University of Central Florida
Abstract / Description: This dissertation focuses on developing reliable and accurate computational techniques that enable the examination of static and dynamic properties of various activated phenomena using deterministic and stochastic approaches. To explore ultrafast electron dynamics in materials with strong electron-electron correlation under the influence of a laser pulse, an ab initio electronic structure method based on time-dependent density functional theory (TDDFT) in combination with dynamical mean field theory (DMFT) is developed and applied to: 1) the single-band Hubbard model; 2) the multi-band metal Ni; and 3) the multi-band insulator MnO. The ultrafast demagnetization in Ni reveals the importance of memory and correlation effects, leading to much better agreement with experimental data than previously obtained, while for MnO the main channels of charge response are identified. Furthermore, an analytical form of the exchange-correlation kernel is obtained for future applications, saving tremendous computational cost. In another project, the size-dependent temporal and spatial evolution of homo- and hetero-epitaxial adatom islands on fcc(111) transition metal surfaces is investigated using the self-learning kinetic Monte Carlo (SLKMC) method, which explores long-time dynamics unbiased by a priori selected diffusion processes. Novel multi-atom diffusion processes are revealed. Trends in the diffusion coefficients point to the relative roles of adatom lateral interaction and island-substrate binding energy in determining island diffusivity. Moreover, analysis of the large database of activation energy barriers generated for a multitude of diffusion processes across a variety of systems allows extraction of a set of descriptors that in turn generate predictive models for energy barrier evaluation. Finally, the kinetics of the industrially important methanol partial oxidation reaction on a model nanocatalyst is explored using KMC supplemented by DFT energetics. The calculated thermodynamics identifies the active surface sites for reaction components, including different intermediates, and the energetics of competing probable reaction pathways, while the kinetic study attends to the selectivity of products and its variation with external factors.
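
For the kinetic Monte Carlo side, the textbook rejection-free step underlying SLKMC looks like the sketch below (SLKMC's distinctive on-the-fly discovery of diffusion processes is not shown; the example rates are placeholders):

```python
import math, random

def kmc_step(rates, rng=random):
    """One rejection-free KMC step: pick event i with probability
    rate_i / R, then advance time by an exponential waiting time ~ 1/R."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

# in practice each rate would be an Arrhenius term,
# k = nu * exp(-Ea / (kB * T)), with Ea from DFT barriers
events = [1.0, 0.1, 0.01]
print(kmc_step(events))
```
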
Date Issued: 2018
Identifier: CFE0006965, ucf:52910
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0006965

Title: Harmony Oriented Architecture.
Creator: Martin, Kyle; Hua, Kien; Wu, Annie; Heinrich, Mark; University of Central Florida
Abstract / Description: This thesis presents Harmony Oriented Architecture (HOA): a novel architectural paradigm that applies the principles of Harmony Oriented Programming (HOP) to the architecture of scalable and evolvable distributed systems. It is motivated by research on Ultra Large Scale systems that has revealed inherent limitations in the human ability to design large-scale software systems, limitations that can only be overcome through radical alternatives to traditional object-oriented software engineering practice that simplify the construction of highly scalable and evolvable systems.

HOP eschews encapsulation and information hiding, the core principles of object-oriented design, in favor of exposure and information sharing through a spatial abstraction. This helps to avoid the brittle interface dependencies that impede the evolution of object-oriented software. HOA extends these concepts to distributed systems, resulting in an architecture in which application components are represented by objects in a spatial database and executed in strict isolation using an embedded application server. Application components store their state entirely in the database and interact solely by diffusing data into a space for proximate components to observe. This architecture provides a high degree of decoupling, isolation, and state exposure, allowing highly scalable and evolvable applications to be built.

A proof-of-concept prototype of a non-distributed HOA middleware platform supporting JavaScript application components is implemented and evaluated. Results show remarkably good performance considering that little effort was made to optimize the implementation.
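
A minimal sketch of the interaction model, with hypothetical names: components never call each other; they diffuse data at a position in a shared space and observe whatever proximate components have published.

```python
import math

class Space:
    """Toy spatial medium: components diffuse data at a point; others
    observe anything published within a radius (no direct calls)."""
    def __init__(self):
        self.points = []                 # (position, key, value) triples
    def diffuse(self, pos, key, value):
        self.points.append((pos, key, value))
    def observe(self, pos, radius):
        return {k: v for p, k, v in self.points
                if math.dist(p, pos) <= radius}

space = Space()
space.diffuse((0, 0), "temperature", 21.5)   # component A shares state
print(space.observe((1, 1), radius=2.0))     # component B merely observes
```
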
Date Issued: 2011
Identifier: CFE0004480, ucf:49298
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0004480

Title: Machine Learning from Casual Conversation.
Creator: Mohammed Ali, Awrad; Sukthankar, Gita; Wu, Annie; Boloni, Ladislau; University of Central Florida
Abstract / Description: Human social learning is an effective process that has inspired many existing machine learning techniques, such as learning from observation and learning by demonstration. In this dissertation, we introduce another form of social learning, Learning from a Casual Conversation (LCC). LCC is an open-ended machine learning system in which an artificially intelligent agent learns from an extended dialog with a human. Our system enables the agent to incorporate changes into its knowledge base, based on the human's conversational text input. This system emulates how humans learn from each other through dialog. LCC closes a gap in current research, which has focused on teaching specific tasks to computer agents. Furthermore, LCC aims to provide an easy way to enhance the knowledge of the system without requiring the involvement of a programmer. This system does not require the user to enter specific information; instead, the user can chat naturally with the agent. LCC identifies the inputs that contain information relevant to its knowledge base in the learning process. LCC's architecture consists of multiple sub-systems combined to perform the task. Its learning component can add new knowledge to existing information in the knowledge base, confirm existing information, and/or update existing information found to be related to the user input. The LCC system functionality was assessed using different evaluation methods, including tests performed by the developer as well as by 130 human test subjects. Thirty of those test subjects interacted directly with the system and completed a survey of 13 questions/statements that asked about their experience using LCC. A second group of 100 human test subjects evaluated the dialogue logs of a subset of the first group of human testers. The collected results were all found to be acceptable and within the range of our expectations.
Date Issued: 2019
Identifier: CFE0007503, ucf:52634
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007503

Title: Quality Diversity: Harnessing Evolution to Generate a Diversity of High-Performing Solutions.
Creator: Pugh, Justin; Stanley, Kenneth; Wu, Annie; Sukthankar, Gita; Garibay, Ivan; University of Central Florida
Abstract / Description: Evolution in nature has designed countless solutions to innumerable interconnected problems, giving birth to the impressive array of complex modern life observed today. Inspired by this success, the practice of evolutionary computation (EC) abstracts evolution artificially as a search operator to find solutions to problems of interest, primarily through the adaptive mechanism of survival of the fittest, where stronger candidates are pursued at the expense of weaker ones until a solution of satisfying quality emerges. At the same time, research in open-ended evolution (OEE) draws different lessons from nature, seeking to identify and recreate processes that lead to the type of perpetual innovation and indefinitely increasing complexity observed in natural evolution. New algorithms in EC such as MAP-Elites and Novelty Search with Local Competition harness the toolkit of evolution for a related purpose: finding as many types of good solutions as possible (rather than merely the single best solution). With the field in its infancy, no empirical studies previously existed comparing these so-called quality diversity (QD) algorithms. This dissertation (1) contains the first extensive and methodical effort to compare different approaches to QD (including both existing published approaches and new methods presented for the first time here) and to understand how they operate, to help inform better approaches in the future. It also (2) introduces a new technique for encoding neural networks for evolution with indirect encoding that contain multiple sensory or output modalities. Further, it (3) explores the idea that QD can act as an engine of open-ended discovery by introducing an expressive platform called Voxelbuild, where QD algorithms continually evolve robots that stack blocks in new ways. A culminating experiment (4) is presented that investigates evolution in Voxelbuild over a very long timescale. This research thus stands to advance the OEE community's desire to create and understand open-ended systems while also laying the groundwork for QD to realize its potential within EC as a means to automatically generate an endless progression of new content in real-world applications.
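
MAP-Elites, one of the QD algorithms compared, fits in a few lines. This minimal version (toy objective and descriptor, illustrative mutation scale) shows how an archive of diverse elites replaces a single best solution:

```python
import random

def map_elites(f, descriptor, dim=2, bins=10, iters=5000, seed=0):
    """Minimal MAP-Elites: keep the best solution found in each cell of a
    discretized behavior space, so the archive holds many *diverse*
    high performers rather than one global optimum."""
    random.seed(seed)
    archive = {}                                  # cell -> (fitness, genome)
    def cell(x):
        return tuple(min(int(v * bins), bins - 1) for v in descriptor(x))
    for _ in range(iters):
        if archive:                               # mutate a random elite
            _, parent = random.choice(list(archive.values()))
            x = [min(max(v + random.gauss(0, 0.05), 0.0), 1.0) for v in parent]
        else:                                     # bootstrap with random genome
            x = [random.random() for _ in range(dim)]
        key, fx = cell(x), f(x)
        if key not in archive or fx > archive[key][0]:
            archive[key] = (fx, x)
    return archive

# toy domain: fitness rewards genomes near the center; descriptor is the genome
elites = map_elites(f=lambda x: -sum((v - 0.5) ** 2 for v in x),
                    descriptor=lambda x: x)
print(len(elites), "cells filled with elites")
```
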
Date Issued: 2019
Identifier: CFE0007513, ucf:52638
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007513

Title: Automatically Acquiring a Semantic Network of Related Concepts.
Creator: Szumlanski, Sean; Gomez, Fernando; Wu, Annie; Hughes, Charles; Sims, Valerie; University of Central Florida
Abstract / Description: We describe the automatic acquisition of a semantic network in which over 7,500 of the most frequently occurring nouns in the English language are linked to their semantically related concepts in the WordNet noun ontology. Relatedness between nouns is discovered automatically from lexical co-occurrence in Wikipedia texts using a novel adaptation of an information-theoretically inspired measure. Our algorithm then capitalizes on salient sense clustering among these semantic associates to automatically disambiguate them to their corresponding WordNet noun senses (i.e., concepts). The resultant concept-to-concept associations, stemming from 7,593 target nouns, with 17,104 distinct senses among them, constitute a large-scale semantic network with 208,832 undirected edges between related concepts. Our work can thus be conceived of as augmenting the WordNet noun ontology with RelatedTo links.

The network, which we refer to as the Szumlanski-Gomez Network (SGN), has been subjected to a variety of evaluative measures, including manual inspection by human judges and quantitative comparison to gold standard data for semantic relatedness measurements. We have also evaluated the network's performance in an applied setting on a word sense disambiguation (WSD) task in which the network served as a knowledge source for established graph-based spreading activation algorithms, and have shown: a) the network is competitive with WordNet when used as a stand-alone knowledge source for WSD; b) combining our network with WordNet achieves disambiguation results that exceed the performance of either resource individually; and c) our network outperforms a similar resource, WordNet++ (Ponzetto & Navigli, 2010), that has been automatically derived from annotations in the Wikipedia corpus.

Finally, we present a study on human perceptions of relatedness. In our study, we elicited quantitative evaluations of semantic relatedness from human subjects using a variation of the classical methodology that Rubenstein and Goodenough (1965) employed to investigate human perceptions of semantic similarity. Judgments from individual subjects in our study exhibit high average correlation to the elicited relatedness means using leave-one-out sampling (r = 0.77, σ = 0.09, N = 73), although not as high as average human correlation in previous studies of similarity judgments, for which Resnik (1995) established an upper bound of r = 0.90 (σ = 0.07, N = 10). These results suggest that human perceptions of relatedness are less strictly constrained than evaluations of similarity, and establish a clearer expectation for what constitutes human-like performance by a computational measure of semantic relatedness. We also contrast the performance of a variety of similarity and relatedness measures on our dataset to their performance on similarity norms, and introduce our own dataset as a supplementary evaluative standard for relatedness measures.
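
A pointwise-mutual-information style score conveys the flavor of co-occurrence-based relatedness (the dissertation uses its own adaptation over Wikipedia text; this toy corpus is ours):

```python
import math
from collections import Counter
from itertools import combinations

def relatedness(docs):
    """PMI-style co-occurrence score: log of how much more often two
    words appear in the same document than independence would predict."""
    n = len(docs)
    occ, co = Counter(), Counter()
    for d in docs:
        words = sorted(set(d.split()))
        occ.update(words)
        co.update(combinations(words, 2))
    return {(a, b): math.log((c / n) / ((occ[a] / n) * (occ[b] / n)))
            for (a, b), c in co.items()}

docs = ["doctor hospital nurse", "doctor nurse patient",
        "piano violin concert", "piano concert hall"]
scores = relatedness(docs)
print(scores[("doctor", "nurse")])       # co-occurring concepts score high
print(("doctor", "piano") in scores)     # never co-occur: no relation found
```
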
Date Issued: 2013
Identifier: CFE0004759, ucf:49767
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0004759

Title: Modeling social norms in real-world agent-based simulations.
Creator: Beheshti, Rahmatollah; Sukthankar, Gita; Boloni, Ladislau; Wu, Annie; Swarup, Samarth; University of Central Florida
Abstract / Description: Studying and simulating social systems, including human groups and societies, can be a complex problem. In order to build a model that simulates humans' actions, it is necessary to consider the major factors that affect human behavior. Norms are one of these factors: social norms are the customary rules that govern behavior in groups and societies. Norms are everywhere around us, from the way people shake hands or bow to the clothes they wear. They play a large role in determining our behaviors. Studies on norms are much older than the age of computer science, since normative studies have been a classic topic in sociology, psychology, philosophy, and law. Various theories have been put forth about the functioning of social norms. Although an extensive amount of research on norms has been performed in recent years, there remains a significant gap between current models and models that can explain real-world normative behaviors. Most of the existing work on norms focuses on abstract applications, and very few realistic normative simulations of human societies can be found. The contributions of this dissertation include the following: 1) a new hybrid technique based on agent-based modeling and Markov chain Monte Carlo is introduced; this method is used to prepare a smoking case study for applying normative models. 2) This hybrid technique is described using category theory, a mathematical theory focusing on relations rather than objects. 3) The relationship between norm emergence in social networks and the theory of tipping points is studied. 4) A new lightweight normative architecture for studying smoking cessation trends is introduced; this architecture is then extended to a more general normative framework that can be used to model real-world normative behaviors. The final normative architecture considers cognitive and social aspects of norm formation in human societies. Normative architectures based on only one of these two aspects exist in the literature, but a normative architecture that effectively includes both has been missing.
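
A classic threshold model illustrates the tipping-point connection studied in contribution 3) (this simple model is our own illustration, not the dissertation's architecture):

```python
def spread_norm(neighbors, seeds, threshold=0.3, steps=100):
    """Threshold model of norm adoption on a social network: a node adopts
    the norm once the fraction of its neighbors that have adopted it
    crosses a tipping point."""
    adopted = set(seeds)
    for _ in range(steps):
        new = {v for v, nb in neighbors.items()
               if v not in adopted and nb
               and sum(u in adopted for u in nb) / len(nb) >= threshold}
        if not new:                       # cascade has stalled or finished
            break
        adopted |= new
    return adopted

ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}  # 10-person ring
ring[0].append(5); ring[5].append(0)                         # one shortcut tie
print(sorted(spread_norm(ring, seeds={0, 1})))               # full cascade
```
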
Date Issued: 2015
Identifier: CFE0005577, ucf:50244
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0005577

Title: Autonomous Quadcopter Videographer.
Creator: Coaguila Quiquia, Rey; Sukthankar, Gita; Wu, Annie; Hughes, Charles; University of Central Florida
Abstract / Description: In recent years, interest in quadcopters as a robotics platform for autonomous photography has increased. This is due to their small size and mobility, which allow them to reach places that are difficult or even impossible for humans. This thesis focuses on the design of an autonomous quadcopter videographer, i.e., a quadcopter capable of capturing good footage of a specific subject. In order to obtain this footage, the system needs to choose appropriate vantage points and control the quadcopter. Skilled human videographers can easily spot good filming locations where the subject and its actions can be seen clearly in the resulting video footage, but translating this knowledge to a robot can be complex. We present an autonomous system, implemented on a commercially available quadcopter, that achieves this using only monocular information and an accelerometer. Our system has two vantage point selection strategies: 1) a reactive approach, which moves the robot to a fixed location with respect to the human, and 2) the combination of the reactive approach and a POMDP planner that considers the target's movement intentions. We compare the behavior of these two approaches under different target movement scenarios. The results show that the POMDP planner obtains more stable footage with less quadcopter motion.
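
The reactive strategy reduces to holding a fixed pose relative to the subject; a sketch with assumed offsets (the distances and the frontal placement are illustrative, not the thesis's tuned values):

```python
import math

def vantage_point(target_xy, target_heading, dist=3.0, height=1.5):
    """Reactive vantage point: hover a fixed distance in front of the
    subject along their heading, with the camera yawed back at them."""
    tx, ty = target_xy
    vx = tx + dist * math.cos(target_heading)   # offset along heading
    vy = ty + dist * math.sin(target_heading)
    yaw = target_heading + math.pi              # face back toward subject
    return (vx, vy, height), yaw

print(vantage_point((2.0, 1.0), math.radians(90)))
```
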
Date Issued: 2015
Identifier: CFE0005592, ucf:50246
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0005592

Title: Synthetic generators for simulating social networks.
Creator: Mohammed Ali, Awrad; Sukthankar, Gita; Wu, Annie; Boloni, Ladislau; University of Central Florida
Abstract / Description: An application area of increasing importance is creating agent-based simulations to model human societies. One component of developing these simulations is the ability to generate realistic human social networks. Online social networking websites, such as Facebook, Google+, and Twitter, have increased in popularity in the last decade. Despite the increase in online social networking tools and the importance of studying human behavior in these networks, collecting data directly from these networks is not always feasible due to privacy concerns. Previous work in this area has primarily been limited to 1) network generators that aim to duplicate a small subset of the original network's properties and 2) problem-specific generators for applications such as the evaluation of community detection algorithms.

In this thesis, we extended two synthetic network generators to enable them to duplicate the properties of a specific dataset. In the first generator, we consider feature similarity and label homophily among individuals when forming links. The second generator is designed to handle multiplex networks that contain different link types. We evaluate the performance of both generators on existing real-world social network datasets, as well as comparing our methods with a related synthetic network generator. We demonstrate that the proposed synthetic network generators are time efficient and require only limited parameter optimization.
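
A sketch of the first generator's link-formation idea (feature similarity plus label homophily); the coefficients and features below are illustrative assumptions, not the thesis's fitted parameters:

```python
import random

def generate_network(features, labels, base_p=0.02, alpha=0.5, beta=0.3, seed=0):
    """Link probability grows with Jaccard feature similarity and with
    label homophily (same-label nodes attract)."""
    random.seed(seed)
    n, edges = len(features), []
    for i in range(n):
        for j in range(i + 1, n):
            shared = len(features[i] & features[j])
            sim = shared / max(len(features[i] | features[j]), 1)
            p = base_p + alpha * sim + beta * (labels[i] == labels[j])
            if random.random() < min(p, 1.0):
                edges.append((i, j))
    return edges

feats = [{"music", "ai"}, {"ai", "games"}, {"cooking"}, {"ai", "music"}]
labs = ["cs", "cs", "arts", "cs"]
print(generate_network(feats, labs))
```
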
Date Issued: 2014
Identifier: CFE0005532, ucf:50300
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0005532

Title: In-Memory Computing Using Formal Methods and Paths-Based Logic.
Creator: Velasquez, Alvaro; Jha, Sumit Kumar; Leavens, Gary; Wu, Annie; Subramani, K.; University of Central Florida
Abstract / Description: The continued scaling of the CMOS device has been largely responsible for the increase in computational power and consequent technological progress over the last few decades. However, the end of Dennard scaling has interrupted this era of sustained exponential growth in computing performance. Indeed, we are quickly reaching an impasse in the form of limitations in the lithographic processes used to fabricate CMOS devices and, even more dire, we are beginning to face fundamental physical phenomena, such as quantum tunneling, that are pervasive at the nanometer scale. Such phenomena manifest themselves in prohibitively high leakage currents and process variations, leading to inaccurate computations. As a result, there has been a surge of interest in computing architectures that can replace the traditional CMOS transistor-based methods. This thesis is a thorough investigation of how computations can be performed on one such architecture, called a crossbar. The methods proposed in this document apply to any crossbar consisting of two-terminal connective devices. First, we demonstrate how paths of electric current between two wires can be used as design primitives in a crossbar. We then leverage principles from the field of formal methods, in particular the area of bounded model checking, to automate the synthesis of crossbar designs for computing arithmetic operations. We demonstrate that our approach yields circuits that are state-of-the-art in terms of the number of operations required to perform a computation. Finally, we look at the benefits of using a 3D crossbar for computation, that is, a crossbar consisting of multiple layers of interconnects. A novel 3D crossbar computing paradigm is proposed for solving the Boolean matrix multiplication and transitive closure problems, and we show how this paradigm can be utilized, with small modifications, in the XPoint crossbar memory architecture that was recently announced by Intel.
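
Boolean matrix multiplication maps naturally onto the paths-based picture: entry (i, j) asks whether current can flow from wire i to wire j through some intermediate wire k. A software sketch of the two problems the 3D paradigm targets (the crossbar itself answers these queries in place; this is only the logical specification):

```python
def bool_matmul(A, B):
    """C[i][j] = 1 iff some intermediate k links i to j, i.e. 'does a
    conductive path exist from wire i to wire j through a middle wire?'"""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

def transitive_closure(A):
    """Reachability by iterating R <- R OR (R x R) to a fixpoint."""
    R = [row[:] for row in A]
    while True:
        P = bool_matmul(R, R)
        S = [[R[i][j] | P[i][j] for j in range(len(R))]
             for i in range(len(R))]
        if S == R:
            return R
        R = S

A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(transitive_closure(A))   # node 0 now reaches node 2 via node 1
```
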
Date Issued: 2018
Identifier: CFE0007419, ucf:52720
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007419

Title: Learning Internal State Memory Representations from Observation.
Creator: Wong, Josiah; Gonzalez, Avelino; Liu, Fei; Wu, Annie; Ontanon, Santiago; Wiegand, Rudolf; University of Central Florida
Abstract / Description: Learning from Observation (LfO) is a machine learning paradigm that mimics how people learn in daily life: learning how to do something simply by watching someone else do it. LfO has been used in various applications, from video game agent creation to driving a car, but it has always been limited by the inability of an observer to know what a performing entity chooses to remember as they act in an environment. Various methods have either ignored the effects of memory or otherwise made simplistic assumptions about its structure. In this dissertation, we propose a new method, Memory Composition Learning (MCL), that captures the influence of a performer's memory in an observed behavior through the creation of an auxiliary memory feature set that explicitly models the aspects of the environment with significance for future decisions, and which can be used with a machine learning technique to provide salient information from memory. It advances the state of the art by automatically learning the internal structure of memory instead of ignoring or predefining it. This research is difficult in that memory modeling is an unsupervised learning problem that we elect to solve solely from unobtrusive observation. This research is significant for LfO in that it will allow learning techniques that otherwise could not use information from memory to use a tailored set of learned memory features that capture salient influences from memory and enable decision-making based on these influences for more effective learning performance. To validate our hypothesis, we implemented a prototype for modeling observed memory influences with our approach and applied it to simulated vacuum cleaner and lawn mower domains. Our investigation revealed that MCL was able to automatically learn memory features that describe the influences on an observed actor's internal state, and which improved learning performance of observed behaviors.
Date Issued: 2019
Identifier: CFE0007879, ucf:52755
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007879

Title: Methods to Calculate Cut Volumes for Fault Trees with Dependencies Induced by Spatial Locations.
Creator: Hanes, Phillip; Wiegand, Rudolf; Wu, Annie; DeMara, Ronald; Song, Zixia; University of Central Florida
Abstract / Description: Fault tree analysis (FTA) is used to find and mitigate vulnerabilities in systems based on their constituent components. Methods exist to efficiently find minimal cut sets (MCS), which are combinations of components whose failure causes the overall system to fail. However, traditional FTA ignores the physical locations of the components. Components in close proximity to each other could be defeated by a single event with a radius of effect, such as an explosion or fire. Events such as the Deepwater Horizon explosion and subsequent oil spill demonstrate the potentially devastating risk posed by such spatial dependencies. This motivates the search for techniques to identify this type of vulnerability. Adding physical locations to the fault tree structure can help identify possible points of failure in the overall system caused by localized disasters. Since existing FTA methods cannot address these concerns, using this information requires extending existing solution methods or developing entirely new ones.

A problem complicating research in FTA is the lack of benchmark problems for evaluating methods, especially for fault trees with over one hundred components. This research presents a method of using Lindenmayer systems (L-systems) to generate fault trees that are reproducible, capable of producing fault trees with properties similar to real-world designs, and scalable while maintaining predictable structural properties. This approach will be useful for testing and analyzing different methodologies for FTA tasks at different scales and under different conditions.

Using a set of benchmark fault trees derived from L-systems, three approaches to finding these vulnerabilities were explored in this research. These approaches were compared by defining a metric called "minimal cut volumes" (MCV) for describing volumes of effect that defeat the system. Since no existing methods are known for solving this problem, the methods are compared to each other to evaluate performance.

1) The control method executes traditional FTA software to find minimal cut sets (MCS), then extends this approach by searching for clusters in the resulting MCS to find MCV.
2) The next method starts by searching for clusters of components in three-dimensional space, then evaluates combinations of clusters to find MCV that defeat the system.
3) The last method uses an evolutionary algorithm to search the space directly by selecting center points, then using the radius of the smallest sphere(s) as the fitness value for identifying MCV.

Results generated using each method are presented. The performance of each method is compared to that of the control method, and their utilities are evaluated accordingly.
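
A sketch of the L-system idea (the gate encoding below is one plausible choice of ours, not the dissertation's grammar): deterministic rewriting gives reproducible trees, depth controls scale, and the rules fix the structural properties.

```python
def lsystem(axiom, rules, depth):
    """Deterministic L-system rewriting: every derivation is exactly
    reproducible from (axiom, rules, depth)."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

# One plausible encoding: A/O are AND/OR gates, e is a basic event,
# and brackets delimit a gate's children.
rules = {"A": "A[O[ee]e]", "O": "O[eA[ee]]"}
tree = lsystem("A", rules, depth=3)
print(tree.count("e"), "basic events; prefix:", tree[:48])
```
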
Date Issued: 2018
Identifier: CFE0007403, ucf:52075
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007403

Title: Heterogeneous Reconfigurable Fabrics for In-circuit Training and Evaluation of Neuromorphic Architectures.
Creator: Mohammadizand, Ramtin; DeMara, Ronald; Lin, Mingjie; Sundaram, Kalpathy; Fan, Deliang; Wu, Annie; University of Central Florida
Abstract / Description: A heterogeneous device technology reconfigurable logic fabric is proposed which leverages the cooperating advantages of distinct magnetic random access memory (MRAM)-based look-up tables (LUTs) to realize sequential logic circuits, along with conventional SRAM-based LUTs to realize combinational logic paths. The resulting Hybrid Spin/Charge FPGA (HSC-FPGA), using magnetic tunnel junction (MTJ) devices within this topology, demonstrates commensurate reductions in area and power consumption over fabrics having LUTs constructed with either individual technology alone. Herein, a hierarchical top-down design approach is used to develop the HSC-FPGA, starting from the configurable logic block (CLB) and slice structures down to LUT circuits and the corresponding device fabrication paradigms. This facilitates a novel architectural approach to reduce leakage energy, minimize communication occurrence and energy cost by eliminating unnecessary data transfer, and support auto-tuning for resilience. Furthermore, HSC-FPGA enables new advantages of technology co-design, which trades off alternative mappings between emerging devices and transistors at runtime by allowing dynamic remapping to adaptively leverage the intrinsic computing features of each device technology. HSC-FPGA offers a platform for fine-grained Logic-In-Memory architectures and runtime-adaptive hardware.

An orthogonal dimension of fabric heterogeneity is non-determinism, enabled by either low-voltage CMOS or probabilistic emerging devices. It can be realized using probabilistic devices within a reconfigurable network to blend deterministic and probabilistic computational models. Herein, we consider the probabilistic spin logic p-bit device as a fabric element comprising a crossbar-structured weighted array. The programmability of the resistive network interconnecting p-bit devices can be achieved by modifying the resistive states of the array's weighted connections. Thus, the programmable weighted array forms a CLB-scale macro co-processing element with bitstream programmability. This allows field programmability for a wide range of classification problems and recognition tasks, and allows fluid mappings of probabilistic and deterministic computing approaches. In particular, a Deep Belief Network (DBN) is implemented in the field using recurrent layers of co-processing elements to form an n × m1 × m2 × ... × mi weighted array as a configurable hardware circuit, with an n-input layer followed by i-1 hidden layers. As neuromorphic architectures using post-CMOS devices increase in capability and network size, the utility and benefits of reconfigurable fabrics of neuromorphic modules can be anticipated to continue to accelerate.
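
The p-bit element follows a standard stochastic update rule; a sketch of two coupled p-bits in a weighted array (the coupling values and temperature are illustrative assumptions):

```python
import math, random

def pbit_network(J, h, steps=5000, beta=1.0, seed=1):
    """Probabilistic spin logic update: m_i = sgn(tanh(beta * I_i) - r),
    r ~ U(-1, 1), where I_i is the weighted input from the other p-bits;
    this is the update a crossbar-structured weighted array implements."""
    random.seed(seed)
    n = len(h)
    m = [random.choice((-1, 1)) for _ in range(n)]
    agree = 0
    for _ in range(steps):
        i = random.randrange(n)
        I = h[i] + sum(J[i][j] * m[j] for j in range(n))
        m[i] = 1 if math.tanh(beta * I) > random.uniform(-1, 1) else -1
        agree += m[0] == m[1]
    return agree / steps

J = [[0.0, 1.0], [1.0, 0.0]]          # ferromagnetic coupling
print(pbit_network(J, h=[0.0, 0.0]))  # the two p-bits agree most of the time
```
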
Date Issued: 2019
Identifier: CFE0007502, ucf:52643
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007502

Title: Context-Centric Affect Recognition From Paralinguistic Features of Speech.
Creator: Marpaung, Andreas; Gonzalez, Avelino; DeMara, Ronald; Sukthankar, Gita; Wu, Annie; Lisetti, Christine; University of Central Florida
Abstract / Description: As the field of affect recognition has progressed, many researchers have shifted from unimodal approaches to multimodal ones. In particular, the trend in the paralinguistic speech affect recognition domain has been to integrate other modalities such as facial expression, body posture, gait, and linguistic speech. Our work focuses on integrating contextual knowledge into paralinguistic speech affect recognition. We hypothesize that a framework to recognize affect through paralinguistic features of speech can improve its performance by integrating relevant contextual knowledge. This dissertation describes our research to integrate contextual knowledge into the paralinguistic affect recognition process from acoustic features of speech. We conceived, built, and tested a two-phased system called the Context-Based Paralinguistic Affect Recognition System (CxBPARS). The first phase of this system is context-free and uses the AdaBoost classifier, which applies data on the acoustic pitch, jitter, shimmer, Harmonics-to-Noise Ratio (HNR), and Noise-to-Harmonics Ratio (NHR) to make an initial judgment about the emotion most likely exhibited by the human elicitor. The second phase then adds context modeling to improve upon the context-free classifications from phase I. CxBPARS was inspired by a human subject study performed as part of this work, where test subjects were asked to classify an elicitor's emotion strictly from paralinguistic sounds and were then subsequently provided with contextual information to improve their selections. CxBPARS was rigorously tested and found to improve the success rate, in the worst case, from the state-of-the-art's 42% to 53%.
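
The two-phase structure can be sketched abstractly (the reweighting rule and all numbers below are illustrative assumptions; the dissertation's phase II context modeling is richer than a simple prior):

```python
def classify_affect(acoustic_scores, context_weights):
    """Two-phase scheme in the spirit of CxBPARS: phase I yields
    context-free per-emotion scores from acoustic features (pitch,
    jitter, shimmer, HNR, NHR); phase II reweights them with
    contextual knowledge and renormalizes."""
    combined = {e: s * context_weights.get(e, 1.0)
                for e, s in acoustic_scores.items()}
    total = sum(combined.values()) or 1.0
    posterior = {e: v / total for e, v in combined.items()}
    return max(posterior, key=posterior.get), posterior

phase1 = {"anger": 0.40, "joy": 0.35, "sadness": 0.25}   # context-free guess
context = {"anger": 0.4, "joy": 1.6, "sadness": 1.0}     # e.g., a celebration
print(classify_affect(phase1, context))
```
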
Date Issued: 2019
Identifier: CFE0007836, ucf:52831
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0007836

Title: A Fitness Function Elimination Theory for Blackbox Optimization and Problem Class Learning.
Creator: Anil, Gautham; Wu, Annie; Wiegand, Rudolf; Stanley, Kenneth; Clarke, Thomas; Jansen, Thomas; University of Central Florida
Abstract / Description: The modern view of optimization is that optimization algorithms are not designed in a vacuum, but can make use of information regarding the broad class of objective functions from which a problem instance is drawn. Using this knowledge, we want to design optimization algorithms that execute quickly (efficiency), solve the objective function with minimal samples (performance), and are applicable over a wide range of problems (abstraction). However, we present a new theory for blackbox optimization from which we conclude that of these three desired characteristics, only two can be maximized by any algorithm.

We put forward an alternate view of optimization where we use knowledge about the problem class and samples from the problem instance to identify which problem instances from the class are being solved. From this Elimination of Fitness Functions approach, an idealized optimization algorithm that minimizes sample counts over any problem class, given complete knowledge about the class, is designed. This theory allows us to learn more about the difficulty of various problems, and we are able to use it to develop problem complexity bounds.

We present general methods to model this algorithm over a particular problem class and gain efficiency at the cost of specifically targeting that class. This is demonstrated over the Generalized Leading-Ones problem and a generalization called LO**, and efficient algorithms with optimal performance are derived and analyzed. We also tighten existing bounds for LO***. Additionally, we present a probabilistic framework based on our Elimination of Fitness Functions approach that clarifies how one can ideally learn about the problem class we face from the objective functions. This problem learning increases the performance of an optimization algorithm at the cost of abstraction.

In the context of this theory, we re-examine the blackbox framework as an algorithm design framework and suggest several improvements to existing methods, including incorporating problem learning, not being restricted to the blackbox framework, and building parametrized algorithms. We feel that this theory and our recommendations will help a practitioner make substantially better use of all that is available in typical practical optimization algorithm design scenarios.
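
The elimination view admits a compact sketch over a finite problem class (the needle-in-a-haystack class is our choice of illustration, not one of the dissertation's benchmark classes): every sampled point discards the candidate objectives that disagree with its observed value.

```python
from itertools import product

def eliminate(candidates, observations):
    """Elimination-of-fitness-functions view of optimization: each
    (point, value) observation removes every candidate objective that
    disagrees, progressively identifying the instance being solved."""
    alive = dict(candidates)
    for x, y in observations:
        alive = {name: f for name, f in alive.items() if f(x) == y}
    return alive

# candidate class: all 'needle in a haystack' objectives on 2-bit strings
def needle(target):
    return lambda x: int(x == target)

candidates = {t: needle(t) for t in product((0, 1), repeat=2)}
truth = candidates[(1, 0)]
obs = [((0, 0), truth((0, 0))), ((1, 0), truth((1, 0)))]
print(list(eliminate(candidates, obs)))   # only the true instance survives
```
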
Date Issued: 2012
Identifier: CFE0004511, ucf:49268
Format: Document (PDF)
PURL: http://purl.flvc.org/ucf/fd/CFE0004511