Current Search: genetic algorithm
- Title
- BEHAVIOR OF VARIABLE-LENGTH GENETIC ALGORITHMS UNDER RANDOM SELECTION.
- Creator
-
Stringer, Harold, Wu, Annie, University of Central Florida
- Abstract / Description
-
In this work, we show how a variable-length genetic algorithm naturally evolves populations whose mean chromosome length grows shorter over time. A reduction in chromosome length occurs when selection is absent from the GA. Specifically, we divide the mating space into five distinct areas and provide a probabilistic and empirical analysis of the ability of matings in each area to produce children whose size is shorter than the parent generation's average size. Diversity of size within a GA's population is shown to be a necessary condition for a reduction in mean chromosome length to take place. We show how a finite variable-length GA under random selection pressure uses 1) diversity of size within the population, 2) over-production of shorter than average individuals, and 3) the imperfect nature of random sampling during selection to naturally reduce the average size of individuals within a population from one generation to the next. In addition to our findings, this work provides GA researchers and practitioners with 1) a number of mathematical tools for analyzing possible size reductions for various matings and 2) new ideas to explore in the area of bloat control.
- Date Issued
- 2007
- Identifier
- CFE0001652, ucf:47249
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001652
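A minimal sketch of the mechanism the abstract above describes: a variable-length GA with purely random selection, where one-point crossover uses independent cut points in each parent so child lengths can differ from both parents'. The chromosome contents, population size, and length cap are illustrative assumptions, not taken from the thesis, and whether the mean length drifts downward in any single run depends on the sampling effects the abstract analyzes.

```python
import random

def random_chromosome(max_len=50):
    return [random.randint(0, 1) for _ in range(random.randint(1, max_len))]

def crossover(p1, p2):
    # Independent cut points in each parent let child lengths differ
    # from both parents' lengths, so mean length is free to drift.
    c1 = random.randint(0, len(p1))
    c2 = random.randint(0, len(p2))
    return p1[:c1] + p2[c2:], p2[:c2] + p1[c1:]

population = [random_chromosome() for _ in range(200)]
for gen in range(51):
    if gen % 10 == 0:
        mean_len = sum(len(c) for c in population) / len(population)
        print(f"gen {gen:2d}  mean chromosome length {mean_len:.2f}")
    children = []
    while len(children) < len(population):
        a = random.choice(population)   # random selection: fitness plays no role
        b = random.choice(population)
        children.extend(crossover(a, b))
    population = children
```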
- Title
- A METHODOLOGY FOR MINIMIZING THE OSCILLATIONS IN SUPPLY CHAINS USING SYSTEM DYNAMICS AND GENETIC ALGORITHMS.
- Creator
-
LAKKOJU, RAMAMOORTHY, RABELO, LUIS, University of Central Florida
- Abstract / Description
-
Supply Chain Management (SCM) is a critically significant strategy that enterprises depend on to meet challenges that they face because of highly competitive and dynamic business environments of today. Supply chain management involves the entire network of processes from procurement of raw materials/services/technologies to manufacturing or servicing intermediate products/services to converting them into final products or services and then distributing and retailing them till they reach final customers. A supply chain network by nature is a large and complex, engineering and management system. Oscillations occurring in a supply chain because of internal and/or external influences and measures to be taken to mitigate/minimize those oscillations are a core concern in managing the supply chain and driving an organization towards a competitive advantage. The objective of this thesis is to develop a methodology to minimize the oscillations occurring in a supply chain by making use of the techniques of System Dynamics (SD) and Genetic Algorithms (GAs). System dynamics is a very efficient tool to model large and complex systems in order to understand their complex, non-linear dynamic behavior. GAs are stochastic search algorithms, based on the mechanics of natural selection and natural genetics, used to search complex and non-linear search spaces where traditional techniques may be unsuitable.
- Date Issued
- 2005
- Identifier
- CFE0000683, ucf:46489
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000683
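To make the SD-plus-GA pairing concrete, here is a toy sketch: a hypothetical two-parameter stock-management model stands in for the system dynamics simulation, and a bare-bones GA searches for the ordering-policy gains (alpha, beta) that minimize oscillation around an inventory target. The model equations, constants, and GA settings are invented for illustration and are not from the thesis.

```python
import random

def simulate(alpha, beta, steps=200):
    # Toy stock-management model (hypothetical): inventory is adjusted
    # toward a target of 100 units; demand steps up at t = 50.
    inv, pipeline, cost = 100.0, 20.0, 0.0
    for t in range(steps):
        demand = 10.0 if t < 50 else 15.0
        order = max(0.0, demand + alpha * (100.0 - inv)
                    + beta * (2.0 * demand - pipeline))
        inv += 0.5 * pipeline - demand      # half the pipeline arrives each step
        pipeline = 0.5 * pipeline + order
        cost += (inv - 100.0) ** 2          # penalize oscillation around target
    return cost

def fitness(genes):
    return -simulate(*genes)

pop = [(random.random(), random.random()) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]
    pop = parents + [tuple(min(1.0, max(0.0, (x + y) / 2 + random.gauss(0, 0.05)))
                           for x, y in zip(*random.sample(parents, 2)))
                     for _ in range(20)]
print("best (alpha, beta):", max(pop, key=fitness))
```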
- Title
- A COMPETITIVE RECONFIGURATION APPROACH TO AUTONOMOUS FAULT HANDLING USING GENETIC ALGORITHMS.
- Creator
-
Zhang, Kening, DeMara, Ronald F, University of Central Florida
- Abstract / Description
-
In this dissertation, a novel self-repair approach based on Consensus Based Evaluation (CBE) for autonomous repair of SRAM-based Field Programmable Gate Arrays (FPGAs) is developed, evaluated, and refined. An initial population of functionally identical (same input-output behavior), yet physically distinct (alternative design or place-and-route realization) FPGA configurations is produced at design time. During run-time, the CBE approach ranks these alternative configurations after evaluating their discrepancy relative to the consensus formed by the population. Through runtime competition, faults in the logical resources become occluded from the visibility of subsequent FPGA operations. Meanwhile, offspring formed through crossover and mutation of faulty and viable configurations are selected at a controlled re-introduction rate for evaluation and refurbishment. Refurbishments are evolved in-situ, with online real-time input-based performance evaluation, enhancing system availability and sustainability, creating an Organic Embedded System (OES). A fault tolerance model called N Modular Redundancy with Standby (NMRSB) is developed which combines the two popular fault tolerance techniques of NMR and Standby fault tolerance in order to facilitate the CBE approach. This dissertation develops two instances of the NMRSB system: Triple Modular Redundancy with Standby (TMRSB) and Duplex with Standby (DSB). A hypothetical Xilinx Virtex-II Pro FPGA model demonstrates their viability for various applications including a 3-bit x 3-bit multiplier, and the MCNC91 benchmark circuits. Experiments conducted on the model evaluate the performance of three new genetic operators and demonstrate progress towards a completely self-contained single-chip implementation so that the FPGA can refurbish itself without requiring a PC host to execute the Genetic Algorithm. This dissertation presents results from the simulations of multiple applications with a CBE model implemented in the C++ programming language. Starting with an initial population of 20 and 30 viable configurations for TMRSB and DSB respectively, a single stuck-at fault is introduced in the logic resources. Fault refurbishment experiments are conducted under supervision of CBE using a fitness state evaluation function based on competing outputs, fitness adjustment, and differing threshold levels. The device remains online throughout the process by which a complete repair is realized with Hamming Distance and Bitweight voting schemes. The results indicate a Hamming Distance TMRSB approach can prevent the most pervasive fault impacts and realize complete refurbishment. Experimental results also show that the Autonomic Layer demonstrates 100% faulty component isolation for both Functional Elements (FEs) and Autonomous Elements (AEs) with randomly injected single and multiple faults. Using logic circuits from the MCNC-91 benchmark set, availability during repair phases averaged 75.05%, 82.21%, and 65.21% for the z4ml, cm85a, and cm138a circuits respectively under stated conditions. In addition to simulation, the proposed OES architecture synthesized from HDL was prototyped on a Xilinx Virtex II Pro FPGA device supporting partial reconfiguration to demonstrate the feasibility of intrinsic regeneration of the selected circuit.
- Date Issued
- 2008
- Identifier
- CFE0002280, ucf:47849
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002280
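The consensus-ranking step at the heart of CBE can be illustrated in a few lines: each configuration's output vector is compared against the population's bitwise majority, and the configuration with the largest Hamming discrepancy is flagged as most suspect. The population, output vectors, and injected fault below are invented for illustration.

```python
def consensus(outputs):
    # Bitwise majority vote across all configurations' output vectors.
    return [int(sum(bits) * 2 > len(bits)) for bits in zip(*outputs)]

def discrepancy(output, reference):
    # Hamming distance to the consensus, as in the CBE ranking step.
    return sum(a != b for a, b in zip(output, reference))

# Hypothetical population: nine healthy configurations agree on the
# current input's output vector; one faulty configuration differs.
population = [[1, 0, 1, 1, 0, 1] for _ in range(9)]
population.append([1, 1, 1, 0, 0, 1])   # injected fault flips two bits

reference = consensus(population)
ranked = sorted(range(len(population)),
                key=lambda i: discrepancy(population[i], reference))
print("most suspect configuration index:", ranked[-1])   # prints 9
```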
- Title
- LEARNING FROM GEOMETRY IN LEARNING FOR TACTICAL AND STRATEGIC DECISION DOMAINS.
- Creator
-
Gauci, Jason, Stanley, Kenneth, University of Central Florida
- Abstract / Description
-
Artificial neural networks (ANNs) are an abstraction of the low-level architecture of biological brains that are often applied in general problem solving and function approximation. Neuroevolution (NE), i.e. the evolution of ANNs, has proven effective at solving problems in a variety of domains. Information from the domain is input to the ANN, which outputs its desired actions. This dissertation presents a new NE algorithm called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), based on a novel indirect encoding of ANNs. The key insight in HyperNEAT is to make the algorithm aware of the geometry in which the ANNs are embedded and thereby exploit such domain geometry to evolve ANNs more effectively. The dissertation focuses on applying HyperNEAT to tactical and strategic decision domains. These domains involve simultaneously considering short-term tactics while also balancing long-term strategies. Board games such as checkers and Go are canonical examples of such domains; however, they also include real-time strategy games and military scenarios. The dissertation details three proposed extensions to HyperNEAT designed to work in tactical and strategic decision domains. The first is an action selector ANN architecture that allows the ANN to indicate its judgements on every possible action all at once. The second technique is called substrate extrapolation. It allows learning basic concepts at a low resolution, and then increasing the resolution to learn more advanced concepts. The final extension is geometric game-tree pruning, whereby HyperNEAT can endow the ANN with the ability to focus on specific areas of a domain (such as a checkers board) that deserve more inspection. The culminating contribution is to demonstrate the ability of HyperNEAT with these extensions to play Go, one of the most challenging games for artificial intelligence, by combining HyperNEAT with UCT.
- Date Issued
- 2010
- Identifier
- CFE0003464, ucf:48962
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003464
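The indirect-encoding idea behind HyperNEAT can be sketched briefly: a small coordinate function (in HyperNEAT, an evolved CPPN; here a fixed stand-in) is queried with the substrate coordinates of every pair of nodes, and its output becomes the connection weight, so the weight pattern inherits the domain's geometry. The function and substrate size below are illustrative assumptions, not the dissertation's setup.

```python
import math

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN (hypothetical fixed function): maps the
    # substrate coordinates of a source and target node to a weight, so the
    # weight pattern inherits geometric regularities such as symmetry.
    return math.sin(x1 - x2) * math.exp(-((y1 - y2) ** 2))

# Query every ordered pair of nodes on a small 5x5 substrate.
coords = [(x / 2.0, y / 2.0) for x in range(-2, 3) for y in range(-2, 3)]
weights = {(src, dst): cppn(*src, *dst) for src in coords for dst in coords}
print(len(coords), "nodes ->", len(weights), "geometry-derived weights")
```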
- Title
- Self-Scaling Evolution of Analog Computation Circuits.
- Creator
-
Pyle, Steven, DeMara, Ronald, Vosoughi, Azadeh, Chanda, Debashis, University of Central Florida
- Abstract / Description
-
Energy and performance improvements of continuous-time analog-based computation for selected applications offer an avenue to continue improving the computational ability of tomorrow's electronic devices at current technology scaling limits. However, analog computation is plagued by the difficulty of designing complex computational circuits, programmability, as well as the inherent lack of accuracy and precision when compared to digital implementations. In this thesis, evolutionary algorithm-based techniques are utilized within a reconfigurable analog fabric to realize an automated method of designing analog-based computational circuits while adapting the functional range to improve performance. A Self-Scaling Genetic Algorithm is proposed to adapt solutions to computationally-tractable ranges in hardware-constrained analog reconfigurable fabrics. It operates by utilizing a Particle Swarm Optimization (PSO) algorithm that operates synergistically with a Genetic Algorithm (GA) to adaptively scale and translate the functional range of computational circuits composed of high-level or low-level Computational Analog Elements to improve performance and realize functionality otherwise unobtainable on the intrinsic platform. The technique is demonstrated by evolving square, square-root, cube, and cube-root analog computational circuits on the Cypress PSoC-5LP System-on-Chip. Results indicate that the Self-Scaling Genetic Algorithm improves our error metric on average 7.18-fold, up to 12.92-fold for computational circuits that produce outputs beyond device range. Results were also favorable compared to previous works, which utilized extrinsic evolution of circuits with much greater complexity than was possible on the PSoC-5LP.
- Date Issued
- 2015
- Identifier
- CFE0005866, ucf:50873
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005866
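A sketch of the self-scaling idea under stated assumptions: a stand-in "evolved circuit" saturates outside its device range, and a minimal PSO searches input/output scales and an offset so the composed function tracks the target over a wider range, mirroring the PSO-assists-GA division of labor in the abstract. The circuit model, ranges, and PSO constants are invented; this is not the thesis's PSoC implementation.

```python
import random

def circuit(u):
    # Stand-in for a GA-evolved analog squaring circuit (hypothetical):
    # accurate only inside the device range, saturating at 1.0 outside.
    return min(1.0, max(0.0, u * u))

def error(p):
    s_in, s_out, off = p
    xs = [i / 20.0 for i in range(41)]            # target range [0, 2]
    return sum((s_out * circuit(s_in * x + off) - x * x) ** 2 for x in xs)

# Minimal PSO that scales/translates the circuit's functional range so
# outputs that would saturate on-chip are mapped back into range.
swarm = [[random.uniform(0, 2), random.uniform(0, 5), random.uniform(-1, 1)]
         for _ in range(25)]
vel = [[0.0] * 3 for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=error)[:]
for _ in range(200):
    for i, p in enumerate(swarm):
        for d in range(3):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - p[d])
                         + 1.5 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if error(p) < error(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest + [gbest], key=error)[:]
print("s_in, s_out, offset:", [round(v, 3) for v in gbest],
      "error:", round(error(gbest), 4))
```

An exact fit exists near (0.5, 4, 0), since scaling the input by half keeps the square inside the device range and the output scale restores it, which is precisely the kind of range adaptation the abstract describes.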
- Title
- AN ADAPTIVE MULTIOBJECTIVE EVOLUTIONARY APPROACH TO OPTIMIZE ARTMAP NEURAL NETWORKS.
- Creator
-
Kaylani, Assem, Georgiopoulos, Michael, University of Central Florida
- Abstract / Description
-
This dissertation deals with the evolutionary optimization of ART neural network architectures. ART (adaptive resonance theory) was introduced by Grossberg in 1976. In the last 20 years (1987-2007) a number of ART neural network architectures were introduced into the literature (Fuzzy ARTMAP (1992), Gaussian ARTMAP (1996 and 1997) and Ellipsoidal ARTMAP (2001)). In this dissertation, we focus on the evolutionary optimization of ART neural network architectures with the intent of optimizing the size and the generalization performance of the ART neural network. A number of researchers have focused on the evolutionary optimization of neural networks, but no research had been performed on the evolutionary optimization of ART neural networks prior to 2006, when Daraiseh used evolutionary techniques for the optimization of ART structures. This dissertation extends in many ways and expands in different directions the evolution of ART architectures, such as: (a) it uses a multi-objective optimization of ART structures, thus providing the user with multiple solutions (ART networks) with varying degrees of merit, instead of a single solution; (b) it uses GA parameters that are adaptively determined throughout the ART evolution; (c) it identifies a proper size of the validation set used to calculate the fitness function needed for ART's evolution, thus speeding up the evolutionary process; and (d) it produces experimental results that demonstrate the evolved ART's effectiveness (good accuracy and small size) and efficiency (speed) compared with other competitive ART structures, as well as other classifiers (CART (Classification and Regression Trees) and SVM (Support Vector Machines)). The overall methodology to evolve ART using a multi-objective approach, the chromosome representation of an ART neural network, the genetic operators used in ART's evolution, and the automatic adaptation of some of the GA parameters in ART's evolution could also be applied in the evolution of other exemplar-based neural network classifiers such as the probabilistic neural network and the radial basis function neural network.
- Date Issued
- 2008
- Identifier
- CFE0002212, ucf:47907
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002212
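The multi-objective aspect (returning several ART networks of varying merit rather than one) reduces to maintaining a Pareto front over competing objectives such as validation error and network size. A minimal sketch, with randomly generated (error, size) pairs standing in for evolved networks; the actual objectives and evolutionary machinery in the dissertation are richer.

```python
import random

def dominates(a, b):
    # Objective tuples are (validation error, category count); lower is better.
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical evolved ART networks summarized by their two objectives.
random.seed(0)
networks = [(round(random.uniform(0.05, 0.30), 3), random.randint(5, 120))
            for _ in range(30)]
for error, size in sorted(pareto_front(networks)):
    print(f"error={error:.3f}  categories={size}")
```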
- Title
- EVOLUTIONARY OPTIMIZATION OF SUPPORT VECTOR MACHINES.
- Creator
-
Gruber, Fred, Rabelo, Luis, University of Central Florida
- Abstract / Description
-
Support vector machines are a relatively new approach for creating classifiers that have become increasingly popular in the machine learning community. They present several advantages over other methods, like neural networks, in areas such as training speed, convergence, and complexity control of the classifier, as well as a stronger mathematical background based on optimization and statistical learning theory. This thesis deals with the problem of model selection with support vector machines, that is, the problem of finding the optimal parameters that will improve the performance of the algorithm. It is shown that genetic algorithms provide an effective way to find the optimal parameters for support vector machines. The proposed algorithm is compared with a backpropagation Neural Network on a dataset that represents individual models for electronic commerce.
- Date Issued
- 2004
- Identifier
- CFE0000244, ucf:46251
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000244
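GA-driven SVM model selection typically means searching the regularization and kernel parameters (commonly C and gamma for an RBF kernel) with cross-validated accuracy as fitness. A minimal sketch assuming scikit-learn and a synthetic dataset; the thesis's actual parameter set, data, and GA configuration may differ.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(genes):
    # Genes are log10(C) and log10(gamma); fitness is 3-fold CV accuracy.
    C, gamma = 10 ** genes[0], 10 ** genes[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = [[random.uniform(-2, 3), random.uniform(-4, 1)] for _ in range(12)]
for _ in range(10):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:6]
    pop = elite + [[(a + b) / 2 + random.gauss(0, 0.3)
                    for a, b in zip(*random.sample(elite, 2))]
                   for _ in range(6)]
best = max(pop, key=fitness)
print("best log10(C), log10(gamma):", [round(g, 2) for g in best])
```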
- Title
- THE PROTEOMICS APPROACH TO EVOLUTIONARY COMPUTATION: AN ANALYSIS OF PROTEOME-BASED LOCATION INDEPENDENT REPRESENTATIONS BASED ON THE PROPORTIONAL GENETIC ALGORITHM.
- Creator
-
Garibay, Ivan, Wu, Annie, University of Central Florida
- Abstract / Description
-
As the complexity of our society and computational resources increases, so does the complexity of the problems that we approach using evolutionary search techniques. There are recent approaches to deal with the problem of scaling evolutionary methods to cope with highly complex difficult problems. Many of these approaches are biologically inspired and share an underlying principle: a problem representation based on basic representational building blocks that interact and self-organize into complex functions or designs. The observation from the central dogma of molecular biology that proteins are the basic building blocks of life, and the recent advances in proteomics on analysis of structure, function and interaction of entire protein complements, lead us to propose a unifying framework of thought for these approaches: the proteomics approach. This thesis proposes to investigate whether the self-organization of protein analogous structures at the representation level can increase the degree of complexity and "novelty" of solutions obtainable using evolutionary search techniques. In order to do so, we identify two fundamental aspects of this transition: (1) proteins interact in a three dimensional medium analogous to a multiset; and (2) proteins are functional structures. The first aspect is foundational for understanding the second. This thesis analyzes the first aspect. It investigates the effects of using a genome-to-proteome mapping on evolutionary computation. This analysis is based on a genetic algorithm (GA) with a string-to-multiset mapping that we call the proportional genetic algorithm (PGA), and it focuses on the feasibility and effectiveness of this mapping. This mapping leads to a fundamental departure from typical EC methods: using a multiset of proteins as an intermediate mapping results in a completely location independent problem representation, where the location of the genes in a genome has no effect on the fitness of the solutions. Completely location independent representations, by definition, do not suffer from traditional EC hurdles associated with the location of the genes or positional effects in a genome. Such representations have the ability to self-organize into a genomic structure that appears to favor positive correlations between form and quality of represented solutions. Completely location independent representations also introduce new problems of their own, such as the need for large alphabets of symbols and the theoretical need for larger representation spaces than traditional approaches. Overall, these representations perform as well or better than traditional representations, and they appear to be particularly good for the class of problems involving proportions or multisets. This thesis concludes that the use of protein analogous structures as an intermediate representation in evolutionary computation is not only feasible but in some cases advantageous. In addition, it lays the groundwork for further research on proteins as functional self-organizing structures capable of building increasingly complex functionality, and as basic units of problem representation for evolutionary computation.
- Date Issued
- 2004
- Identifier
- CFE0000311, ucf:46307
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000311
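The string-to-multiset mapping of the PGA can be shown in a few lines: decoding keeps only symbol proportions, so gene position carries no information and any permutation of a genome has identical fitness. The alphabet, target proportions, and fitness form below are illustrative assumptions, not the thesis's benchmarks.

```python
import random
from collections import Counter

ALPHABET = "ABCD"
TARGET = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}   # hypothetical target mix

def decode(genome):
    # Genome -> multiset: only symbol proportions survive decoding,
    # so the position of a gene in the genome carries no information.
    counts = Counter(genome)
    return {s: counts[s] / len(genome) for s in ALPHABET}

def fitness(genome):
    p = decode(genome)
    return -sum((p[s] - TARGET[s]) ** 2 for s in ALPHABET)

genome = "".join(random.choice(ALPHABET) for _ in range(100))
shuffled = "".join(random.sample(genome, len(genome)))
assert fitness(genome) == fitness(shuffled)   # complete location independence
print(decode(genome))
```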
- Title
- ANALYZING THE EFFECTS OF MODULARITY ON SEARCH SPACES.
- Creator
-
Garibay, Ozlem, Wu, Annie, University of Central Florida
- Abstract / Description
-
We are continuously challenged by ever increasing problem complexity and the need to develop algorithms that can solve complex problems and solve them within a reasonable amount of time. Modularity is thought to reduce problem complexity by decomposing large problems into smaller and less complex subproblems. In practice, introducing modularity into evolutionary algorithm representations appears to improve search performance; however, how and why modularity improves performance is not well understood. In this thesis, we seek to better understand the effects of modularity on search. In particular, what are the effects of module creation on the search space structure and how do these structural changes affect performance? We define a theoretical and empirical framework to study modularity in evolutionary algorithms. Using this framework, we provide evidence of the following. First, not all types of modularity have an effect on search. We can have highly modular spaces that in essence are equivalent to simpler non-modular spaces. This is the case, because these spaces achieve higher degree of modularity without changing the fundamental structure of the search space. Second, for the cases when modularity actually has an effect on the fundamental structure of the search space, if left without guidance, it would only crowd and complicate the space structure resulting in a harder space for most search algorithms. Finally, we have the case when modularity not only has an effect in the search space structure, but most importantly, module creation can be guided by problem domain knowledge. When this knowledge can be used to estimate the value of a module in terms of its contribution toward building the solution, then modularity is extremely effective. It is in this last case that creating high value modules or low value modules has a direct and decisive impact on performance. The results presented in this thesis help to better understand, in a principled way, the effects of modularity on search. Better understanding the effects of modularity on search is a step forward in the larger issue of evolutionary search applied to increasingly complex problems.
- Date Issued
- 2008
- Identifier
- CFE0002490, ucf:47680
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002490
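One way to picture "high value modules" is as macro symbols that expand into useful substrings of a solution: a module encapsulating a correct building block shortens the genome a search algorithm must get right, while a useless module merely enlarges the alphabet. The toy hill-climbing comparison below is invented for this listing and is not the thesis's formal framework.

```python
import random

TARGET = "ABABCDCDABAB"

def expand(genome, modules):
    # Module symbols act as macros expanding to primitive substrings.
    return "".join(modules.get(ch, ch) for ch in genome)

def score(genome, modules):
    expanded = expand(genome, modules)[:len(TARGET)]
    return sum(a == b for a, b in zip(expanded, TARGET))

def hill_climb(alphabet, modules, length, steps=400):
    g = [random.choice(alphabet) for _ in range(length)]
    for _ in range(steps):
        h = g[:]
        h[random.randrange(length)] = random.choice(alphabet)
        if score(h, modules) >= score(g, modules):
            g = h
    return score(g, modules)

random.seed(1)
# A high-value module encapsulates a correct building block of the target;
# the modular genome needs only 6 decisions instead of 12.
print("with module M=ABAB:", hill_climb("ABCDM", {"M": "ABAB"}, 6))
print("primitives only:   ", hill_climb("ABCD", {}, 12))
```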
- Title
- PLANNING AND SCHEDULING FOR LARGE-SCALE DISTRIBUTED SYSTEMS.
- Creator
-
Yu, Han, Marinescu, Dan, University of Central Florida
- Abstract / Description
-
Many applications require computing resources well beyond those available on any single system. Simulations of atomic and subatomic systems with application to material science, computations related to the study of natural sciences, and computer-aided design are examples of applications that can benefit from the resource-rich environment provided by a large collection of autonomous systems interconnected by high-speed networks. To transform such a collection of systems into a user's virtual machine, we have to develop new algorithms for coordination, planning, scheduling, resource discovery, and other functions that can be automated. Then we can develop societal services based upon these algorithms, which hide the complexity of the computing system from users. In this dissertation, we address the problem of planning and scheduling for large-scale distributed systems. We discuss a model of the system, analyze the need for planning, scheduling, and plan switching to cope with a dynamically changing environment, present algorithms for the three functions, report the simulation results to study the performance of the algorithms, and introduce an architecture for an intelligent large-scale distributed system.
- Date Issued
- 2005
- Identifier
- CFE0000781, ucf:46595
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000781
- Title
- Analysis of large-scale population genetic data using efficient algorithms and data structures.
- Creator
-
Naseri, Ardalan, Zhang, Shaojie, Hughes, Charles, Yooseph, Shibu, Zhi, Degui, University of Central Florida
- Abstract / Description
-
With the availability of genotyping data of very large samples, there is an increasing need for tools that can efficiently identify genetic relationships among all individuals in the sample. Modern biobanks cover genotypes up to 0.1%-1% of an entire large population. At this scale, genetic relatedness among samples is ubiquitous. However, current methods are not efficient for uncovering genetic relatedness at such a scale. We developed a new method, Random Projection for IBD Detection (RaPID), for detecting Identical-by-Descent (IBD) segments, a fundamental concept in genetics, in large panels. RaPID detects all IBD segments over a certain length in time linear in the sample size. We take advantage of an efficient population genotype index, Positional BWT (PBWT), by Richard Durbin. PBWT achieves linear-time query of perfectly identical subsequences among all samples. However, the original PBWT is not tolerant to genotyping errors, which often interrupt long IBD segments into short fragments. The key idea of RaPID is that the problem of approximate high-resolution matching over a long range can be mapped to the problem of exact matching of low-resolution subsampled sequences with high probability. PBWT provides an appropriate data structure for bi-allelic data. With increasing sample sizes, more multi-allelic sites are expected to be observed. Hence, there is a necessity to handle multi-allelic genotype data. We also introduce a multi-allelic version of the original Positional Burrows-Wheeler Transform (mPBWT). The increasingly large cohorts of whole genome genotype data present an opportunity to search a large cohort for people genetically related to a given individual. At the same time, doing so efficiently presents a challenge. The PBWT algorithm offers constant-time matching between one haplotype and an arbitrarily large panel at each position, but only for the maximal matches. We used the PBWT data structure to develop a method to search for all matches of a given query in a panel. The matches larger than a given length correspond to all shared IBD segments of certain lengths between the query and other individuals in the panel. The time complexity of the proposed method is independent of the number of individuals in the panel. In order to achieve a time complexity independent of the number of haplotypes, additional data structures are introduced. Some regions of the genome may be shared by multiple individuals rather than only a pair. Clusters of identical haplotypes could reveal information about the history of intermarriage and isolation of a population, and also be medically important. We propose an efficient method to find clusters of identical segments among individuals in a large panel, called cPBWT, using the PBWT data structure. The time complexity of finding all clusters of identical matches is linear in the sample size. The human genome harbors several runs of homozygous sites (ROHs), where identical haplotypes are inherited from each parent. We applied cPBWT on UK-Biobank and searched for clusters of ROH regions that are shared among multiple individuals. We discovered strong associations between ROH regions and some non-cancerous diseases, specifically auto-immune disorders.
- Date Issued
- 2018
- Identifier
- CFE0007764, ucf:52393
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007764
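The PBWT machinery underlying RaPID and cPBWT rests on one update: at each site, haplotype indices are re-ordered by their reversed prefixes, which places haplotypes sharing a long match ending at that site next to each other. A compact sketch of that positional sort on a tiny made-up panel follows; Durbin's full algorithm also maintains divergence arrays, omitted here.

```python
def positional_orders(haplotypes):
    # At each site, stably partition haplotype indices by allele, yielding
    # the PBWT ordering by reversed prefixes: haplotypes that share a long
    # match ending at a site become adjacent in that site's ordering.
    order = list(range(len(haplotypes)))
    orders = []
    for site in range(len(haplotypes[0])):
        zeros = [i for i in order if haplotypes[i][site] == 0]
        ones = [i for i in order if haplotypes[i][site] == 1]
        order = zeros + ones
        orders.append(order)
    return orders

panel = [
    [0, 1, 0, 1, 1],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 1],
]
for site, order in enumerate(positional_orders(panel)):
    print("after site", site, "order:", order)
```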
- Title
- AN INVERSE ALGORITHM TO ESTIMATE THERMAL CONTACT RESISTANCE.
- Creator
-
Gill, Jennifer, Kassab, Alain, University of Central Florida
- Abstract / Description
-
Thermal systems often feature composite regions that are mechanically mated. In general, there exists a significant temperature drop across the interface between such regions which may be composed of similar or different materials. The parameter characterizing this temperature drop is the thermal contact resistance, which is defined as the ratio of the temperature drop to the heat flux normal to the interface. The thermal contact resistance is due to roughness effects between mating surfaces which cause certain regions of the mating surfaces to lose contact thereby creating gaps. In these gap regions, the principal modes of heat transfer are conduction across the contacting regions of the interface, conduction or natural convection in the fluid filling the gap regions of the interface, and radiation across the gap surfaces. Moreover, the contact resistance is a function of contact pressure as this can significantly alter the topology of the contact region. The thermal contact resistance is a phenomenologically complex function and can significantly alter prediction of thermal models of complex multi-component structures. Accurate estimates of thermal contact resistances are important in engineering calculations and find application in thermal analysis ranging from relatively simple layered and composite materials to more complex biomaterials. There have been many studies devoted to the theoretical predictions of thermal contact resistance and although general theories have been somewhat successful in predicting thermal contact resistances, most reliable results have been obtained experimentally. This is due to the fact that the nature of thermal contact resistance is quite complex and depends on many parameters including types of mating materials, surface characteristics of the interfacial region such as roughness and hardness, and contact pressure distribution. In experiments, temperatures are measured at a certain number of locations, usually close to the contact surface, and these measurements are used as inputs to a parameter estimation procedure to arrive at the sought-after thermal contact resistance. Most studies seek a single value for the contact resistance, while the resistance may in fact also vary spatially. In this thesis, an inverse problem (IP) is formulated to estimate the spatial variation of the thermal contact resistance along an interface in a two-dimensional configuration. Temperatures measured at discrete locations using embedded sensors appropriately placed in proximity to the interface provide the additional information required to solve the inverse problem. A superposition method serves to determine sensitivity coefficients and provides guidance in the location of the measuring points. Temperature measurements are then used to define a regularized quadratic functional that is minimized to yield the contact resistance between the two mating surfaces. A boundary element method analysis (BEM) provides the temperature field under current estimates of the contact resistance in the solution of the inverse problem when the geometry of interest is not regular, while an analytical solution can be used for regular geometries. Minimization of the IP functional is carried out by the Levenberg-Marquardt method or by a Genetic Algorithm depending on the problem under consideration. The L-curve method of Hansen is used to choose the optimal regularization parameter. A series of numerical examples are provided to demonstrate and validate the approach.
- Date Issued
- 2005
- Identifier
- CFE0000748, ucf:46582
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000748
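The inverse formulation reduces to minimizing a regularized misfit between computed and measured sensor temperatures over candidate contact-resistance distributions. The sketch below uses a made-up linear forward model in place of the BEM/analytical solution and a bare-bones GA as the minimizer; the sensitivity matrix, regularization weight, and GA settings are all illustrative assumptions.

```python
import random

# Hypothetical linearized forward model: five sensor temperatures respond
# to the contact resistance at five interface segments (made-up matrix).
TRUE_R = [0.8, 1.0, 1.3, 1.1, 0.9]
SENSITIVITY = [[0.5 if i == j else 0.1 for j in range(5)] for i in range(5)]

def forward(R):
    return [sum(s * r for s, r in zip(row, R)) for row in SENSITIVITY]

T_MEASURED = forward(TRUE_R)   # noise-free synthetic "measurements"

def functional(R, lam=1e-3):
    # Regularized quadratic functional: data misfit plus a smoothness term.
    misfit = sum((t - m) ** 2 for t, m in zip(forward(R), T_MEASURED))
    smooth = sum((R[i + 1] - R[i]) ** 2 for i in range(len(R) - 1))
    return misfit + lam * smooth

pop = [[random.uniform(0.5, 1.5) for _ in range(5)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=functional)
    pop = pop[:20] + [[(a + b) / 2 + random.gauss(0, 0.02)
                       for a, b in zip(*random.sample(pop[:20], 2))]
                      for _ in range(20)]
print("recovered R:", [round(r, 2) for r in min(pop, key=functional)])
```

In the thesis, the regularization parameter lam would be chosen by Hansen's L-curve rather than fixed as here.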
- Title
- GENETICALLY ENGINEERED ADAPTIVE RESONANCE THEORY (ART) NEURAL NETWORK ARCHITECTURES.
- Creator
-
Al-Daraiseh, Ahmad, Georgiopoulos, Michael, University of Central Florida
- Abstract / Description
-
Fuzzy ARTMAP (FAM) is currently considered to be one of the premier neural network architectures in solving classification problems. One of the limitations of Fuzzy ARTMAP that has been extensively reported in the literature is the category proliferation problem. That is, Fuzzy ARTMAP has the tendency of increasing its network size, as it is confronted with more and more data, especially if the data is of noisy and/or overlapping nature. To remedy this problem a number of researchers have designed modifications to the training phase of Fuzzy ARTMAP that had the beneficial effect of reducing this phenomenon. In this thesis we propose a new approach to handle the category proliferation problem in Fuzzy ARTMAP by evolving trained FAM architectures. We refer to the resulting FAM architectures as GFAM. We demonstrate through extensive experimentation that an evolved FAM (GFAM) exhibits good (sometimes optimal) generalization, small size (sometimes optimal size), and requires reasonable computational effort to produce an optimal or sub-optimal network. Furthermore, comparisons of the GFAM with other approaches proposed in the literature that address the FAM category proliferation problem illustrate that the GFAM has a number of advantages (i.e., it produces smaller or equal size architectures, of better or as good generalization, with reduced computational complexity). Furthermore, in this dissertation we have extended the approach used with Fuzzy ARTMAP to other ART architectures, such as Ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM), that also suffer from the ART category proliferation problem. Thus, we have designed and experimented with genetically engineered EAM and GAM architectures, named GEAM and GGAM. Comparisons of GEAM and GGAM with other ART architectures that were introduced in the ART literature, addressing the category proliferation problem, illustrate similar advantages observed by GFAM (i.e., GEAM and GGAM produce smaller size ART architectures, of better or improved generalization, with reduced computational complexity). Moreover, to optimally cover the input space of a problem, we proposed a genetically engineered ART architecture that combines the category structures of two different ART networks, FAM and EAM. We named this architecture UART (Universal ART). We analyzed the order of search in UART, that is, the order according to which a FAM category or an EAM category is accessed in UART. This analysis allowed us to better understand UART's functionality. Experiments were also conducted to compare UART with other ART architectures, in a similar fashion as GFAM and GEAM were compared. Similar conclusions were drawn from this comparison as in the comparison of GFAM and GEAM with other ART architectures. Finally, we analyzed the computational complexity of the genetically engineered ART architectures and compared it with the computational complexity of other ART architectures introduced into the literature. This analytical comparison verified our claim that the genetically engineered ART architectures produce better generalization and smaller-size ART structures, at reduced computational complexity, compared to other ART approaches. In review, a methodology was introduced for combining the answers (categories) of ART architectures using genetic algorithms. This methodology was successfully applied to FAM, EAM, and combined FAM/EAM architectures, resulting in ART neural networks which outperformed other ART architectures previously introduced into the literature, and quite often produced ART architectures that attained optimal classification results at reduced computational complexity.
- Date Issued
- 2006
- Identifier
- CFE0000977, ucf:46696
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000977
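Evolving trained FAM architectures against category proliferation can be pictured as searching over subsets (and variants) of an already-trained network's categories, trading accuracy against size. The sketch below evolves a keep/drop mask over hypothetical 1D category prototypes with a size penalty in the fitness; the real GFAM chromosome, operators, and fitness are considerably richer.

```python
import random

random.seed(2)
# Hypothetical trained-FAM categories as (prototype, class label) pairs in 1D;
# several are redundant, which is the proliferation being pruned away.
CATEGORIES = [(0.10, 0), (0.15, 0), (0.20, 0), (0.50, 1), (0.55, 1), (0.90, 0)]
DATA = [(x, 1 if 0.4 < x < 0.7 else 0)
        for x in (random.random() for _ in range(200))]

def classify(x, cats):
    _, label = min(cats, key=lambda c: abs(c[0] - x))
    return label

def fitness(mask):
    cats = [c for c, keep in zip(CATEGORIES, mask) if keep]
    if not cats:
        return -1.0
    accuracy = sum(classify(x, cats) == y for x, y in DATA) / len(DATA)
    return accuracy - 0.02 * len(cats)   # generalization traded against size

pop = [[random.random() < 0.5 for _ in CATEGORIES] for _ in range(20)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    keep = pop[:10]
    pop = keep + [[(g if random.random() > 0.1 else not g)
                   for g in random.choice(keep)] for _ in range(10)]
best = max(pop, key=fitness)
print("kept categories:", [c for c, k in zip(CATEGORIES, best) if k])
```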
- Title
- FALCONET: FORCE-FEEDBACK APPROACH FOR LEARNING FROM COACHING AND OBSERVATION USING NATURAL AND EXPERIENTIAL TRAINING.
- Creator
-
Stein, Gary, Gonzalez, Avelino, University of Central Florida
- Abstract / Description
-
Building an intelligent agent model from scratch is a difficult task. Thus, it would be preferable to have an automated process perform this task. There have been many manual and automatic techniques; however, each of these has various issues with obtaining, organizing, or making use of the data. Additionally, it can be difficult to get perfect data or, once the data is obtained, impractical to get a human subject to explain why some action was performed. Because of these problems, machine learning from observation emerged to produce agent models based on observational data. Learning from observation uses unobtrusive and purely observable information to construct an agent that behaves similarly to the observed human. Typically, an observational system builds an agent only based on prerecorded observations. This type of system works well with respect to agent creation, but lacks the ability to be trained and updated on-line. To overcome these deficiencies, the proposed system works by adding an augmented force-feedback system of training that senses the agent's intentions haptically. Furthermore, because not all possible situations can be observed or directly trained, a third stage of learning from practice is added for the agent to gain additional knowledge for a particular mission. These stages of learning mimic the natural way a human might learn a task by first watching the task being performed, then being coached to improve, and finally practicing to self-improve. The hypothesis is that a system that is initially trained using human recorded data (Observational), then tuned and adjusted using force-feedback (Instructional), and then allowed to perform the task in different situations (Experiential) will be better than any individual step or combination of steps.
- Date Issued
- 2009
- Identifier
- CFE0002746, ucf:48157
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002746
- Title
- MESHLESS HEMODYNAMICS MODELING AND EVOLUTIONARY SHAPE OPTIMIZATION OF BYPASS GRAFTS ANASTOMOSES.
- Creator
-
El Zahab, Zaher, Kassab, Alain, University of Central Florida
- Abstract / Description
-
Objectives: The main objective of the current dissertation is to establish a formal shape optimization procedure for a given bypass graft end-to-side distal anastomosis (ETSDA). The motivation behind this dissertation is that most of the previous ETSDA shape optimization research activities cited in the literature relied on direct optimization approaches that do not guarantee accurate optimization results. Three different ETSDA models are considered herein: the conventional, the Miller cuff, and the hood models. Materials and Methods: The ETSDA shape optimization is driven by three computational objects: a localized collocation meshless method (LCMM) solver, an automated geometry pre-processor, and a genetic-algorithm-based optimizer. The LCMM solver makes it very convenient to set up an autonomous optimization mechanism for the ETSDA models. The task of the automated pre-processor is to randomly distribute solution points in the ETSDA geometries. The task of the optimizer is to adjust the ETSDA geometries based on mitigation of the abnormal hemodynamics parameters. Results: The results reported in this dissertation entail the stabilization and validation of the LCMM solver in addition to the shape optimization of the considered ETSDA models. The LCMM stabilization results consist of validating a custom-designed upwinding scheme on different one-dimensional and two-dimensional test cases. The LCMM validation is done for incompressible steady and unsteady flow applications in the ETSDA models. The ETSDA shape optimization results include single-objective optimization in steady flow situations and bi-objective optimization in pulsatile flow situations. Conclusions: The LCMM solver provides verifiably accurate resolution of hemodynamics and is demonstrated to be third-order accurate in a comparison to a benchmark analytical solution of the Navier-Stokes equations. The genetic-algorithm-based shape optimization approach proved to be very effective for the conventional and Miller cuff ETSDA models. The shape optimization results for those two models definitely suggest that the graft caliber should be maximized, whereas the anastomotic angle and the cuff height (in the Miller cuff model) should be chosen following a compromise between the wall shear stress spatial and temporal gradients. The shape optimization of the hood ETSDA model did not prove to be advantageous; however, it could be meaningful with the inclusion of the suture line cut length as an optimization parameter.
- Date Issued
- 2008
- Identifier
- CFE0002165, ucf:47927
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002165
- Title
- Multi-Objective Optimization for Construction Equipment Fleet Selection and Management In Highway Construction Projects Based on Time, Cost, and Quality Objectives.
- Creator
-
Shehadeh, Ali, Tatari, Omer, Al-Deek, Haitham, Abou-Senna, Hatem, Flitsiyan, Elena, University of Central Florida
- Abstract / Description
-
The sector of highway construction accounts for approximately 11% of the total construction industry in the US. Construction equipment is one of the primary reasons this industry has reached such a significant level, as it is an essential part of the highway construction process. This research addresses a multi-objective optimization mathematical model that quantifies and optimizes the key parameters for excavator, truck, and motor-grader equipment to minimize the time and cost objective functions. The model also aims to maintain the required level of quality for the targeted construction activity. The mathematical functions for the primary objectives were formulated, and then a genetic-algorithm-based multi-objective optimization was performed to generate the time-cost Pareto trade-offs for all possible equipment combinations, using MATLAB software to facilitate the implementation. The model's capabilities in generating optimal time and cost trade-offs, based on optimized equipment number, capacity, and speed, to adapt to the complex and dynamic nature of highway construction projects are demonstrated using a highway construction case study. The developed model is a decision support tool during the construction process to adapt to any necessary changes in time or cost requirements, taking into consideration environmental, safety, and quality aspects. The flexibility and comprehensiveness of the proposed model, along with its programmable nature, make it a powerful tool for managing construction equipment, which will help save time and money within optimal quality margins. Also, this environmentally friendly decision-support tool provided optimal solutions that help reduce CO2 emissions, reducing the ripple effects of targeted highway construction activities on the global warming phenomenon. The generated optimal solutions offered considerable time and cost savings.
- Date Issued
- 2019
- Identifier
- CFE0007863, ucf:52800
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007863
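The time-cost trade-off such a model generates can be illustrated with a toy fleet model: each candidate fleet (truck count, haul speed) yields an activity duration and cost, and the non-dominated pairs form the Pareto front a planner chooses from. The production and cost equations below are invented stand-ins, not the dissertation's calibrated MATLAB model.

```python
def objectives(n_trucks, speed):
    # Toy time/cost model for one earthmoving activity (all constants
    # invented): an excavator loads trucks hauling over a fixed route.
    haul_rate = 2.0 * n_trucks * speed           # m^3/h the fleet can haul
    rate = min(120.0, haul_rate)                 # excavator caps production
    time = 10000.0 / rate                        # hours for 10,000 m^3
    cost = time * (150.0 + 60.0 * n_trucks + 0.5 * speed ** 2)
    return time, cost

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

fleets = [(n, v) for n in range(2, 12) for v in (20, 30, 40, 50)]
points = {f: objectives(*f) for f in fleets}
front = [f for f in fleets
         if not any(dominates(points[g], points[f]) for g in fleets)]
for n, v in sorted(front):
    t, c = points[(n, v)]
    print(f"trucks={n:2d} speed={v} km/h -> time={t:6.1f} h, cost={c:9.0f}")
```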
- Title
- INVERSE BOUNDARY ELEMENT/GENETIC ALGORITHM METHOD FOR RECONSTRUCTION OF MULTI-DIMENSIONAL HEAT FLUX DISTRIBUTIONS WITH FILM COOLING APPLICATIONS.
- Creator
-
Silieti, Mahmood, Kassab, Alain, University of Central Florida
- Abstract / Description
-
A methodology is formulated for the solution of the inverse problem concerned with the reconstruction of multi-dimensional heat fluxes for film cooling applications. The motivation for this study is the characterization of complex thermal conditions in industrial applications such as those encountered in film-cooled turbomachinery components. The heat conduction problem in the metal endwall/shroud is solved using the boundary element method (BEM), and the inverse problem is solved using a genetic algorithm (GA). Thermal conditions are overspecified at exposed surfaces amenable to measurement, while the temperature and surface heat flux distributions are unknown at the film cooling hole/slot walls. The latter are determined in an iterative process by developing two approaches. The first approach, developed for 2D applications, solves an inverse problem whose objective is to adjust the film cooling hole/slot wall temperatures and heat fluxes until the temperature and heat flux at the measurement surfaces are matched in an overall heat conduction solution. The second approach, developed for 2D and 3D applications, is to distribute a set of singularities (sinks) in the vicinity of the cooling slots/holes surface, inside a fictitious extension of the physical domain or along the cooling hole centerline, with a given initial strength distribution. The inverse problem iteratively alters the strength distribution of the singularities (sinks) until the measuring surfaces' heat fluxes are matched. The heat flux distributions are determined in a post-processing stage after the inverse problem is solved. The second approach provides a tremendous advantage in solving the inverse problem, particularly in 3D applications, and it is recommended as the method of choice for this class of problems. It can be noted that the GA-reconstructed heat flux distributions are robust, yielding accurate results to both exact and error-laden inputs. In all cases in this study, results from experiments are simulated using full conjugate heat transfer (CHT) finite volume models which incorporate the interactions of the external convection in the hot turbulent gas, internal convection within the cooling plena, and the heat conduction in the metal endwall/shroud region. Extensive numerical investigations are undertaken to demonstrate the significant importance of conjugate heat transfer in film cooling applications and to identify the implications of various turbulence models in the prediction of accurate and more realistic surface temperatures and heat fluxes in the CHT simulations. These, in turn, are used to provide numerical inputs to the inverse problem. Single and multiple cooling slots, cylindrical cooling holes, and fan-shaped cooling holes are considered in this study. The turbulence closure is modeled using several two-equation approaches, the four-equation turbulence model, as well as five- and seven-moment Reynolds stress models. The predicted results, by the different turbulence models, for the cases of adiabatic and conjugate models, are compared to experimental data reported in the open literature. Results show the significant effects of conjugate heat transfer on the temperature field in the film cooling hole region, and the additional heating up of the cooling jet itself. Moreover, results from the detailed numerical studies presented in this study validate the inverse problem approaches and reveal good agreement between the BEM/GA-reconstructed heat fluxes and the CHT-simulated heat fluxes along the inaccessible cooling slot/hole walls.
- Date Issued
- 2004
- Identifier
- CFE0000166, ucf:52896
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000166
- Title
- Reconfigurable Reflectarray Antennas with Bandwidth Enhancement for High Gain, Beam-Steering Applications.
- Creator
-
Trampler, Michael, Gong, Xun, Wahid, Parveen, Jones, W Linwood, Chen, Kenle, Kuebler, Stephen, University of Central Florida
- Abstract / Description
-
Reconfigurable reflectarrays are a class of antennas that combine the advantages of traditional parabolic antennas and phased array antennas. Chapter 1 discusses the basic operational theory of reflectarrays and their design. A review of previous research and the current status is also presented. Furthermore the inherent advantages and disadvantages of the reflectarray topography are presented. In chapter 2, a BST-integrated reflectarray operating at Ka band is presented. Due to the...
Show moreReconfigurable reflectarrays are a class of antennas that combine the advantages of traditional parabolic antennas and phased array antennas. Chapter 1 discusses the basic operational theory of reflectarrays and their design. A review of previous research and the current status is also presented. Furthermore the inherent advantages and disadvantages of the reflectarray topography are presented. In chapter 2, a BST-integrated reflectarray operating at Ka band is presented. Due to the monolithic integration of the tuning element, this design is then extended to V band where a novel interdigital gap configuration is utilized. Finally to overcome loss and phase limitations of the single resonant design, a BST-integrated, dual-resonance unit cell operating at Ka band is designed. While the losses are still high, a 360(&)deg; phase range is demonstrated.In chapter 3, the operational theory of dual-resonant array elements is introduced utilizing Q theory. An equivalent circuit is developed and used to demonstrate design tradeoffs. Using this theory the design procedure of a varactor tuned dual-resonant unit cell operating at X-band is presented. Detailed analysis of the design is performed by full-wave simulations and verified via measurements. In chapter 4, the array performance of the dual-resonance unit cell is analyzed. The effects of varying angles of incidence on the array element are studied using Floquet simulations. The beam scanning, cross-polarization and bandwidth performance of a 7(&)#215;7 element reflectarray is analyzed using full-wave simulations and verified via measurements.In chapter 5 a loss analysis of the dual-resonant reflectarray element is performed. Major sources of loss are identified utilizing full-wave simulations before an equivalent circuit is utilized to optimize the loss performance while maintaining a full phase range and improved bandwidth performance. Finally the dual-resonance unit cell is modified to support two linear polarizations. Overall, the operational and design theory of dual resonant reflectarray unit cells using Q theory is developed. A valuable equivalent circuit is developed and used to aid in array element design as well as optimize the loss and bandwidth performance. The proposed theoretical models provide valuable physical insight through the use of Q theory to greatly aid in reflectarray design
- Date Issued
- 2019
- Identifier
- CFE0007735, ucf:52457
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007735
- Title
- Design of the layout of a manufacturing facility with a closed loop conveyor with shortcuts using queueing theory and genetic algorithms.
- Creator
-
Lasrado, Vernet, Nazzal, Dima, Mollaghasemi, Mansooreh, Reilly, Charles, Garibay, Ivan, Sivo, Stephen, Armacost, Robert, University of Central Florida
- Abstract / Description
-
With the ongoing technology battles and price wars of today's competitive economy, every company is looking for an advantage over its peers. A particular choice of facility layout can have a significant impact on a company's ability to maintain low operational expenses under uncertain economic conditions, and it is known that systems with less congestion have lower operational costs. Traditional methods for the manufacturing facility layout problem aim at minimizing the total distance traveled, the material handling cost, or the time in the system (based on distance traveled at a specific speed).

The proposed methodology solves the layout design problem for a looped-layout manufacturing facility served by a closed-loop conveyor material handling system with shortcuts. It uses a system performance metric, the work in process (WIP) on the conveyor and at the input stations to the conveyor, as a factor in the objective function of the facility layout optimization problem, which is solved heuristically with a permutation genetic algorithm. The methodology also makes the case for determining the shortcut locations across the conveyor simultaneously (while determining the layout of the stations around the loop) rather than sequentially (after the layout of the stations has been determined), as in the traditional method, and it presents an analytical estimate of the work in process at the input stations to the closed-loop conveyor.

It is contended that the proposed methodology (using WIP as a factor in the objective function while simultaneously solving for the shortcuts) yields a facility layout that is less congested than one generated by the traditional methods (using total distance traveled as a factor in the objective function while sequentially solving for the shortcuts). The proposed methodology is tested on a virtual 300mm semiconductor wafer fabrication facility with a looped conveyor material handling system with shortcuts. The results show that the facility layouts generated by the proposed methodology have significantly less congestion than those generated by traditional methods. Validation of the analytical estimate of the work in process at the input stations reveals that the proposed methodology works extremely well for systems with Markovian arrival processes.
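A minimal sketch of the kind of permutation GA the abstract describes is given below: each individual is an ordering of stations around the loop, recombined with order crossover and a swap mutation. The congestion objective here is a stand-in proxy (flow volume times loop distance), an assumption for illustration rather than the dissertation's queueing-based WIP estimate.

```python
# Sketch of a permutation GA for a looped facility layout (assumed objective).
import random

random.seed(1)
N = 8                                          # number of stations
FLOW = [[random.randint(0, 9) for _ in range(N)] for _ in range(N)]

def congestion(layout):
    # Proxy cost: flow between each station pair times shorter-arc distance
    # along the loop. Stands in for the queueing-based WIP metric.
    pos = {s: i for i, s in enumerate(layout)}
    cost = 0
    for a in range(N):
        for b in range(N):
            d = abs(pos[a] - pos[b])
            cost += FLOW[a][b] * min(d, N - d)
    return cost

def order_crossover(p1, p2):
    # OX: copy a slice from p1, fill remaining slots in p2's order.
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

pop = [random.sample(range(N), N) for _ in range(40)]
for gen in range(150):
    pop.sort(key=congestion)                   # lower congestion is better
    survivors = pop[:20]
    children = []
    while len(children) < 20:
        c = order_crossover(*random.sample(survivors, 2))
        a, b = random.sample(range(N), 2)
        c[a], c[b] = c[b], c[a]                # swap mutation
        children.append(c)
    pop = survivors + children

print("best layout:", pop[0], "congestion:", congestion(pop[0]))
```

In the dissertation's setting, the chromosome would also encode the shortcut locations so that stations and shortcuts are optimized simultaneously, and the fitness call would evaluate the analytical WIP estimate instead of this distance-weighted proxy.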
- Date Issued
- 2011
- Identifier
- CFE0004125, ucf:49088
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004125
- Title
- A SUSTAINABLE AUTONOMIC ARCHITECTURE FOR ORGANICALLY RECONFIGURABLE COMPUTING SYSTEMS.
- Creator
-
Oreifej, Rashad, DeMara, Ronald, University of Central Florida
- Abstract / Description
-
A Sustainable Autonomic Architecture for Organically Reconfigurable Computing Systems based on SRAM Field Programmable Gate Arrays (FPGAs) is proposed, modeled analytically, simulated, prototyped, and measured. Low-level organic elements are analyzed and designed to achieve novel self-monitoring, self-diagnosis, and self-repair organic properties. The prototype of a 2-D spatial-gradient Sobel video edge-detection organic system use case, developed on a XC4VSX35 Xilinx Virtex-4 Video Starter Kit, is presented. Experimental results demonstrate the applicability of the proposed architecture and provide the infrastructure to quantify the performance and overcome fault-handling limitations. Dynamic online autonomous functionality restoration after a malfunction, or after a functionality shift due to changing requirements, is achieved at a fine granularity by exploiting dynamic Partial Reconfiguration (PR) techniques.

A Genetic Algorithm (GA)-based hardware/software platform for intrinsic evolvable hardware is designed and evaluated for digital circuit repair using a variety of well-accepted benchmarks. Dynamic bitstream compilation for enhanced mutation and crossover operators is achieved by directly manipulating the bitstream using a layered toolset. Experimental results on the edge-detector organic system prototype have shown complete organic online refurbishment after a hard fault. In contrast to previous toolsets requiring many milliseconds or seconds, an average of 0.47 microseconds is required to perform the genetic mutation, 4.2 microseconds to perform the single-point conventional crossover, 3.1 microseconds to perform Partial Match Crossover (PMX) as well as Order Crossover (OX), 2.8 microseconds to perform Cycle Crossover (CX), and 1.1 milliseconds for one input-pattern intrinsic evaluation. These figures represent a performance advantage of three orders of magnitude over the JBits software framework and more than seven orders of magnitude over the Xilinx design flow.

A Combinatorial Group Testing (CGT) technique was combined with the conventional GA in what is called a CGT-pruned GA to reduce repair time and increase system availability; results have shown up to a 37.6% convergence advantage for the pruned technique. Lastly, a quantitative stochastic sustainability model for reparable systems is formulated to evaluate the sustainability of FPGA-based reparable systems. This model computes at design time the resources required for refurbishment to meet mission availability and lifetime requirements in a given fault-susceptible mission. By applying this model to MCNC benchmark circuits and the Sobel edge detector in a realistic space-mission use case on a Xilinx Virtex-4 FPGA, we demonstrate a comprehensive model encompassing the inter-relationships between system sustainability and fault rates, utilized and redundant hardware resources, repair policy parameters, and decaying reparability.
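For concreteness, the sketch below shows the textbook form of Partially Matched Crossover (PMX), one of the operators the abstract benchmarks. This is the standard permutation-level definition, not the bitstream-level implementation the dissertation layers on top of the FPGA toolset.

```python
# Hedged illustration of PMX on permutations (textbook form, an assumption
# relative to the dissertation's bitstream-level operator).
import random

def pmx(p1, p2):
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = p2[:]                      # start from a copy of p2
    mapping = {}
    # Copy the matched section from p1 and record the value mapping.
    for k in range(i, j):
        mapping[p1[k]] = p2[k]
        child[k] = p1[k]
    # Repair duplicates outside the section by chasing the mapping.
    for k in list(range(0, i)) + list(range(j, n)):
        v = child[k]
        while v in mapping:
            v = mapping[v]
        child[k] = v
    return child

random.seed(7)
a = [0, 1, 2, 3, 4, 5, 6, 7]
b = [7, 6, 5, 4, 3, 2, 1, 0]
print(pmx(a, b))   # a valid permutation combining both parents' orderings
```

The mapping chase guarantees the child remains a valid permutation; the microsecond-scale timings quoted in the abstract come from executing operators like this directly on configuration bitstreams rather than on software-level representations.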
- Date Issued
- 2011
- Identifier
- CFE0003969, ucf:48661
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003969