Current Search: Algorithm
Title
-
WAVELETS IN REAL-TIME RENDERING.
-
Creator
-
Sun, Weifeng, Mukherjee, Amar, University of Central Florida
-
Abstract / Description
-
Interactively simulating the visual appearance of natural objects under natural illumination is a fundamental problem in computer graphics. 3D computer games, geometry modeling, training and simulation, electronic commerce, visualization, lighting design, digital libraries, geographical information systems, and economic and medical image processing are typical candidate applications. Recent advances in graphics hardware have enabled real-time rasterization of complex scenes under artificial lighting environments. Meanwhile, pre-computation based soft shadow algorithms have proven effective under low-frequency lighting environments. Under the most practical yet popular all-frequency natural lighting environment, however, real-time rendering of dynamic scenes still remains a challenging problem. In this dissertation, we propose a systematic approach to render dynamic glossy objects under the general all-frequency lighting environment. In our framework, lighting integration is reduced to two rather basic mathematical operations: efficiently computing multi-function products and product integrals. The main contribution of our work is a novel mathematical representation and analysis of multi-function products and product integrals in the wavelet domain. We show that the multi-function product integral in the primal is equivalent to a summation of products of basis coefficients and integral coefficients. In the dissertation, we give a novel Generalized Haar Integral Coefficient Theorem. We also present a set of efficient algorithms to compute multi-function products and product integrals. We demonstrate practical applications of these algorithms in the interactive rendering of dynamic glossy objects under distant time-variant all-frequency environment lighting with arbitrary view conditions. At each vertex, the shading integral is formulated as the product integral of multiple operand functions. By approximating operand functions in the wavelet domain, we demonstrate rendering dynamic glossy scenes interactively, which is orders of magnitude faster than previous work. As an important enhancement to the popular Pre-computation Based Radiance Transfer (PRT) approach, we present a novel Just-in-time Radiance Transfer (JRT) technique, and demonstrate its application in real-time realistic rendering of dynamic all-frequency shadows under general lighting environments. Our work is a significant step towards real-time rendering of arbitrary scenes under general lighting environments. It is also of great importance to general numerical analysis and signal processing.
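As a rough illustration of the wavelet-domain identity behind this framework (not the dissertation's algorithm), the numpy sketch below verifies the two-function special case: with an orthonormal Haar basis, the product integral of two sampled functions equals the inner product of their coefficient vectors. The multi-function case developed in the dissertation generalizes this with integral (Haar integral coefficient) terms; the functions and sample count here are arbitrary choices.

import numpy as np

def haar_matrix(n):
    # Orthonormal Haar transform matrix for n = 2^k samples.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # scaling (averaging) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # wavelet (differencing) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 256                                          # dyadic sample count on [0, 1] (arbitrary)
x = (np.arange(n) + 0.5) / n
f = np.cos(2 * np.pi * x) + 2.0                  # two example operand functions
g = np.exp(-x)

H = haar_matrix(n)
cf, cg = H @ f, H @ g                            # wavelet-domain coefficient vectors

primal = np.mean(f * g)                          # product integral computed in the primal
wavelet_domain = np.dot(cf, cg) / n              # same integral from the coefficients (Parseval)
print(primal, wavelet_domain)                    # the two numbers agree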
-
Date Issued
-
2006
-
Identifier
-
CFE0001495, ucf:47079
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001495
-
-
Title
-
AN ALGORITHM FOR DETERMINING SATELLITE ATTITUDE BY COMPARING PHYSICAL FEATURE MODELS TO EDGES DETECTED IN SATELLITE OR GROUND-BASED TELESCOPE IMAGERY.
-
Creator
-
Reinhart, Eric, Johnson, Roger, University of Central Florida
-
Abstract / Description
-
This thesis discusses the development and performance of an algorithm created to calculate satellite attitude based on the comparison of satellite "physical feature" models to information derived from edge detection performed on imagery of the satellite. The quality of this imagery could range from the very clear, close-up imagery that may come from an unmanned satellite servicing mission to the faint, unclear imagery that may come from a ground-based telescope investigating a satellite anomaly. Satellite "physical feature" models describe where an edge is likely to appear in an image. These are usually defined by physical edges on the structure of the satellite or areas where there are distinct changes in material property. The theory behind this concept is discussed as well as two different approaches to implement it. Various simple examples are used to demonstrate the feasibility of the concept. These examples are well-controlled image simulations of simple physical models with known attitude. The algorithm attempts to perform the edge detection and edge registration of the simulated image and calculate the most likely attitude. Though complete autonomy was not achieved during this effort, the concept and approach show applicability.
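A toy, single-axis version of the idea described above is sketched below, with an entirely hypothetical "physical feature" model and simulated detected edges; the thesis works with full 3-DOF attitude and actual edge detection on imagery. Candidate attitudes are scored by how closely the projected model edge points register against the detected edge points.

import numpy as np

rng = np.random.default_rng(0)

def rot(theta):
    # Single-axis rotation; a real attitude solution searches over all three rotation angles.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical "physical feature" model: corners of a panel plus an antenna tip, in the body frame.
model_edges = np.array([[-1.0, -0.4], [1.0, -0.4], [1.0, 0.4], [-1.0, 0.4], [0.0, 0.9]])

def project(points, theta):
    return points @ rot(theta).T                 # project model edges for a candidate attitude

def edge_score(projected, detected):
    # Mean distance from each projected model point to the nearest detected edge point
    # (a chamfer-style registration score; lower is better).
    d = np.linalg.norm(projected[:, None, :] - detected[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Simulated "detected" edges: the model seen at an unknown true attitude, plus sensor noise.
true_theta = np.deg2rad(23.0)
detected = project(model_edges, true_theta) + rng.normal(0.0, 0.01, model_edges.shape)

# Exhaustive search over candidate attitudes; the minimum-score candidate is the estimate.
candidates = np.deg2rad(np.arange(0.0, 360.0, 0.5))
scores = [edge_score(project(model_edges, t), detected) for t in candidates]
print("estimated attitude (deg):", np.rad2deg(candidates[int(np.argmin(scores))]))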
-
Date Issued
-
2007
-
Identifier
-
CFE0001942, ucf:47450
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001942
-
-
Title
-
A COMPARATIVE STUDY OF ANT COLONY OPTIMIZATION.
-
Creator
-
Becker, Matthew, Mohapatra, Ram, University of Central Florida
-
Abstract / Description
-
Ant Colony Optimization (ACO) belongs to a class of biologically-motivated approaches to computing that includes such metaheuristics as artificial neural networks, evolutionary algorithms, and artificial immune systems, among others. Emulating to varying degrees the particular biological phenomena from which their inspiration is drawn, these alternative computational systems have succeeded in finding solutions to complex problems that had heretofore eluded more traditional techniques. Often, the resulting algorithm bears little resemblance to its biological progenitor, evolving instead into a mathematical abstraction of a singularly useful quality of the phenomenon. In such cases, these abstract computational models may be termed biological metaphors. Mindful that a fine line separates metaphor from distortion, this paper outlines an attempt to better understand the potential consequences an insufficient understanding of the underlying biological phenomenon may have on its transformation into mathematical metaphor. To that end, the author independently develops a rudimentary ACO, remaining as faithful as possible to the behavioral qualities of an ant colony. Subsequently, the performance of this new ACO is compared with that of a more established ACO in three categories: (1) the hybridization of evolutionary computing and ACO, (2) the efficacy of daemon actions, and (3) theoretical properties and convergence proofs.
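For readers unfamiliar with the baseline metaheuristic being compared, the following is a minimal, generic ant-colony sketch for a small random TSP (probabilistic tour construction biased by pheromone and inverse distance, evaporation, and quality-proportional deposit). It is not either of the two ACO variants studied in the thesis, and all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 8
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2) + np.eye(n)  # eye avoids /0 on the diagonal
pher = np.ones((n, n))
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 100

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, np.inf
for _ in range(n_iters):
    tours = []
    for _ant in range(n_ants):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            i = tour[-1]
            unvisited = [j for j in range(n) if j not in tour]
            # Transition probabilities: pheromone^alpha * (1/distance)^beta.
            weights = np.array([pher[i, j] ** alpha * (1.0 / dist[i, j]) ** beta for j in unvisited])
            tour.append(unvisited[rng.choice(len(unvisited), p=weights / weights.sum())])
        tours.append(tour)
    pher *= (1.0 - rho)                       # pheromone evaporation
    for tour in tours:                        # pheromone deposit proportional to tour quality
        L = tour_length(tour)
        if L < best_len:
            best_tour, best_len = tour, L
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            pher[a, b] += 1.0 / L
            pher[b, a] += 1.0 / L
print(best_len, best_tour)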
-
Date Issued
-
2006
-
Identifier
-
CFE0001192, ucf:46844
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001192
-
-
Title
-
ELECTIMIZE: A NEW EVOLUTIONARY ALGORITHM FOR OPTIMIZATION WITH APPLICATIONS IN CONSTRUCTION ENGINEERING.
-
Creator
-
Abdel-Raheem, Mohamed, Khalafallah, Ahmed, University of Central Florida
-
Abstract / Description
-
Optimization is considered an essential step in reinforcing the efficiency of performance and economic feasibility of construction projects. In the past few decades, evolutionary algorithms (EAs) have been widely utilized to solve various types of construction-related optimization problems due to their efficiency in finding good solutions in relatively short time periods. However, in many cases, these existing evolutionary algorithms failed to identify the optimal solution to several optimization problems. As such, it is deemed necessary to develop new approaches in order to help identify better-quality solutions. This doctoral research presents the development of a new evolutionary algorithm, named "Electimize," that is based on the simulation of the flow of electric current in the branches of an electric circuit. The main motive in this research is to provide the construction industry with a robust optimization tool that overcomes some of the shortcomings of existing EAs. In solving optimization problems using Electimize, a number of wires (solution strings) composed of a number of segments are fabricated randomly. Each segment corresponds to a decision variable in the objective function. The wires are virtually connected in parallel to a source of electricity to represent an electric circuit. The electric current passing through each wire is calculated by substituting the values of the segments in the objective function. The quality of the wire is based on its global resistance, which is calculated using Ohm's law. The main objectives of this research are to 1) develop an optimization methodology that is capable of evaluating the quality of decision variable values in the solution string independently; 2) devise internal optimization mechanisms that would enable the algorithm to extensively search the solution space and avoid its convergence toward local optima; and 3) provide the construction industry with a reliable optimization tool that is capable of solving different classes of NP-hard optimization problems. First, internal processes are designed, modeled, and tested to enable the individual assessment of the quality of each decision variable value available in the solution space. The main principle in assessing the quality of each decision variable value individually is to use the segment resistance (local resistance) as an indicator of the quality. This is accomplished by conducting a sensitivity analysis to record the change in the resistance of a control wire when a certain decision variable value is substituted into the corresponding segment of the control wire. The calculated local resistances of all segments of a wire are then normalized to ensure that their summation is equal to the global wire resistance and no violation is made of Kirchhoff's rule. A benchmark NP-hard cash flow management problem from the literature is attempted to test and validate the performance of the developed approach. Not only was Electimize able to identify the optimal solution for the problem, but it also identified ten alternative optimal solutions, outperforming the existing algorithms. Second, the internal processes for the sensitivity analysis are designed to allow for extensive search of the solution space through the generation of new wires. Every time a decision variable value is substituted in the control wire to assess its quality, a new wire that might have a better quality is generated.
To further test the capabilities of Electimize in searching the solution space, Electimize was applied to a multimodal 9-city travelling salesman problem (TSP) that had been previously designed and solved mathematically. The problem has 27 alternative optimal solutions. Electimize succeeded in identifying 21 of the 27 alternative optimal solutions in a limited time period. Moreover, Electimize was applied to a 16-city benchmark TSP (Ulysses16) and was able to identify the optimal tour and its alternative. Further, additional parameters are incorporated to 1) allow for the extensive search of the solution space, 2) prevent the convergence towards local optima, and 3) increase the rate of convergence towards the global optima. These parameters are classified into two categories: 1) resistance related parameters, and 2) solution exploration parameters. The resistance related parameters are: a) the conductor resistivity, b) its cross-sectional area, and c) the length of each segment. The main role of this set of parameters is to provide the algorithm with additional gauging parameters to help guide it towards the global optima. The solution exploration parameters include a) the heat factor, and b) the criterion for selecting the control wire. The main role of this set of parameters is to allow for an extensive search of the solution space in order to facilitate the identification of all the available alternative optimal solutions, prevent premature convergence towards local optima, and increase the rate of convergence towards the global optima. Two TSP instances (Bayg29 and ATT48) are attempted and the results obtained illustrate that Electimize outperforms other EAs with respect to the quality of solutions obtained. Third, to test the capabilities of Electimize as a reliable optimization tool in construction optimization problems, three benchmark NP-hard construction optimization problems are attempted. The first problem is the cash flow management problem, as mentioned earlier. The second problem is the time cost tradeoff problem (TCTP) and is used as an example of static optimization. The third problem is a site layout planning problem (SLPP), and represents dynamic optimization. When Electimize was applied to the TCTP, it succeeded in identifying the optimal solution of the problem in a single iteration using thirty solution strings, compared to the hundreds of iterations and solution strings that were used by EAs to solve the same problem. Electimize was also successful in solving the SLPP and outperformed the existing algorithm used to solve the problem by identifying a better optimal solution. The main contributions of this research are 1) developing a new approach and algorithm for optimization based on the simulation of the phenomenon of electrical conduction, 2) devising processes that enable assessing the quality of decision variable values independently, 3) formulating methodologies that allow for the extensive search of the solution space and identification of alternative optimal solutions, and 4) providing a robust optimization tool for decision makers and construction planners.
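The wire/resistance bookkeeping described above can be sketched as follows; this is only a toy reading of the abstract with a made-up objective function and arbitrary constants (source voltage, range of decision-variable values), not the published Electimize procedure.

import numpy as np

rng = np.random.default_rng(1)
VOLTAGE = 10.0                    # assumed source voltage (free parameter)
n_wires, n_segments = 30, 5       # wires = candidate solutions, segments = decision variables

def objective(x):
    # Example objective to maximize (stands in for a construction-optimization model).
    return 50.0 - np.sum((x - 3.0) ** 2)

# Fabricate wires randomly: each segment takes an integer decision-variable value.
wires = rng.integers(0, 10, size=(n_wires, n_segments)).astype(float)

# Ohm's-law bookkeeping: current = objective value, global resistance R = V / I.
currents = np.array([objective(w) for w in wires])
currents = np.clip(currents, 1e-6, None)        # keep currents positive so R = V/I is defined
resistance = VOLTAGE / currents                 # lower global resistance = better wire

best = wires[np.argmin(resistance)]
print("best wire:", best, "global resistance:", resistance.min())

# Local (segment) quality via sensitivity analysis on a control wire: substitute each candidate
# value into one segment of the control wire and observe the change in global resistance.
control = best.copy()
for value in range(10):
    trial = control.copy()
    trial[0] = value
    R_trial = VOLTAGE / max(objective(trial), 1e-6)
    print(f"segment 0 = {value}: global resistance {R_trial:.3f}")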
-
Date Issued
-
2011
-
Identifier
-
CFE0003954, ucf:48698
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003954
-
-
Title
-
Information Propagation Algorithms for Consensus Formation in Decentralized Multi-Agent Systems.
-
Creator
-
Hollander, Christopher, Wu, Annie, Shumaker, Randall, Wiegand, Rudolf, Turgut, Damla, Song, Zixia, University of Central Florida
-
Abstract / Description
-
Consensus occurs within a multi-agent system when every agent is in agreement about the value of some particular state. For example, the color of an LED, the position or magnitude of a vector, a rendezvous location, the most recent state of data within a database, or the identity of a leader are all states that agents might need to agree on in order to execute their tasking. The task of the decentralized consensus problem for multi-agent systems is to design an algorithm that enables agents to communicate and exchange information such that, in finite time, agents are able to form a consensus without the use of a centralized control mechanism. The primary goal of this research is to introduce and provide supporting evidence for Stochastic Local Observation/Gossip (SLOG) algorithms as a new class of solutions to the decentralized consensus problem for multi-agent systems that lack a centralized controller, with the additional constraints that agents act asynchronously, information is discrete, and all consensus options are equally preferable to all agents. Examples of where these constraints might apply include the spread of social norms and conventions in artificial populations, rendezvous among a set of specific locations, and task assignment. This goal is achieved through a combination of theory and experimentation. An information propagation process and an information propagation algorithm are derived by unifying the general structure of multiple existing solutions to the decentralized consensus problem. They are then used to define two classes of algorithms that spread information across a network and solve the decentralized consensus problem: buffered gossip algorithms and local observation algorithms. Buffered gossip algorithms generalize the behavior of many push-based solutions to the decentralized consensus problem. Local observation algorithms generalize the behavior of many pull-based solutions to the decentralized consensus problem. In the language of object oriented design, buffered gossip algorithms and local observation algorithms are abstract classes; information propagation processes are interfaces. SLOG algorithms combine the transmission mechanisms of buffered gossip algorithms and local observation algorithms into a single "hybrid" algorithm that is able to push and pull information within the local neighborhood. A common mathematical framework is constructed and used to determine the conditions under which each of these algorithms is guaranteed to produce a consensus, and thus solve the decentralized consensus problem. Finally, a series of simulation experiments are conducted to study the performance of SLOG algorithms. These experiments compare the average speed of consensus formation between buffered gossip algorithms, local observation algorithms, and SLOG algorithms over four distinct network topologies.
Beyond the introduction of the SLOG algorithm, this research also contributes to the existing literature on the decentralized consensus problem by: specifying a theoretical framework that can be used to explore the consensus behavior of push-based and pull-based information propagation algorithms; using this framework to define buffered gossip algorithms and local observation algorithms as generalizations of existing solutions to the decentralized consensus problem; highlighting the similarities between consensus algorithms within control theory and opinion dynamics within computational sociology, and showing how these research areas can be successfully combined to create new and powerful algorithms; and providing an empirical comparison between multiple information propagation algorithms.
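A minimal push-pull gossip sketch in the spirit of the hybrid transmission mechanism described above is shown below; it assumes a ring topology, a three-option discrete state, and a simple adopt-the-majority-of-buffer rule, none of which are taken from the dissertation.

import random

random.seed(0)
# Ring topology: agent i's neighbors (other topologies could be swapped in here).
n = 12
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
state = {i: random.choice(["A", "B", "C"]) for i in range(n)}   # discrete, equally preferable options
buffers = {i: [] for i in range(n)}                              # inboxes for pushed values

steps = 0
while len(set(state.values())) > 1:
    i = random.randrange(n)                      # agents act asynchronously, one at a time
    j = random.choice(neighbors[i])
    if random.random() < 0.5:
        buffers[j].append(state[i])              # push: gossip own value into a neighbor's buffer
    else:
        buffers[i].append(state[j])              # pull: locally observe a neighbor's value
    if buffers[i]:
        # Adopt the most frequent value seen locally (ties broken arbitrarily), then clear the buffer.
        state[i] = max(set(buffers[i]), key=buffers[i].count)
        buffers[i] = []
    steps += 1

print("consensus value:", state[0], "after", steps, "asynchronous steps")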
-
Date Issued
-
2015
-
Identifier
-
CFE0005629, ucf:50229
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005629
-
-
Title
-
INTEGRABILITY OF A SINGULARLY PERTURBED MODEL DESCRIBING GRAVITY WATER WAVES ON A SURFACE OF FINITE DEPTH.
-
Creator
-
Little, Steven, Tovbis, Alexander, University of Central Florida
-
Abstract / Description
-
Our work is closely connected with the problem of splitting of separatrices (breaking of homoclinic orbits) in a singularly perturbed model describing gravity water waves on a surface of finite depth. The singularly perturbed model is a family of singularly perturbed fourth-order nonlinear ordinary differential equations, parametrized by an external parameter (in addition to the small parameter of the perturbations). It is known that in general separatrices will not survive a singular perturbation. However, it was proven by Tovbis and Pelinovsky that there is a discrete set of exceptional values of the external parameter for which separatrices do survive the perturbation. Since our family of equations can be written in the Hamiltonian form, the question is whether or not survival of separatrices implies integrability of the corresponding equation. The complete integrability of the system is examined from two viewpoints: 1) the existence of a second first integral in involution (Liouville integrability), and 2) the existence of single-valued, meromorphic solutions (complex analytic integrability). In the latter case, a singular point analysis is done using the technique given by Ablowitz, Ramani, and Segur (the ARS algorithm) to determine whether the system is of Painlevé-type (P-type), lacking movable critical points. The system is shown by the algorithm to fail to be of P-type, a strong indication of nonintegrability.
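For context, the generic steps of the ARS singular-point test mentioned above are outlined below in LaTeX; the dissertation's specific fourth-order equation is not reproduced in the abstract, so only the template is shown.

% Generic outline of the ARS (Painlevé) test; the specific equation studied in the
% dissertation is not given in the abstract, so this is only the standard template.
\begin{enumerate}
  \item Leading order: substitute $w(z) \sim a\,(z - z_0)^{p}$ and balance the dominant
        terms to determine the (negative integer) exponent $p$ and the coefficient $a$.
  \item Resonances: substitute $w \sim a\,(z - z_0)^{p} + b\,(z - z_0)^{p + r}$ and find
        the values of $r$ at which $b$ remains arbitrary.
  \item Compatibility: expand $w = \sum_{k \ge 0} a_k (z - z_0)^{p + k}$ and verify that
        the recursion for the $a_k$ is consistent at each resonance without forcing
        logarithmic terms.
\end{enumerate}
% Failure at any step (non-integer $p$, missing resonances, or forced logarithms)
% indicates the equation is not of P-type, which is the outcome reported above.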
-
Date Issued
-
2008
-
Identifier
-
CFE0002109, ucf:47550
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002109
-
-
Title
-
BEHAVIOR OF VARIABLE-LENGTH GENETIC ALGORITHMS UNDER RANDOM SELECTION.
-
Creator
-
Stringer, Harold, Wu, Annie, University of Central Florida
-
Abstract / Description
-
In this work, we show how a variable-length genetic algorithm naturally evolves populations whose mean chromosome length grows shorter over time. A reduction in chromosome length occurs when selection is absent from the GA. Specifically, we divide the mating space into five distinct areas and provide a probabilistic and empirical analysis of the ability of matings in each area to produce children whose size is shorter than the parent generation's average size. Diversity of size within a GA's population is shown to be a necessary condition for a reduction in mean chromosome length to take place. We show how a finite variable-length GA under random selection pressure uses 1) diversity of size within the population, 2) over-production of shorter than average individuals, and 3) the imperfect nature of random sampling during selection to naturally reduce the average size of individuals within a population from one generation to the next. In addition to our findings, this work provides GA researchers and practitioners with 1) a number of mathematical tools for analyzing possible size reductions for various matings and 2) new ideas to explore in the area of bloat control.
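A minimal sketch of the experimental setup described above (variable-length chromosomes, fitness-free random selection, one-point crossover with independent cut points in each parent) is given below so the size drift can be observed directly; population size, initial length range, and generation count are arbitrary.

import random

random.seed(0)
POP, GENS = 200, 100

def random_chrom():
    # Diverse initial sizes: a necessary condition for the length reduction studied above.
    return [random.randint(0, 1) for _ in range(random.randint(2, 40))]

def crossover(p1, p2):
    # Variable-length one-point crossover: cut points chosen independently in each parent.
    c1, c2 = random.randint(1, len(p1)), random.randint(1, len(p2))
    return p1[:c1] + p2[c2:], p2[:c2] + p1[c1:]

pop = [random_chrom() for _ in range(POP)]
for gen in range(GENS):
    next_pop = []
    while len(next_pop) < POP:
        # Random (fitness-free) selection with replacement, as in the study above.
        a, b = random.choice(pop), random.choice(pop)
        x, y = crossover(a, b)
        next_pop.extend([x, y])
    pop = next_pop[:POP]
    if gen % 10 == 0:
        mean_len = sum(len(c) for c in pop) / POP
        print(f"generation {gen:3d}: mean chromosome length {mean_len:.2f}")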
-
Date Issued
-
2007
-
Identifier
-
CFE0001652, ucf:47249
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001652
-
-
Title
-
A METHODOLOGY FOR MINIMIZING THE OSCILLATIONS IN SUPPLY CHAINS USING SYSTEM DYNAMICS AND GENETIC ALGORITHMS.
-
Creator
-
Lakkoju, Ramamoorthy, Rabelo, Luis, University of Central Florida
-
Abstract / Description
-
Supply Chain Management (SCM) is a critically significant strategy that enterprises depend on to meet the challenges they face because of the highly competitive and dynamic business environments of today. Supply chain management involves the entire network of processes from procurement of raw materials/services/technologies to manufacturing or servicing intermediate products/services to converting them into final products or services and then distributing and retailing them until they reach final customers. A supply chain network is by nature a large and complex engineering and management system. Oscillations occurring in a supply chain because of internal and/or external influences, and the measures to be taken to mitigate/minimize those oscillations, are a core concern in managing the supply chain and driving an organization towards a competitive advantage. The objective of this thesis is to develop a methodology to minimize the oscillations occurring in a supply chain by making use of the techniques of System Dynamics (SD) and Genetic Algorithms (GAs). System dynamics is a very efficient tool to model large and complex systems in order to understand their complex, non-linear dynamic behavior. GAs are stochastic search algorithms, based on the mechanics of natural selection and natural genetics, used to search complex and non-linear search spaces where traditional techniques may be unsuitable.
-
Date Issued
-
2005
-
Identifier
-
CFE0000683, ucf:46489
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000683
-
-
Title
-
MULTIOBJECTIVE SIMULATION OPTIMIZATION USING ENHANCED EVOLUTIONARY ALGORITHM APPROACHES.
-
Creator
-
Eskandari, Hamidreza, Geiger, Christopher, University of Central Florida
-
Abstract / Description
-
In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiple optimization problem formulation into a single objective optimization problem with a single solution. However, the drawbacks to these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, when the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, "black-box" objective functions, making a large number of solution evaluations computationally- and/or financially-prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments. SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally-expensive simulation modeling applications.
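The notion of Pareto dominance and nondominated fronts underlying FPGA/SPGA can be illustrated with the short sketch below; the two-objective test function, the replication-averaging of noisy evaluations, and all sizes are illustrative assumptions, not the algorithms proposed in the dissertation.

import numpy as np

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization of every objective).
    return np.all(a <= b) and np.any(a < b)

def nondominated_front(points):
    # Indices of solutions not dominated by any other solution (the current Pareto front).
    front = []
    for i, p in enumerate(points):
        if not any(dominates(points[j], p) for j in range(len(points)) if j != i):
            front.append(i)
    return front

# Example: noisy evaluations of a two-objective problem, averaged over replications,
# as a stand-in for the stochastic objectives discussed above.
rng = np.random.default_rng(0)
designs = rng.random((50, 1))
replications = 10
f1 = designs + 0.05 * rng.standard_normal((50, replications)).mean(axis=1, keepdims=True)
f2 = (1.0 - designs) ** 2 + 0.05 * rng.standard_normal((50, replications)).mean(axis=1, keepdims=True)
objectives = np.hstack([f1, f2])
print("non-dominated designs:", nondominated_front(objectives))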
-
Date Issued
-
2006
-
Identifier
-
CFE0001283, ucf:46905
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001283
-
-
Title
-
A COMPETITIVE RECONFIGURATION APPROACH TO AUTONOMOUS FAULT HANDLING USING GENETIC ALGORITHMS.
-
Creator
-
Zhang, Kening, DeMara, Ronald F, University of Central Florida
-
Abstract / Description
-
In this dissertation, a novel self-repair approach based on Consensus Based Evaluation (CBE) for autonomous repair of SRAM-based Field Programmable Gate Arrays (FPGAs) is developed, evaluated, and refined. An initial population of functionally identical (same input-output behavior), yet physically distinct (alternative design or place-and-route realization) FPGA configurations is produced at design time. During run-time, the CBE approach ranks these alternative configurations after evaluating their discrepancy relative to the consensus formed by the population. Through runtime competition, faults in the logical resources become occluded from the visibility of subsequent FPGA operations. Meanwhile, offspring formed through crossover and mutation of faulty and viable configurations are selected at a controlled re-introduction rate for evaluation and refurbishment. Refurbishments are evolved in-situ, with online real-time input-based performance evaluation, enhancing system availability and sustainability, creating an Organic Embedded System (OES). A fault tolerance model called N Modular Redundancy with Standby (NMRSB) is developed which combines the two popular fault tolerance techniques of NMR and Standby fault tolerance in order to facilitate the CBE approach. This dissertation develops two instances of the NMRSB system: Triple Modular Redundancy with Standby (TMRSB) and Duplex with Standby (DSB). A hypothetical Xilinx Virtex-II Pro FPGA model demonstrates their viability for various applications, including a 3-bit x 3-bit multiplier and the MCNC91 benchmark circuits. Experiments conducted on the model evaluate the performance of three new genetic operators and demonstrate progress towards a completely self-contained single-chip implementation, so that the FPGA can refurbish itself without requiring a PC host to execute the Genetic Algorithm. This dissertation presents results from the simulations of multiple applications with a CBE model implemented in the C++ programming language. Starting with an initial population of 20 and 30 viable configurations for TMRSB and DSB respectively, a single stuck-at fault is introduced in the logic resources. Fault refurbishment experiments are conducted under supervision of CBE using a fitness state evaluation function based on competing outputs, fitness adjustment, and different threshold levels. The device remains online throughout the process, by which a complete repair is realized with Hamming Distance and Bitweight voting schemes. The results indicate that a Hamming Distance TMRSB approach can prevent the most pervasive fault impacts and realize complete refurbishment. Experimental results also show that the Autonomic Layer demonstrates 100% faulty component isolation for both Functional Elements (FEs) and Autonomous Elements (AEs) with randomly injected single and multiple faults. Using logic circuits from the MCNC-91 benchmark set, availability during repair phases averaged 75.05%, 82.21%, and 65.21% for the z4ml, cm85a, and cm138a circuits respectively under stated conditions. In addition to simulation, the proposed OES architecture synthesized from HDL was prototyped on a Xilinx Virtex II Pro FPGA device supporting partial reconfiguration to demonstrate the feasibility of intrinsic regeneration of the selected circuit.
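The consensus-based discrepancy ranking described above can be illustrated in a few lines: form a bitwise-majority consensus over the outputs of the alternative configurations and rank each configuration by its Hamming distance from that consensus. The fault location and population size below are made up.

import numpy as np

rng = np.random.default_rng(0)
n_configs, n_outputs = 6, 16

# Output vectors produced by functionally identical FPGA configurations on the same inputs;
# configuration 0 is given a simulated stuck-at fault that corrupts some of its output bits.
outputs = np.tile(rng.integers(0, 2, n_outputs), (n_configs, 1))
outputs[0, [2, 7, 11]] ^= 1

# Consensus = bitwise majority vote across the population of configurations.
consensus = (outputs.sum(axis=0) > n_configs / 2).astype(int)

# Discrepancy of each configuration relative to the consensus (Hamming distance);
# the largest discrepancy flags the configuration whose resources are likely faulty.
hamming = (outputs ^ consensus).sum(axis=1)
print("discrepancies:", hamming, "-> suspect configuration:", int(np.argmax(hamming)))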
-
Date Issued
-
2008
-
Identifier
-
CFE0002280, ucf:47849
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002280
-
-
Title
-
CONVERGENCE OF THE MEAN SHIFT ALGORITHM AND ITS GENERALIZATIONS.
-
Creator
-
Hu, Ting, Li, Xin, University of Central Florida
-
Abstract / Description
-
Mean shift is an effective iterative algorithm widely used in image analysis tasks such as tracking, image segmentation, smoothing, filtering, and edge detection. It iteratively estimates the modes of the probability density function of a set of sample data points within a region. Mean shift was invented in 1975, but it was not widely used until the work by Cheng in 1995. After that, it became popular in computer vision. However, the convergence, a key property of any iterative algorithm, was rigorously proved only very recently, and under strong assumptions. In this thesis, the method of mean shift is first introduced systematically and then the convergence is established under more relaxed assumptions. Finally, a generalization of the mean shift method is also given for the estimation of the probability density function using generalized multivariate smoothing functions, to meet the need for more real life applications.
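A standard mean-shift iteration with a Gaussian kernel (the textbook form of the algorithm discussed above, not the thesis's generalized smoothing functions) looks like this; the data, bandwidth, and tolerance are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
# Samples drawn from two clusters; mean shift climbs to the density mode nearest its start.
data = np.vstack([rng.normal(0.0, 0.5, (200, 2)), rng.normal(4.0, 0.5, (200, 2))])

def mean_shift_point(x, data, bandwidth=0.8, tol=1e-6, max_iter=200):
    # Iterate the mean-shift update with a Gaussian kernel until the shift is negligible.
    for _ in range(max_iter):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))   # kernel weights
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()                      # weighted sample mean
        if np.linalg.norm(x_new - x) < tol:                                    # convergence check
            return x_new
        x = x_new
    return x

print(mean_shift_point(np.array([0.8, -0.5]), data))   # converges near the (0, 0) mode
print(mean_shift_point(np.array([3.2, 4.6]), data))    # converges near the (4, 4) mode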
-
Date Issued
-
2011
-
Identifier
-
CFE0004059, ucf:49133
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004059
-
-
Title
-
LEARNING FROM GEOMETRY IN LEARNING FOR TACTICAL AND STRATEGIC DECISION DOMAINS.
-
Creator
-
Gauci, Jason, Stanley, Kenneth, University of Central Florida
-
Abstract / Description
-
Artificial neural networks (ANNs) are an abstraction of the low-level architecture of biological brains that are often applied in general problem solving and function approximation. Neuroevolution (NE), i.e. the evolution of ANNs, has proven effective at solving problems in a variety of domains. Information from the domain is input to the ANN, which outputs its desired actions. This dissertation presents a new NE algorithm called Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT), based on a novel indirect encoding of ANNs. The key insight in HyperNEAT is to make the algorithm aware of the geometry in which the ANNs are embedded and thereby exploit such domain geometry to evolve ANNs more effectively. The dissertation focuses on applying HyperNEAT to tactical and strategic decision domains. These domains involve simultaneously considering short-term tactics while also balancing long-term strategies. Board games such as checkers and Go are canonical examples of such domains; however, they also include real-time strategy games and military scenarios. The dissertation details three proposed extensions to HyperNEAT designed to work in tactical and strategic decision domains. The first is an action selector ANN architecture that allows the ANN to indicate its judgements on every possible action all at once. The second technique is called substrate extrapolation. It allows learning basic concepts at a low resolution, and then increasing the resolution to learn more advanced concepts. The final extension is geometric game-tree pruning, whereby HyperNEAT can endow the ANN with the ability to focus on specific areas of a domain (such as a checkers board) that deserve more inspection. The culminating contribution is to demonstrate the ability of HyperNEAT with these extensions to play Go, a most challenging game for artificial intelligence, by combining HyperNEAT with UCT.
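The indirect-encoding idea can be sketched as follows: a pattern function of neuron coordinates is queried for every source/target pair to produce the substrate's weight matrix. The fixed sinusoidal function below is only a stand-in for the CPPN that HyperNEAT actually evolves, and the grid size and pruning threshold are arbitrary.

import numpy as np

# Substrate: neurons laid out on a 2D grid, so each neuron has a geometric coordinate.
xs = np.linspace(-1.0, 1.0, 5)
substrate = np.array([(x, y) for x in xs for y in xs])        # 25 neuron positions

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved CPPN: the connection weight is a smooth function of the
    # coordinates of the source and target neurons (the real CPPN is evolved by NEAT).
    return np.sin(np.pi * (x1 - x2)) * np.exp(-((y1 - y2) ** 2))

# Indirect encoding: query the pattern function for every source/target pair to produce
# the full connection-weight matrix, instead of evolving each weight directly.
weights = np.array([[cppn(*src, *dst) for dst in substrate] for src in substrate])
weights[np.abs(weights) < 0.2] = 0.0          # prune weak connections (typical thresholding step)
print(weights.shape, "connections expressed:", int(np.count_nonzero(weights)))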
-
Date Issued
-
2010
-
Identifier
-
CFE0003464, ucf:48962
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003464
-
-
Title
-
Self-Scaling Evolution of Analog Computation Circuits.
-
Creator
-
Pyle, Steven, DeMara, Ronald, Vosoughi, Azadeh, Chanda, Debashis, University of Central Florida
-
Abstract / Description
-
Energy and performance improvements of continuous-time analog-based computation for selected applications offer an avenue to continue improving the computational ability of tomorrow's electronic devices at current technology scaling limits. However, analog computation is plagued by the difficulty of designing complex computational circuits, programmability, as well as the inherent lack of accuracy and precision when compared to digital implementations. In this thesis, evolutionary algorithm-based techniques are utilized within a reconfigurable analog fabric to realize an automated method of designing analog-based computational circuits while adapting the functional range to improve performance. A Self-Scaling Genetic Algorithm is proposed to adapt solutions to computationally-tractable ranges in hardware-constrained analog reconfigurable fabrics. It operates by utilizing a Particle Swarm Optimization (PSO) algorithm that operates synergistically with a Genetic Algorithm (GA) to adaptively scale and translate the functional range of computational circuits composed of high-level or low-level Computational Analog Elements to improve performance and realize functionality otherwise unobtainable on the intrinsic platform. The technique is demonstrated by evolving square, square-root, cube, and cube-root analog computational circuits on the Cypress PSoC-5LP System-on-Chip. Results indicate that the Self-Scaling Genetic Algorithm improves our error metric on average 7.18-fold, up to 12.92-fold for computational circuits that produce outputs beyond device range. Results were also favorable compared to previous works, which utilized extrinsic evolution of circuits with much greater complexity than was possible on the PSoC-5LP.
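One way to picture the scale-and-translate step is the sketch below, where a small particle swarm searches for the (scale, offset) pair that maps a hypothetical circuit's out-of-range output back onto the target function; the circuit model, swarm parameters, and error metric are all assumptions rather than the PSoC-5LP implementation.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
target = x ** 2                                   # desired computation: a "square" circuit

def candidate_circuit(x):
    # Stand-in for an evolved analog circuit whose raw output drifts outside the usable range.
    return 3.3 * x ** 2 + 1.1

def error(params):
    # Error metric after scaling/translating the raw output back toward the target range.
    a, b = params
    return np.mean((a * candidate_circuit(x) + b - target) ** 2)

# Minimal particle swarm over the (scale, offset) pair, standing in for the PSO that
# works alongside the GA in the approach described above.
n_particles, iters = 20, 200
pos = rng.uniform(-2.0, 2.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([error(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    errs = np.array([error(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()
print("scale, offset:", gbest, "error:", error(gbest))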
-
Date Issued
-
2015
-
Identifier
-
CFE0005866, ucf:50873
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005866
-
-
Title
-
AN INVERSE ALGORITHM TO ESTIMATE THERMAL CONTACT RESISTANCE.
-
Creator
-
Gill, Jennifer, Kassab, Alain, University of Central Florida
-
Abstract / Description
-
Thermal systems often feature composite regions that are mechanically mated. In general, there exists a significant temperature drop across the interface between such regions, which may be composed of similar or different materials. The parameter characterizing this temperature drop is the thermal contact resistance, which is defined as the ratio of the temperature drop to the heat flux normal to the interface. The thermal contact resistance is due to roughness effects between mating surfaces, which cause certain regions of the mating surfaces to lose contact, thereby creating gaps. In these gap regions, the principal modes of heat transfer are conduction across the contacting regions of the interface, conduction or natural convection in the fluid filling the gap regions of the interface, and radiation across the gap surfaces. Moreover, the contact resistance is a function of contact pressure, as this can significantly alter the topology of the contact region. The thermal contact resistance is a phenomenologically complex function and can significantly alter predictions of thermal models of complex multi-component structures. Accurate estimates of thermal contact resistances are important in engineering calculations and find application in thermal analyses ranging from relatively simple layered and composite materials to more complex biomaterials. There have been many studies devoted to the theoretical prediction of thermal contact resistance, and although general theories have been somewhat successful in predicting thermal contact resistances, the most reliable results have been obtained experimentally. This is due to the fact that the nature of thermal contact resistance is quite complex and depends on many parameters, including the types of mating materials, surface characteristics of the interfacial region such as roughness and hardness, and the contact pressure distribution. In experiments, temperatures are measured at a certain number of locations, usually close to the contact surface, and these measurements are used as inputs to a parameter estimation procedure to arrive at the sought-after thermal contact resistance. Most studies seek a single value for the contact resistance, while the resistance may in fact also vary spatially. In this thesis, an inverse problem (IP) is formulated to estimate the spatial variation of the thermal contact resistance along an interface in a two-dimensional configuration. Temperatures measured at discrete locations using embedded sensors appropriately placed in proximity to the interface provide the additional information required to solve the inverse problem. A superposition method serves to determine sensitivity coefficients and provides guidance in the location of the measuring points. Temperature measurements are then used to define a regularized quadratic functional that is minimized to yield the contact resistance between the two mating surfaces. A boundary element method (BEM) analysis provides the temperature field under current estimates of the contact resistance in the solution of the inverse problem when the geometry of interest is not regular, while an analytical solution can be used for regular geometries. Minimization of the IP functional is carried out by the Levenberg-Marquardt method or by a Genetic Algorithm, depending on the problem under consideration. The L-curve method of Hansen is used to choose the optimal regularization parameter. A series of numerical examples are provided to demonstrate and validate the approach.
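A heavily simplified version of the regularized estimation step is sketched below: a made-up linear sensitivity matrix stands in for the BEM/analytical forward model, and a direct Tikhonov solve stands in for Levenberg-Marquardt or the GA, with the regularization parameter merely scanned rather than chosen by the L-curve.

import numpy as np

rng = np.random.default_rng(0)
n_segments, n_sensors = 10, 25

# Hypothetical sensitivity matrix from a superposition analysis: entry (i, k) is the response of
# sensor i to a unit contact resistance on interface segment k (a real study would compute these
# with the BEM or an analytical solution; here they are smooth made-up kernels).
xs = np.linspace(0.0, 1.0, n_sensors)[:, None]
xk = np.linspace(0.0, 1.0, n_segments)[None, :]
S = np.exp(-((xs - xk) ** 2) / 0.02)

# Synthetic "true" spatially varying contact resistance and noisy sensor temperatures.
R_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.linspace(0.0, 1.0, n_segments))
T_meas = S @ R_true + 0.01 * rng.standard_normal(n_sensors)

def solve(lam):
    # Tikhonov-regularized least squares: minimize ||S R - T||^2 + lam ||R||^2.
    A = S.T @ S + lam * np.eye(n_segments)
    return np.linalg.solve(A, S.T @ T_meas)

# Scan the regularization parameter; the L-curve corner would normally pick lam.
for lam in [1e-6, 1e-3, 1e-1]:
    R_est = solve(lam)
    print(f"lam={lam:g}  residual={np.linalg.norm(S @ R_est - T_meas):.4f}  "
          f"max error vs true={np.abs(R_est - R_true).max():.4f}")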
-
Date Issued
-
2005
-
Identifier
-
CFE0000748, ucf:46582
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000748
-
-
Title
-
EVOLUTIONARY OPTIMIZATION OF SUPPORT VECTOR MACHINES.
-
Creator
-
Gruber, Fred, Rabelo, Luis, University of Central Florida
-
Abstract / Description
-
Support vector machines are a relatively new approach for creating classifiers that have become increasingly popular in the machine learning community. They present several advantages over other methods like neural networks in areas like training speed, convergence, complexity control of the classifier, as well as a stronger mathematical background based on optimization and statistical learning theory. This thesis deals with the problem of model selection with support vector machines, that is, the problem of finding the optimal parameters that will improve the performance of the algorithm. It is shown that genetic algorithms provide an effective way to find the optimal parameters for support vector machines. The proposed algorithm is compared with a backpropagation Neural Network in a dataset that represents individual models for electronic commerce.
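A minimal sketch of GA-based model selection for an RBF support vector machine (not the thesis's algorithm or dataset) is shown below, using scikit-learn for the SVM and cross-validation; the synthetic dataset, population size, and mutation scale are arbitrary.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)  # stand-in dataset

def fitness(ind):
    # Cross-validated accuracy of an RBF SVM with the individual's (C, gamma) parameters.
    C, gamma = 10.0 ** ind[0], 10.0 ** ind[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Tiny generational GA over (log10 C, log10 gamma).
pop = rng.uniform(-3.0, 3.0, (12, 2))
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                 # truncation selection: keep the best half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        child = np.where(rng.random(2) < 0.5, a, b)        # uniform crossover
        child = child + rng.normal(0.0, 0.3, 2)            # Gaussian mutation
        children.append(np.clip(child, -3.0, 3.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best C, gamma:", 10.0 ** best[0], 10.0 ** best[1])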
-
Date Issued
-
2004
-
Identifier
-
CFE0000244, ucf:46251
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000244
-
-
Title
-
THE PROTEOMICS APPROACH TO EVOLUTIONARY COMPUTATION: AN ANALYSIS OF PROTEOME-BASED LOCATION INDEPENDENT REPRESENTATIONS BASED ON THE PROPORTIONAL GENETIC ALGORITHM.
-
Creator
-
Garibay, Ivan, Wu, Annie, University of Central Florida
-
Abstract / Description
-
As the complexity of our society and computational resources increases, so does the complexity of the problems that we approach using evolutionary search techniques. There are recent approaches to deal with the problem of scaling evolutionary methods to cope with highly complex, difficult problems. Many of these approaches are biologically inspired and share an underlying principle: a problem representation based on basic representational building blocks that interact and self-organize into complex functions or designs. The observation from the central dogma of molecular biology that proteins are the basic building blocks of life, and the recent advances in proteomics on analysis of structure, function and interaction of entire protein complements, lead us to propose a unifying framework of thought for these approaches: the proteomics approach. This thesis proposes to investigate whether the self-organization of protein analogous structures at the representation level can increase the degree of complexity and "novelty" of solutions obtainable using evolutionary search techniques. In order to do so, we identify two fundamental aspects of this transition: (1) proteins interact in a three dimensional medium analogous to a multiset; and (2) proteins are functional structures. The first aspect is foundational for understanding of the second. This thesis analyzes the first aspect. It investigates the effects of using a genome to proteome mapping on evolutionary computation. This analysis is based on a genetic algorithm (GA) with a string to multiset mapping that we call the proportional genetic algorithm (PGA), and it focuses on the feasibility and effectiveness of this mapping. This mapping leads to a fundamental departure from typical EC methods: using a multiset of proteins as an intermediate mapping results in a completely location independent problem representation where the location of the genes in a genome has no effect on the fitness of the solutions. Completely location independent representations, by definition, do not suffer from traditional EC hurdles associated with the location of the genes or positional effect in a genome. Such representations have the ability to self-organize into a genomic structure that appears to favor positive correlations between form and quality of represented solutions. Completely location independent representations also introduce new problems of their own, such as the need for large alphabets of symbols and the theoretical need for larger representation spaces than traditional approaches. Overall, these representations perform as well as or better than traditional representations, and they appear to be particularly good for the class of problems involving proportions or multisets. This thesis concludes that the use of protein analogous structures as an intermediate representation in evolutionary computation is not only feasible but in some cases advantageous. In addition, it lays the groundwork for further research on proteins as functional self-organizing structures capable of building increasingly complex functionality, and as basic units of problem representation for evolutionary computation.
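The string-to-multiset mapping can be illustrated with a small sketch: fitness depends only on the proportions of symbols in the genome, so shuffling gene positions leaves fitness unchanged. The target proportions and GA settings below are invented for illustration and are not the thesis's test problems.

import random
from collections import Counter

random.seed(0)
ALPHABET = "ABCD"
GENOME_LEN, POP, GENS = 60, 40, 60
TARGET = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}     # desired symbol proportions (example problem)

def decode(genome):
    # Genome -> multiset of symbols -> proportions; gene positions are irrelevant.
    counts = Counter(genome)
    return {s: counts.get(s, 0) / len(genome) for s in ALPHABET}

def fitness(genome):
    props = decode(genome)
    return -sum((props[s] - TARGET[s]) ** 2 for s in ALPHABET)   # higher is better

def mutate(genome, rate=0.02):
    return "".join(random.choice(ALPHABET) if random.random() < rate else g for g in genome)

pop = ["".join(random.choice(ALPHABET) for _ in range(GENOME_LEN)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENOME_LEN)
        children.append(mutate(a[:cut] + b[cut:]))               # one-point crossover + mutation
    pop = parents + children

best = max(pop, key=fitness)
shuffled = "".join(random.sample(best, len(best)))
print(decode(best))
print("fitness unchanged after shuffling genes:", fitness(best) == fitness(shuffled))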
-
Date Issued
-
2004
-
Identifier
-
CFE0000311, ucf:46307
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000311
-
-
Title
-
ANALYZING THE EFFECTS OF MODULARITY ON SEARCH SPACES.
-
Creator
-
Garibay, Ozlem, Wu, Annie, University of Central Florida
-
Abstract / Description
-
We are continuously challenged by ever increasing problem complexity and the need to develop algorithms that can solve complex problems and solve them within a reasonable amount of time. Modularity is thought to reduce problem complexity by decomposing large problems into smaller and less complex subproblems. In practice, introducing modularity into evolutionary algorithm representations appears to improve search performance; however, how and why modularity improves performance is not well understood. In this thesis, we seek to better understand the effects of modularity on search. In particular, what are the effects of module creation on the search space structure, and how do these structural changes affect performance? We define a theoretical and empirical framework to study modularity in evolutionary algorithms. Using this framework, we provide evidence of the following. First, not all types of modularity have an effect on search. We can have highly modular spaces that in essence are equivalent to simpler non-modular spaces. This is the case because these spaces achieve a higher degree of modularity without changing the fundamental structure of the search space. Second, for the cases when modularity actually has an effect on the fundamental structure of the search space, if left without guidance, it would only crowd and complicate the space structure, resulting in a harder space for most search algorithms. Finally, we have the case when modularity not only has an effect on the search space structure, but, most importantly, module creation can be guided by problem domain knowledge. When this knowledge can be used to estimate the value of a module in terms of its contribution toward building the solution, then modularity is extremely effective. It is in this last case that creating high value modules or low value modules has a direct and decisive impact on performance. The results presented in this thesis help to better understand, in a principled way, the effects of modularity on search. Better understanding the effects of modularity on search is a step forward in the larger issue of evolutionary search applied to increasingly complex problems.
-
Date Issued
-
2008
-
Identifier
-
CFE0002490, ucf:47680
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002490
-
-
Title
-
DYNAMIC SHARED STATE MAINTENANCE IN DISTRIBUTED VIRTUAL ENVIRONMENTS.
-
Creator
-
Hamza-Lup, Felix George, Hughes, Charles, University of Central Florida
-
Abstract / Description
-
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual...
Show moreAdvances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual components. One of the challenges in such environments is maintaining a consistent view of the dynamic shared state in the presence of inevitable network latency and jitter. A consistent view of the shared scene will significantly increase the sense of presence among participants and facilitate their interactive collaboration. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model.A review of the literature illustrates that the techniques for consistency maintenance in distributed Virtual Reality (VR) environments can be roughly grouped into three categories: centralized information management, prediction through dead reckoning algorithms, and frequent state regeneration. Additional resource management methods can be applied across these techniques for shared state consistency improvement. Some of these techniques are related to the systems infrastructure, others are related to the human nature of the participants (e.g., human perceptual limitations, area of interest management, and visual and temporal perception).An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system real-time behavior and scalability.
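The abstract reports an objective assessment method based on quaternion calculus for measuring how well the dynamic shared state stays consistent across sites. The following is a minimal sketch of one such measurement, assuming orientation is part of the shared state and that divergence is scored as the angle of the relative rotation between the local and the remote copy; the function names and the specific metric are illustrative, not taken from the dissertation.

```python
import math

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    # Hamilton product of two unit quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def orientation_error_deg(q_local, q_remote):
    """Angle (degrees) of the rotation taking the remote replica's
    orientation to the locally tracked orientation; 0 means the two
    sites agree exactly on this component of the shared state."""
    r = quat_multiply(q_remote, quat_conjugate(q_local))
    # Clamp to guard against floating-point drift outside [-1, 1].
    w = max(-1.0, min(1.0, abs(r[0])))
    return math.degrees(2.0 * math.acos(w))

# Example: a small divergence between the local and the remote copy.
q_local  = (0.9990482, 0.0, 0.0436194, 0.0)   # ~5 degrees about Y
q_remote = (1.0, 0.0, 0.0, 0.0)               # identity
print(round(orientation_error_deg(q_local, q_remote), 2))  # ~5.0
```

Logging such an angle per shared entity over a session would give a latency-independent, objective trace of shared-state divergence between sites.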
-
Date Issued
-
2004
-
Identifier
-
CFE0000096, ucf:46152
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000096
-
-
Title
-
PLANNING AND SCHEDULING FOR LARGE-SCALE DISTRIBUTED SYSTEMS.
-
Creator
-
Yu, Han, Marinescu, Dan, University of Central Florida
-
Abstract / Description
-
Many applications require computing resources well beyond those available on any single system. Simulations of atomic and subatomic systems with applications to materials science, computations related to the study of the natural sciences, and computer-aided design are examples of applications that can benefit from the resource-rich environment provided by a large collection of autonomous systems interconnected by high-speed networks. To transform such a collection of systems into a user's virtual machine, we have to develop new algorithms for coordination, planning, scheduling, resource discovery, and other functions that can be automated. We can then develop societal services based upon these algorithms, which hide the complexity of the computing system from users. In this dissertation, we address the problem of planning and scheduling for large-scale distributed systems. We discuss a model of the system, analyze the need for planning, scheduling, and plan switching to cope with a dynamically changing environment, present algorithms for these three functions, report simulation results that evaluate the performance of the algorithms, and introduce an architecture for an intelligent large-scale distributed system.
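The abstract argues for plan switching as a way to cope with a dynamically changing environment. The following is a minimal sketch of that idea under entirely illustrative assumptions: a plan is a list of (work, site) assignments, a crude completion-time estimate drives the decision, and a margin guards against thrashing. The dissertation's actual planning and scheduling algorithms are not reproduced here.

```python
def estimated_completion(plan, available_cpu_per_site):
    """Estimate remaining time if every task in `plan` runs on its
    assigned site with the currently available CPU share."""
    return sum(task_work / max(available_cpu_per_site[site], 1e-9)
               for task_work, site in plan)

def maybe_switch(current, alternatives, available_cpu_per_site, margin=0.9):
    """Switch to an alternative plan only if it is expected to finish in
    less than `margin` times the current plan's estimate, so that small
    fluctuations in resource availability do not cause plan thrashing."""
    best = current
    best_eta = estimated_completion(current, available_cpu_per_site)
    for plan in alternatives:
        eta = estimated_completion(plan, available_cpu_per_site)
        if eta < margin * best_eta:
            best, best_eta = plan, eta
    return best

# Each task is (work units, assigned site); availability changes over time.
plan_a = [(100, "site1"), (50, "site2")]
plan_b = [(100, "site2"), (50, "site3")]
availability = {"site1": 0.2, "site2": 1.0, "site3": 0.8}
print(maybe_switch(plan_a, [plan_b], availability) is plan_b)  # True here
```

The design point this is meant to illustrate is only that plan switching needs both a re-evaluation trigger (changed resource availability) and a hysteresis margin, not the particular cost model used above.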
-
Date Issued
-
2005
-
Identifier
-
CFE0000781, ucf:46595
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000781
-
-
Title
-
ALGORITHMS FOR HAPLOTYPE INFERENCE AND BLOCK PARTITIONING.
-
Creator
-
Vijaya Satya, Ravi, Mukherjee, Amar, University of Central Florida
-
Abstract / Description
-
The completion of the Human Genome Project in 2003 paved the way for studies to better understand and catalog variation in the human genome. The International HapMap Project was started in 2002 with the aim of identifying genetic variation in the human genome and studying the distribution of genetic variation across populations of individuals. The information collected by the HapMap Project will enable researchers to associate genetic variations with phenotypic variations.
Single Nucleotide Polymorphisms (SNPs) are loci in the genome where two individuals differ in a single base. It is estimated that there are approximately ten million SNPs in the human genome. These ten million SNPs are not completely independent of each other: blocks (contiguous regions) of neighboring SNPs on the same chromosome are inherited together. The pattern of SNPs on a block of the chromosome is called a haplotype. Each block might contain a large number of SNPs, but a small subset of these SNPs is sufficient to uniquely identify each haplotype in the block. The haplotype map, or HapMap, is a map of these haplotype blocks. Haplotypes, rather than individual SNP alleles, are expected to affect a disease phenotype.
The human genome is diploid, meaning that in each cell there are two copies of each chromosome; that is, each individual has two haplotypes in any region of the chromosome. With current technology, the cost associated with empirically collecting haplotype data is prohibitively expensive. Therefore, unordered bi-allelic genotype data is collected experimentally. The genotype data gives the two alleles at each SNP locus in an individual, but does not indicate which allele is on which copy of the chromosome. This necessitates computational techniques for inferring haplotypes from genotype data; this computational problem is called the haplotype inference problem.
Many statistical approaches have been developed for the haplotype inference problem. Some of these statistical methods have been shown to be reasonably accurate on real genotype data, but they are very computation-intensive. With the International HapMap Project collecting information from nearly ten million SNPs, and with association studies involving thousands of individuals being undertaken, there is a need for more efficient methods for haplotype inference. This dissertation is an effort to develop efficient perfect-phylogeny-based combinatorial algorithms for haplotype inference.
The perfect phylogeny haplotyping (PPH) problem is to derive a set of haplotypes for a given set of genotypes with the condition that the haplotypes describe a perfect phylogeny. The perfect phylogeny approach to haplotype inference is applicable to the human genome because of its block structure. An important contribution of this dissertation is an optimal O(nm)-time algorithm for the PPH problem, where n is the number of genotypes and m is the number of SNPs involved. The complexity of earlier algorithms for this problem was O(nm^2). The O(nm) complexity was achieved by applying transformations to the input data and by making use of the FlexTree data structure, developed as part of this dissertation work, which represents all possible PPH solutions for a given set of genotypes.
Real genotype data does not always admit a perfect phylogeny, even within a block of the human genome. Therefore, it is necessary to extend the perfect phylogeny approach to accommodate deviations from perfect phylogeny. Such deviations might occur because of recombination events and repeated or back mutations (also referred to as homoplasy events). Another contribution of this dissertation is a set of fixed-parameter tractable algorithms for constructing near-perfect phylogenies with homoplasy events. For the problem of constructing a near-perfect phylogeny with q homoplasy events, the algorithm presented here takes O(nm^2+m^(n+m)) time. Empirical analysis on simulated data shows that this algorithm produces more accurate results than PHASE (a popular haplotype inference program), while being approximately 1000 times faster than PHASE.
Another important problem when dealing with real genotype or haplotype data is the presence of missing entries. The Incomplete Perfect Phylogeny (IPP) problem is to construct a perfect phylogeny on a set of haplotypes with missing entries. The Incomplete Perfect Phylogeny Haplotyping (IPPH) problem is to construct a perfect phylogeny on a set of genotypes with missing entries. Both the IPP and IPPH problems have been shown to be NP-hard. Earlier approaches to both of these problems dealt with restricted versions, where the root is either available or can be trivially reconstructed from the data, or where certain assumptions were made about the data. We make some novel observations about these problems and present efficient algorithms for unrestricted versions of them. The algorithms have worst-case exponential time complexity, but have been shown to be very fast on practical instances of the problem.
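The abstract turns on two concrete notions: resolving an unordered genotype into a pair of haplotypes, and the perfect phylogeny condition on the resulting haplotypes. The following is a minimal sketch of both, assuming a common 0/1/2 genotype encoding and the classical four-gamete test; the encoding and the helper names are illustrative, not the dissertation's notation, and this does not reproduce the O(nm) PPH algorithm or the FlexTree data structure.

```python
from itertools import combinations

# Assumed encoding: haplotype entries are 0/1 alleles; a genotype entry is
# 0 or 1 when the individual is homozygous and 2 when it is heterozygous.

def explains(h1, h2, genotype):
    """Do the two haplotypes h1, h2 resolve the unordered genotype?"""
    for a, b, g in zip(h1, h2, genotype):
        if g == 2:
            if a == b:            # heterozygous site needs differing alleles
                return False
        elif not (a == b == g):   # homozygous site needs matching alleles
            return False
    return True

def four_gamete_compatible(haplotypes, i, j):
    """Two SNP sites can appear on a common perfect phylogeny only if at
    most three of the four gametes 00, 01, 10, 11 occur across the set."""
    gametes = {(h[i], h[j]) for h in haplotypes}
    return len(gametes) < 4

def admits_perfect_phylogeny(haplotypes):
    m = len(haplotypes[0])
    return all(four_gamete_compatible(haplotypes, i, j)
               for i, j in combinations(range(m), 2))

h1, h2 = [0, 1, 0], [1, 1, 1]
print(explains(h1, h2, [2, 1, 2]))                      # True
print(admits_perfect_phylogeny([h1, h2, [0, 0, 0]]))    # True
```

A PPH solver must do far more than this: it has to choose, among the exponentially many haplotype pairs that explain each genotype, a combination for which the whole set passes the perfect phylogeny condition, which is what the dissertation's O(nm) algorithm addresses.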
-
Date Issued
-
2006
-
Identifier
-
CFE0001244, ucf:46894
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001244