Current Search: Evolutionary algorithms
- Title
- ELECTIMIZE: A NEW EVOLUTIONARY ALGORITHM FOR OPTIMIZATION WITH APPLICATIONS IN CONSTRUCTION ENGINEERING.
- Creator
- Abdel-Raheem, Mohamed, Khalafallah, Ahmed, University of Central Florida
- Abstract / Description
- Optimization is considered an essential step in reinforcing the efficiency of performance and economic feasibility of construction projects. In the past few decades, evolutionary algorithms (EAs) have been widely utilized to solve various types of construction-related optimization problems due to their efficiency in finding good solutions in relatively short time periods. However, in many cases, these existing evolutionary algorithms failed to identify the optimal solution to several optimization problems. As such, it is deemed necessary to develop new approaches in order to help identify better-quality solutions. This doctoral research presents the development of a new evolutionary algorithm, named "Electimize," that is based on the simulation of the flow of electric current in the branches of an electric circuit. The main motive in this research is to provide the construction industry with a robust optimization tool that overcomes some of the shortcomings of existing EAs. In solving optimization problems using Electimize, a number of wires (solution strings) composed of a number of segments are fabricated randomly. Each segment corresponds to a decision variable in the objective function. The wires are virtually connected in parallel to a source of electricity to represent an electric circuit. The electric current passing through each wire is calculated by substituting the values of the segments in the objective function. The quality of the wire is based on its global resistance, which is calculated using Ohm's law. The main objectives of this research are to 1) develop an optimization methodology that is capable of evaluating the quality of decision variable values in the solution string independently; 2) devise internal optimization mechanisms that enable the algorithm to search the solution space extensively and avoid convergence toward local optima; and 3) provide the construction industry with a reliable optimization tool that is capable of solving different classes of NP-hard optimization problems. First, internal processes are designed, modeled, and tested to enable the individual assessment of the quality of each decision variable value available in the solution space. The main principle in assessing the quality of each decision variable value individually is to use the segment resistance (local resistance) as an indicator of quality. This is accomplished by conducting a sensitivity analysis to record the change in the resistance of a control wire when a certain decision variable value is substituted into the corresponding segment of the control wire. The calculated local resistances of all segments of a wire are then normalized to ensure that their summation is equal to the global wire resistance and no violation is made of Kirchhoff's rule. A benchmark NP-hard cash flow management problem from the literature is attempted to test and validate the performance of the developed approach. Not only was Electimize able to identify the optimal solution for the problem, but it also identified ten alternative optimal solutions, outperforming the existing algorithms. Second, the internal processes for the sensitivity analysis are designed to allow for extensive search of the solution space through the generation of new wires. Every time a decision variable value is substituted in the control wire to assess its quality, a new wire that might have a better quality is generated.
To further test the capabilities of Electimize in searching the solution space, Electimize was applied to a multimodal 9-city travelling salesman problem (TSP) that had been previously designed and solved mathematically. The problem has 27 alternative optimal solutions. Electimize succeeded in identifying 21 of the 27 alternative optimal solutions in a limited time period. Moreover, Electimize was applied to a 16-city benchmark TSP (Ulysses16) and was able to identify the optimal tour and its alternative. Further, additional parameters are incorporated to 1) allow for the extensive search of the solution space, 2) prevent convergence towards local optima, and 3) increase the rate of convergence towards the global optima. These parameters are classified into two categories: 1) resistance-related parameters, and 2) solution exploration parameters. The resistance-related parameters are a) the conductor resistivity, b) its cross-sectional area, and c) the length of each segment. The main role of this set of parameters is to provide the algorithm with additional gauging parameters to help guide it towards the global optima. The solution exploration parameters include a) the heat factor, and b) the criterion for selecting the control wire. The main role of this set of parameters is to allow for an extensive search of the solution space in order to facilitate the identification of all the available alternative optimal solutions, prevent premature convergence towards local optima, and increase the rate of convergence towards the global optima. Two TSP instances (Bayg29 and ATT48) are attempted, and the results obtained illustrate that Electimize outperforms other EAs with respect to the quality of solutions obtained. Third, to test the capabilities of Electimize as a reliable optimization tool for construction optimization problems, three benchmark NP-hard construction optimization problems are attempted. The first problem is the cash flow management problem mentioned earlier. The second problem is the time-cost tradeoff problem (TCTP), used as an example of static optimization. The third problem is a site layout planning problem (SLPP), which represents dynamic optimization. When Electimize was applied to the TCTP, it succeeded in identifying the optimal solution in a single iteration using thirty solution strings, compared to the hundreds of iterations and solution strings used by EAs to solve the same problem. Electimize was also successful in solving the SLPP and outperformed the existing algorithm used to solve the problem by identifying a better optimal solution. The main contributions of this research are 1) developing a new approach and algorithm for optimization based on the simulation of the phenomenon of electrical conduction, 2) devising processes that enable assessing the quality of decision variable values independently, 3) formulating methodologies that allow for the extensive search of the solution space and identification of alternative optimal solutions, and 4) providing a robust optimization tool for decision makers and construction planners. [A toy sketch of the Ohm's-law wire evaluation appears after this record.]
- Date Issued
- 2011
- Identifier
- CFE0003954, ucf:48698
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003954
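The mechanics described above (wires as solution strings, objective value as current, Ohm's law for resistance, and Kirchhoff-consistent local resistances) are concrete enough to sketch. Below is a minimal, illustrative Python sketch, not the author's implementation; the objective function, source voltage `V`, and variable ranges are all assumptions for demonstration.

```python
# Toy sketch of Electimize-style wire evaluation (illustrative only).
import random

V = 10.0  # hypothetical source voltage (an assumption, not from the abstract)

def objective(wire):
    # Placeholder objective; in Electimize this would be the problem's
    # objective function evaluated on the wire's segment values.
    return 1.0 + sum(x * x for x in wire)

def global_resistance(wire):
    # Ohm's law: R = V / I, with the objective value acting as the current I,
    # so higher-quality wires have lower resistance.
    return V / objective(wire)

def local_resistances(control_wire):
    # Sensitivity analysis: substitute a candidate value into each segment of
    # the control wire and use the change in global resistance as a raw
    # quality signal, then normalize so the segment resistances sum to the
    # wire's global resistance (no violation of Kirchhoff's rule).
    base = global_resistance(control_wire)
    raw = []
    for i in range(len(control_wire)):
        trial = list(control_wire)
        trial[i] = random.uniform(-1.0, 1.0)  # candidate decision-variable value
        raw.append(abs(global_resistance(trial) - base) + 1e-9)
    scale = base / sum(raw)
    return [r * scale for r in raw]

# Fabricate a population of wires at random and rank them by resistance.
wires = [[random.uniform(-1.0, 1.0) for _ in range(5)] for _ in range(30)]
best = min(wires, key=global_resistance)  # lowest resistance = best wire
print(global_resistance(best), local_resistances(best))
```

Lower global resistance marks a better wire, and the normalization step mirrors the abstract's requirement that segment resistances sum to the wire's global resistance.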
- Title
- MULTIOBJECTIVE SIMULATION OPTIMIZATION USING ENHANCED EVOLUTIONARY ALGORITHM APPROACHES.
- Creator
- Eskandari, Hamidreza, Geiger, Christopher, University of Central Florida
- Abstract / Description
- In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective optimization problem into a single-objective optimization problem with a single solution. However, the drawbacks to these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, when the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, "black-box" objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments. SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), widely considered the benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications. [A minimal Pareto-dominance sketch appears after this record.]
- Date Issued
- 2006
- Identifier
- CFE0001283, ucf:46905
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001283
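Pareto optimality is the organizing concept of this abstract. The sketch below shows the standard Pareto-dominance relation and nondominated filtering in Python; it is a generic illustration (all objectives minimized), not the FPGA/SPGA operators themselves, whose new dominance concepts for noisy objectives are not specified here.

```python
# Standard Pareto dominance for minimization problems (illustrative only).
def dominates(a, b):
    """True if a is no worse than b on every objective and strictly
    better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the objective vectors not dominated by any other."""
    return [a for a in points if not any(dominates(b, a) for b in points if b != a)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(points))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

In a stochastic setting like SPGA's, each objective value would be an estimate from repeated simulation runs, and a dominance test would plausibly compare sample means with some confidence margin; the abstract does not give the exact rule.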
- Title
- THE PROTEOMICS APPROACH TO EVOLUTIONARY COMPUTATION: AN ANALYSIS OF PROTEOME-BASED LOCATION INDEPENDENT REPRESENTATIONS BASED ON THE PROPORTIONAL GENETIC ALGORITHM.
- Creator
- Garibay, Ivan, Wu, Annie, University of Central Florida
- Abstract / Description
- As the complexity of our society and computational resources increases, so does the complexity of the problems that we approach using evolutionary search techniques. There are recent approaches to deal with the problem of scaling evolutionary methods to cope with highly complex, difficult problems. Many of these approaches are biologically inspired and share an underlying principle: a problem representation based on basic representational building blocks that interact and self-organize into complex functions or designs. The observation from the central dogma of molecular biology that proteins are the basic building blocks of life, and the recent advances in proteomics on analysis of the structure, function, and interaction of entire protein complements, lead us to propose a unifying framework of thought for these approaches: the proteomics approach. This thesis proposes to investigate whether the self-organization of protein-analogous structures at the representation level can increase the degree of complexity and "novelty" of solutions obtainable using evolutionary search techniques. In order to do so, we identify two fundamental aspects of this transition: (1) proteins interact in a three-dimensional medium analogous to a multiset; and (2) proteins are functional structures. The first aspect is foundational for understanding the second. This thesis analyzes the first aspect. It investigates the effects of using a genome-to-proteome mapping on evolutionary computation. This analysis is based on a genetic algorithm (GA) with a string-to-multiset mapping that we call the proportional genetic algorithm (PGA), and it focuses on the feasibility and effectiveness of this mapping. This mapping leads to a fundamental departure from typical EC methods: using a multiset of proteins as an intermediate mapping results in a completely location-independent problem representation, where the location of the genes in a genome has no effect on the fitness of the solutions. Completely location-independent representations, by definition, do not suffer from traditional EC hurdles associated with the location of the genes or positional effect in a genome. Such representations have the ability to self-organize into a genomic structure that appears to favor positive correlations between form and quality of represented solutions. Completely location-independent representations also introduce new problems of their own, such as the need for large alphabets of symbols and the theoretical need for larger representation spaces than traditional approaches. Overall, these representations perform as well as or better than traditional representations, and they appear to be particularly good for the class of problems involving proportions or multisets. This thesis concludes that the use of protein-analogous structures as an intermediate representation in evolutionary computation is not only feasible but in some cases advantageous. In addition, it lays the groundwork for further research on proteins as functional self-organizing structures capable of building increasingly complex functionality, and as basic units of problem representation for evolutionary computation. [A small string-to-multiset mapping sketch appears after this record.]
- Date Issued
- 2004
- Identifier
- CFE0000311, ucf:46307
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000311
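The PGA's string-to-multiset mapping is simple to illustrate. Here is a small Python sketch, under the assumption that a genome is a string over a symbol alphabet and the phenotype depends only on symbol proportions; the actual PGA decoding is more involved.

```python
# String-to-multiset mapping: only symbol counts matter, not positions.
from collections import Counter

def genome_to_multiset(genome):
    """Interpret a genome string as a multiset of symbols."""
    return Counter(genome)

def proportions(genome):
    """Phenotype as symbol proportions; any permutation of the genome
    yields the same phenotype, so gene location carries no information."""
    counts = genome_to_multiset(genome)
    return {sym: n / len(genome) for sym, n in counts.items()}

print(proportions("ABBA"))                          # {'A': 0.5, 'B': 0.5}
print(proportions("AABB") == proportions("BABA"))   # True: location independent
```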
- Title
- ANALYZING THE EFFECTS OF MODULARITY ON SEARCH SPACES.
- Creator
- Garibay, Ozlem, Wu, Annie, University of Central Florida
- Abstract / Description
- We are continuously challenged by ever-increasing problem complexity and the need to develop algorithms that can solve complex problems within a reasonable amount of time. Modularity is thought to reduce problem complexity by decomposing large problems into smaller and less complex subproblems. In practice, introducing modularity into evolutionary algorithm representations appears to improve search performance; however, how and why modularity improves performance is not well understood. In this thesis, we seek to better understand the effects of modularity on search. In particular, what are the effects of module creation on the search space structure, and how do these structural changes affect performance? We define a theoretical and empirical framework to study modularity in evolutionary algorithms. Using this framework, we provide evidence of the following. First, not all types of modularity have an effect on search. We can have highly modular spaces that in essence are equivalent to simpler non-modular spaces. This is the case because these spaces achieve a higher degree of modularity without changing the fundamental structure of the search space. Second, in the cases when modularity actually has an effect on the fundamental structure of the search space, if left without guidance it would only crowd and complicate the space structure, resulting in a harder space for most search algorithms. Finally, we have the case when modularity not only has an effect on the search space structure but, most importantly, module creation can be guided by problem domain knowledge. When this knowledge can be used to estimate the value of a module in terms of its contribution toward building the solution, then modularity is extremely effective. It is in this last case that creating high-value or low-value modules has a direct and decisive impact on performance. The results presented in this thesis help to better understand, in a principled way, the effects of modularity on search. Better understanding the effects of modularity on search is a step forward in the larger issue of evolutionary search applied to increasingly complex problems. [A toy module-creation sketch appears after this record.]
- Date Issued
- 2008
- Identifier
- CFE0002490, ucf:47680
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002490
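The abstract's key distinction is between unguided module creation and module creation guided by an estimate of a module's value. The toy Python sketch below, our own illustration rather than the thesis framework, shows the intuition: encapsulating a known high-value building block as a single module symbol shrinks the effective genome the search must get right.

```python
# Toy illustration of guided module creation (hypothetical encoding).
def create_module(genome, block, symbol):
    """Replace every occurrence of a high-value block with one module symbol."""
    return genome.replace(block, symbol)

solution = "ACGTACGTTT"
module = "ACGT"   # assumed high-value block identified from domain knowledge
compressed = create_module(solution, module, "M")
print(compressed)  # 'MMTT': 4 loci to search instead of 10
```

A low-value or arbitrary block would compress just as well syntactically while steering search toward poor regions, which matches the abstract's point that module value, not modularity per se, is decisive.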
- Title
- Methods to Calculate Cut Volumes for Fault Trees with Dependencies Induced by Spatial Locations.
- Creator
- Hanes, Phillip, Wiegand, Rudolf, Wu, Annie, DeMara, Ronald, Song, Zixia, University of Central Florida
- Abstract / Description
- Fault tree analysis (FTA) is used to find and mitigate vulnerabilities in systems based on their constituent components. Methods exist to efficiently find minimal cut sets (MCS), which are combinations of components whose failure causes the overall system to fail. However, traditional FTA ignores the physical location of the components. Components in close proximity to each other could be defeated by a single event with a radius of effect, such as an explosion or fire. Events such as the Deepwater Horizon explosion and subsequent oil spill demonstrate the potentially devastating risk posed by such spatial dependencies. This motivates the search for techniques to identify this type of vulnerability. Adding physical locations to the fault tree structure can help identify possible points of failure in the overall system caused by localized disasters. Since existing FTA methods cannot address these concerns, using this information requires extending existing solution methods or developing entirely new ones. A problem complicating research in FTA is the lack of benchmark problems for evaluating methods, especially for fault trees with over one hundred components. This research presents a method of using Lindenmayer systems (L-systems) to generate fault trees that are reproducible, have properties similar to real-world designs, and are scalable while maintaining predictable structural properties. This approach will be useful for testing and analyzing different methodologies for FTA tasks at different scales and under different conditions. Using a set of benchmark fault trees derived from L-systems, three approaches to finding these vulnerabilities were explored in this research. These approaches were compared by defining a metric called "minimal cut volumes" (MCV) for describing volumes of effect that defeat the system. Since no existing methods are known for solving this problem, the methods are compared to each other to evaluate performance.
1) The control method executes traditional FTA software to find minimal cut sets (MCS), then extends this approach by searching for clusters in the resulting MCS to find MCV.
2) The next method starts by searching for clusters of components in three-dimensional space, then evaluates combinations of clusters to find MCV that defeat the system.
3) The last method uses an evolutionary algorithm to search the space directly by selecting center points, then using the radius of the smallest sphere(s) as the fitness value for identifying MCV.
Results generated using each method are presented. The performance of each method is compared to that of the control method, and its utility is evaluated accordingly. [A covering-radius fitness sketch appears after this record.]
- Date Issued
- 2018
- Identifier
- CFE0007403, ucf:52075
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007403
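The third method's fitness is the most self-contained part of the abstract: the radius of the smallest sphere, centered at an evolved point, that covers a cut set. The Python sketch below illustrates that covering-radius computation under an assumed data layout (component names mapped to 3-D coordinates); it is not the dissertation's code.

```python
# Covering radius for a minimal cut volume (MCV) candidate (illustrative).
import math

locations = {                      # hypothetical 3-D component positions
    "pump":   (0.0, 0.0, 0.0),
    "valve":  (1.0, 0.5, 0.0),
    "sensor": (4.0, 4.0, 4.0),
}

def covering_radius(center, cut_set):
    """Radius a sphere at `center` needs to defeat every component in the
    cut set; an EA can minimize this over candidate center points."""
    return max(math.dist(center, locations[c]) for c in cut_set)

print(covering_radius((0.5, 0.25, 0.0), {"pump", "valve"}))  # small, local MCV
```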
- Title
- Quality Diversity: Harnessing Evolution to Generate a Diversity of High-Performing Solutions.
- Creator
- Pugh, Justin, Stanley, Kenneth, Wu, Annie, Sukthankar, Gita, Garibay, Ivan, University of Central Florida
- Abstract / Description
- Evolution in nature has designed countless solutions to innumerable interconnected problems, giving birth to the impressive array of complex modern life observed today. Inspired by this success, the practice of evolutionary computation (EC) abstracts evolution artificially as a search operator to find solutions to problems of interest, primarily through the adaptive mechanism of survival of the fittest, where stronger candidates are pursued at the expense of weaker ones until a solution of satisfying quality emerges. At the same time, research in open-ended evolution (OEE) draws different lessons from nature, seeking to identify and recreate processes that lead to the type of perpetual innovation and indefinitely increasing complexity observed in natural evolution. New algorithms in EC such as MAP-Elites and Novelty Search with Local Competition harness the toolkit of evolution for a related purpose: finding as many types of good solutions as possible (rather than merely the single best solution). With the field in its infancy, no empirical studies previously existed comparing these so-called quality diversity (QD) algorithms. This dissertation (1) contains the first extensive and methodical effort to compare different approaches to QD (including both existing published approaches and new methods presented for the first time here) and to understand how they operate, to help inform better approaches in the future. It also (2) introduces a new technique for encoding neural networks that contain multiple sensory or output modalities for evolution with indirect encoding. Further, it (3) explores the idea that QD can act as an engine of open-ended discovery by introducing an expressive platform called Voxelbuild, where QD algorithms continually evolve robots that stack blocks in new ways. A culminating experiment (4) investigates evolution in Voxelbuild over a very long timescale. This research thus stands to advance the OEE community's desire to create and understand open-ended systems while also laying the groundwork for QD to realize its potential within EC as a means to automatically generate an endless progression of new content in real-world applications. [A minimal MAP-Elites loop appears after this record.]
- Date Issued
- 2019
- Identifier
- CFE0007513, ucf:52638
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007513
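MAP-Elites, named above, is the simplest QD algorithm to sketch: keep one elite per cell of a behavior-space grid, and only replace a cell's occupant with a fitter solution. The Python sketch below uses a toy objective and a one-dimensional behavior descriptor; it is illustrative, not the dissertation's experimental setup.

```python
# Minimal MAP-Elites loop with a toy fitness and behavior descriptor.
import random

def fitness(x):
    return -sum(v * v for v in x)        # toy objective to maximize

def behavior(x):
    return round(x[0], 1)                # behavior descriptor -> grid cell

archive = {}                             # cell -> (fitness, solution) elite
for _ in range(5000):
    if archive and random.random() < 0.9:
        parent = random.choice(list(archive.values()))[1]
        child = [v + random.gauss(0.0, 0.1) for v in parent]   # mutate an elite
    else:
        child = [random.uniform(-1.0, 1.0) for _ in range(2)]  # random restart
    cell, f = behavior(child), fitness(child)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, child)       # keep the best solution per cell
print(len(archive), "behavior cells filled")
```

The filled archive is itself the result: a diversity of high-performing solutions rather than a single champion.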
- Title
- Novelty-Assisted Interactive Evolution of Control Behaviors.
- Creator
- Woolley, Brian, Stanley, Kenneth, Hughes, Charles, Gonzalez, Avelino, Wu, Annie, Hancock, Peter, University of Central Florida
- Abstract / Description
- The field of evolutionary computation is inspired by the achievements of natural evolution, in which there is no final objective. Yet the pursuit of objectives is ubiquitous in simulated evolution because evolutionary algorithms that can consistently achieve established benchmarks are lauded as successful, thus reinforcing this paradigm. A significant problem is that such objective approaches assume that intermediate stepping stones will increasingly resemble the final objective when in fact they often do not. The consequence is that while solutions may exist, searching for such objectives may not discover them. This problem with objectives is demonstrated through an experiment in this dissertation that shows how images discovered serendipitously during interactive evolution in an online system called Picbreeder cannot be rediscovered when they become the final objective of the very same algorithm that originally evolved them. This negative result demonstrates that pursuing an objective limits evolution by selecting offspring based only on the final objective. Furthermore, even when high fitness is achieved, the experimental results suggest that the resulting solutions are typically brittle, piecewise representations that only perform well by exploiting idiosyncratic features in the target. In response to this problem, the dissertation next highlights the importance of leveraging human insight during search as an alternative to articulating explicit objectives. In particular, a new approach called novelty-assisted interactive evolutionary computation (NA-IEC) combines human intuition with a method called novelty search for the first time to facilitate the serendipitous discovery of agent behaviors. In this approach, the human user directs evolution by selecting what is interesting from the on-screen population of behaviors. However, unlike in typical IEC, the user can then request that the next generation be filled with novel descendants, as opposed to only the direct descendants of typical IEC. The result of such an approach, unconstrained by a priori objectives, is that it traverses key stepping stones that ultimately accumulate meaningful domain knowledge. To establish this new evolutionary approach based on the serendipitous discovery of key stepping stones during evolution, this dissertation makes four key contributions: (1) The first establishes the deleterious effects of a priori objectives on evolution. The second (2) introduces the NA-IEC approach as an alternative to traditional objective-based approaches. The third (3) is a proof-of-concept that demonstrates how combining human insight with novelty search finds solutions significantly faster and at lower genomic complexities than fully automated processes, including pure novelty search, suggesting an important role for human users in the search for solutions. Finally, (4) the NA-IEC approach is applied in a challenge domain wherein leveraging human intuition and domain knowledge accelerates the evolution of solutions for the nontrivial octopus-arm control task. The culmination of these contributions demonstrates the importance of incorporating human insights into simulated evolution as a means of discovering better solutions more rapidly than traditional approaches. [A k-nearest-neighbor novelty-score sketch appears after this record.]
- Date Issued
- 2012
- Identifier
- CFE0004462, ucf:49335
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004462
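Novelty search, the automated half of NA-IEC, scores candidates by how different their behaviors are from what has been seen before. A common formulation (assumed here; the abstract does not spell it out) is the mean distance to the k nearest neighbors in behavior space, computed against the population and an archive:

```python
# Novelty as mean behavior-space distance to the k nearest neighbors.
import math

def novelty(candidate, seen, k=3):
    """Higher scores mean the candidate's behavior is farther from anything
    in the population/archive, so search pressure favors new behaviors."""
    dists = sorted(math.dist(candidate, other) for other in seen)
    return sum(dists[:k]) / min(k, len(dists))

population_behaviors = [(0.0, 0.0), (0.1, 0.0), (0.9, 0.9), (0.5, 0.4)]
print(novelty((1.0, 1.0), population_behaviors))
```

In NA-IEC, the user's on-screen selection replaces the objective entirely; novelty only proposes diverse descendants for the user to judge.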
- Title
- IMPROVING AIRLINE SCHEDULE RELIABILITY USING A STRATEGIC MULTI-OBJECTIVE RUNWAY SLOT ASSIGNMENT SEARCH HEURISTIC.
- Creator
- Hafner, Florian, Sepulveda, Alejandro, University of Central Florida
- Abstract / Description
- Improving the predictability of airline schedules in the National Airspace System (NAS) has been a constant endeavor, particularly as system delays grow with ever-increasing demand. Airline schedules need to be resistant to perturbations in the system, including Ground Delay Programs (GDPs) and inclement weather. The strategic search heuristic proposed in this dissertation significantly improves airline schedule reliability by assigning airport departure and arrival slots to each flight in the schedule across a network of airports. This is performed using a multi-objective optimization approach that is primarily based on historical flight and taxi times but also includes certain airline, airport, and FAA priorities. The intent of this algorithm is to produce a more reliable, robust schedule that operates in today's environment as well as tomorrow's 4-Dimensional Trajectory Controlled system as described in the FAA's Next Generation ATM system (NextGen). This novel airline schedule optimization approach is implemented using a multi-objective evolutionary algorithm that is capable of incorporating limited airport capacities. The core of the fitness function is an extensive database of historic operating times for flight and ground operations collected over a two-year period, based on ASDI and BTS data. Empirical distributions based on this data reflect the probability that flights encounter various flight and taxi times. The fitness function also adds the ability to define priorities for certain flights based on aircraft size, flight time, and airline usage. The algorithm is applied to airline schedules for two primary US airports: Chicago O'Hare and Atlanta Hartsfield-Jackson. The effects of this multi-objective schedule optimization are evaluated in a variety of scenarios, including periods of high, medium, and low demand. The schedules generated by the optimization algorithm were evaluated using a simple queuing simulation model implemented in AnyLogic. The scenarios were simulated in AnyLogic using two basic setups: (1) using modes of flight and taxi times that reflect highly predictable 4-Dimensional Trajectory Control operations, and (2) using full distributions of flight and taxi times reflecting current-day operations. The simulation analysis showed significant improvements in reliability as measured by the mean square difference (MSD) of filed versus simulated flight arrival and departure times. Arrivals showed the most consistent improvements of up to 80% in on-time performance (OTP). Departures showed smaller overall improvements, particularly when the optimization was performed without consideration of airport capacity. The 4-Dimensional Trajectory Control environment more than doubled the on-time performance of departures relative to the current-day, more chaotic scenarios. This research shows that airline schedule reliability can be significantly improved over a network of airports using historical flight and taxi time data. It also provides a mechanism to prioritize flights based on various airline, airport, and ATC goals. The algorithm is shown to work in today's environment as well as tomorrow's NextGen 4-Dimensional Trajectory Control setup. [A mean-square-difference reliability sketch appears after this record.]
- Date Issued
- 2008
- Identifier
- CFE0002067, ucf:47572
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002067
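The reliability metric and the empirical-distribution fitness described above are both sketchable. Below is a minimal Python sketch with hypothetical data: simulated arrival times are produced by drawing taxi times from a historic sample, and schedule reliability is scored as the mean square difference (MSD) between filed and simulated times; the dissertation's actual fitness and simulation are far richer.

```python
# MSD reliability score with taxi times drawn from an empirical sample.
import random

historic_taxi = [12, 14, 14, 15, 18, 22, 30]    # minutes, hypothetical history

def simulate_arrival(departure, flight_time):
    """Arrival time = departure + flight time + an empirical taxi draw."""
    return departure + flight_time + random.choice(historic_taxi)

def msd(filed, simulated):
    """Mean square difference; lower means a more predictable schedule."""
    return sum((f - s) ** 2 for f, s in zip(filed, simulated)) / len(filed)

filed_arrivals = [600, 615, 630]                 # minutes after midnight
simulated = [simulate_arrival(f - 90, 75) for f in filed_arrivals]
print(msd(filed_arrivals, simulated))
```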