Current Search: Geiger, Christopher
- Title
- OPTIMIZING THE GLOBAL PERFORMANCE OF BUILD-TO-ORDER SUPPLY CHAINS.
- Creator
-
Shaalan, Tarek, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Build-to-order supply chains (BOSCs) have recently received increasing attention due to the shifting focus of manufacturing companies from mass production to mass customization. This shift has generated a growing need for efficient methods to design BOSCs. This research proposes an approach for BOSC design that simultaneously considers multiple performance measures at three stages of a BOSC: Tier I suppliers, the focal manufacturing company and Tier I customers (product delivery couriers). We present a heuristic solution approach that constructs the best BOSC configuration through the selection of suppliers, manufacturing resources at the focal company and delivery couriers. The resulting configuration is the one that yields the best global performance relative to five deterministic performance measures simultaneously, some of which are nonlinear. We compare the heuristic results to those from an exact method, and the results show that the proposed approach yields BOSC configurations with near-optimal performance. The absolute deviation in mean performance across all experiments is consistently less than 4%, with a variance less than 0.5%. We propose a second heuristic approach for the stochastic BOSC environment. Compared to the deterministic BOSC performance, experimental results show that optimizing BOSC performance according to stochastic local performance measures can yield a significantly different supply chain configuration. Local optimization means optimizing according to one performance measure independently of the other four. Using Monte Carlo simulation, we test the impact of local performance variability on the global performance of the BOSC. Experimental results show that, as variability of the local performance increases, the mean global performance decreases, while variation in the global performance increases at a steeper rate.
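The closing Monte Carlo experiment, testing how variability in the local performance measures propagates to the global BOSC performance, can be pictured with a minimal sketch. The five baseline measures and the bottleneck-style aggregation below are hypothetical placeholders, not the dissertation's actual performance model.

```python
import random
import statistics

# Hypothetical baselines for five local performance measures.  The bottleneck
# (min) aggregation is an illustrative stand-in for the dissertation's global
# performance model; its concavity is what makes higher local variance depress
# the mean global score, echoing the reported trend.
BASELINES = [0.90, 0.85, 0.80, 0.75, 0.70]

def global_performance(local_scores):
    # Global performance limited by the worst-performing local measure.
    return min(local_scores)

def monte_carlo(sigma, n_runs=10_000, seed=42):
    """Sample local-performance variability and summarize the global score."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_runs):
        locals_ = [rng.gauss(mu, sigma) for mu in BASELINES]
        scores.append(global_performance(locals_))
    return statistics.mean(scores), statistics.variance(scores)

for sigma in (0.01, 0.05, 0.10, 0.20):
    mean, var = monte_carlo(sigma)
    print(f"local sigma={sigma:.2f}  global mean={mean:.4f}  global variance={var:.6f}")
```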
- Date Issued
- 2006
- Identifier
- CFE0001411, ucf:47063
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001411
- Title
- OPTIMIZATION MODELS FOR EMERGENCY RELIEF SHELTER PLANNING FOR ANTICIPATED HURRICANE EVENTS.
- Creator
-
Sharawi, Abeer, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Natural disasters, specifically hurricanes, can cause catastrophic loss of life and property. In recent years, the United States has endured significant losses due to a series of devastating hurricanes (e.g., Hurricanes Charley and Ivan in 2004, and Hurricanes Katrina and Wilma in 2005). Several Federal authorities report that there are weaknesses in the emergency and disaster planning and response models that are currently employed in practice, thus creating a need for better decision models in emergency situations. The current models not only lack fast communication with emergency responders and the public, but are also inadequate for advising the pre-positioning of supplies at emergency shelters before the storm's impact. The problem of emergency evacuation relief shelter planning during anticipated hurricane events is addressed in this research. The shelter planning problem is modeled as a joint location-allocation-inventory problem, where the number and location of shelter facilities must be identified. In addition, the evacuating citizens must be assigned to the designated shelter facilities, and the amount of emergency supply inventory to pre-position at each facility must be determined. The objective is to minimize the total emergency evacuation cost, which is equal to the combined facility opening and preparation cost, evacuee transportation cost and emergency supply inventory cost. A review of the emergency evacuation planning literature reveals that this class of problems has largely not been addressed to date. First, the emergency evacuation relief sheltering problem is formulated under deterministic conditions as a mixed integer non-linear programming (MINLP) model. For three different evacuation scenarios, the proposed MINLP model yields a plan that identifies the locations of relief shelters for evacuees, the assignment of evacuees to those shelters and the amount of emergency supplies to stockpile in advance of an anticipated hurricane. The MINLP model is then used (with minor modifications) to explore the idea of equally distributing the evacuees across the open shelters. The results for the three different scenarios indicate that a balanced utilization of the open shelters is achieved with little increase in the total evacuation cost. Next, the MINLP is enhanced to consider the stochastic characteristics of both hurricane strength and projected trajectory, which can directly influence the storm's behavior. The hurricane's strength is based on its hurricane category according to the Saffir-Simpson Hurricane Scale. Its trajectory is represented as a Markov chain, where the storm's path is modeled as transitions among states (i.e., coordinate locations) within a spherical coordinate system. A specific hurricane that made landfall in the state of Florida is used as a test case for the model. Finally, the stochastic model is employed within a robust optimization strategy, where several probable hurricane behavioral scenarios are solved. Then, a single, robust evacuation sheltering plan is generated that provides the best results not only in terms of maximum deviation of total evacuation cost across the likely scenarios, but also in terms of maximum deviation of unmet evacuee demand at the shelter locations. The practical value of this robust plan is quite significant.
This plan should accommodate unexpected changes in the behavior of an approaching storm to a reasonable degree, with minimal negative impact on the total evacuation cost and the fulfillment of evacuee demand at the shelter locations. Most importantly, the re-allocation and re-mobilization of emergency personnel and supplies, which can cause confusion and potentially increase responders' response time to the hurricane emergency, are not required. The computational results show the promise of this research and the usefulness of the proposed models. This work is an initial step in addressing the simultaneous identification of shelter locations, assignment of citizens to those shelters, and determination of a policy for stockpiling emergency supplies in advance of a hurricane. Both the location-allocation problem and the inventory problem have been extensively and individually studied by researchers as well as practitioners. However, this joint location-allocation-inventory problem is a difficult problem to solve, especially in the presence of stochastic storm behavior. The proposed models, even in the deterministic case, are a significant step beyond the current state of the art in the area of emergency and disaster planning.
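A minimal sketch of the Markov-chain trajectory idea described above: storm positions are discrete states and the path evolves by sampled transitions. The state space and transition probabilities are illustrative placeholders, not calibrated to the Florida test case or the dissertation's spherical coordinate system.

```python
import random

# Hypothetical coarse set of storm-position states; landfall states are absorbing.
STATES = ["offshore_S", "offshore_C", "offshore_N", "landfall_S", "landfall_C", "landfall_N"]
TRANSITIONS = {
    "offshore_S": {"offshore_S": 0.3, "offshore_C": 0.3, "landfall_S": 0.4},
    "offshore_C": {"offshore_C": 0.3, "offshore_N": 0.2, "landfall_C": 0.5},
    "offshore_N": {"offshore_N": 0.4, "landfall_N": 0.6},
    "landfall_S": {"landfall_S": 1.0},
    "landfall_C": {"landfall_C": 1.0},
    "landfall_N": {"landfall_N": 1.0},
}

def simulate_track(start="offshore_S", steps=12, rng=random):
    """Simulate one storm track as a Markov chain over position states."""
    state, track = start, [start]
    for _ in range(steps):
        choices, probs = zip(*TRANSITIONS[state].items())
        state = rng.choices(choices, weights=probs, k=1)[0]
        track.append(state)
    return track

def landfall_distribution(n=20_000, seed=1):
    """Estimate the probability of landfall in each coastal segment."""
    rng = random.Random(seed)
    counts = {s: 0 for s in STATES if s.startswith("landfall")}
    for _ in range(n):
        final = simulate_track(rng=rng)[-1]
        if final in counts:
            counts[final] += 1
    return {s: c / n for s, c in counts.items()}

print(landfall_distribution())
```

In the spirit of the robust strategy described above, each sampled landfall scenario could then feed a shelter plan, with a single plan chosen against the worst-case deviation across scenarios.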
- Date Issued
- 2007
- Identifier
- CFE0001938, ucf:47446
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001938
- Title
- MULTIOBJECTIVE COORDINATION MODELS FOR MAINTENANCE AND SERVICE PARTS INVENTORY PLANNING AND CONTROL.
- Creator
-
Martinez, Oscar, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
In many equipment-intensive organizations in the manufacturing, service and particularly the defense sectors, service parts inventories constitute a significant source of tactical and operational costs and consume a significant portion of capital investment. For instance, the Defense Logistics Agency manages about 4 million consumable service parts and provides about 93% of all consumable service parts used by the military services. These items required about US$1.9 billion over the fiscal years 1999-2002. During the same time, the US General Accountability Office discovered that, in the United States Navy, there were about 3.7 billion ship and submarine parts that were not needed. The Federal Aviation Administration says that 26 million aircraft parts are changed each year. In 2002, the holding cost of service parts for the aviation industry was estimated to be US$50 billion. The US Army Institute of Land Warfare reports that, at the beginning of the 2003 fiscal year, prior to Operation Iraqi Freedom, the aviation service parts inventory alone was in excess of US$1 billion. This situation makes the management of these items a very critical tactical and strategic issue that is worthy of further study. The key challenge is to maintain high equipment availability with low service cost (e.g., holding, warehousing, transportation, technicians, overhead, etc.). For instance, despite reporting US$10.5 billion in appropriations spent on purchasing service parts in 2000, the United States Air Force (USAF) continues to report shortages of service parts. The USAF estimates that, if the investment on service parts decreases to about US$5.3 billion, weapons systems availability would range from 73 to 100 percent. Thus, better management of service parts inventories should create opportunities for cost savings through the efficient management of these inventories. Unfortunately, service parts belong to a class of inventory that makes them continually difficult to manage. Moreover, it can be said that the general function of service parts inventories is to support maintenance actions; therefore, service parts inventory policies are highly related to the resident maintenance policies. However, the interrelationship between service parts inventory management and maintenance policies is often overlooked, both in practice and in the academic literature, when it comes to optimizing maintenance and service parts inventory policies. Hence, there exists a great divide between maintenance and service parts inventory theory and practice. This research investigation specifically considers the aspect of joint maintenance and service part inventory optimization. We decompose the joint maintenance and service part inventory optimization problem into the supplier's problem and the customer's problem. Long-run expected cost functions for each problem that include the most common maintenance cost parameters and service parts inventory cost parameters are presented. Computational experiments are conducted for a single-supplier, two-echelon service parts supply chain configuration, varying the number of customers in the network. Lateral transshipments (LTs) of service parts between customers are not allowed. For this configuration, we optimize the cost functions using a traditional, or decoupled, approach, where each supply chain entity optimizes its cost individually, and a joint approach, where the cost objectives of both the supplier and customers are optimized simultaneously.
We show that the multiple objective optimization approach outperforms the traditional decoupled optimization approach by generating lower system-wide supply chain network costs. The model formulations are extended by relaxing the assumption of no LTs between customers in the supply chain network. Similar to those for the no-LTs configuration, the results for the LTs configuration show that the multiobjective optimization outperforms the decoupled optimization in terms of system-wide cost. Hence, it is economically beneficial to jointly consider all parties within the supply network. Further, we compare the LTs and no-LTs model configurations, and we show that using LTs improves the overall savings of the system. It is observed that the improvement is mostly derived from reduced shortage costs, since equipment downtime is reduced due to the proximity of the supply. The models and results of this research have significant practical implications, as they can be used to assist decision-makers in determining when and where to pre-position parts inventories to maximize equipment availability. Furthermore, these models can assist in the preparation of the terms of long-term service agreements and maintenance contracts between original equipment manufacturers and their customers (i.e., equipment owners and/or operators), including determining the equitable allocation of all system-wide cost savings under the agreement.
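The decoupled-versus-joint comparison can be illustrated with a toy grid search over one supplier decision (a base-stock level) and one customer decision (a maintenance interval). The cost functions below are simple hypothetical stand-ins for the dissertation's long-run expected cost functions; by construction the joint search can never do worse than the decoupled one, which mirrors the reported result.

```python
import itertools

# Hypothetical cost functions for one supplier and one customer, chosen only
# to illustrate decoupled vs. joint optimization of the system-wide cost.
def supplier_cost(stock_level, maint_interval):
    holding = 2.0 * stock_level
    shortage = 80.0 / (stock_level * maint_interval)   # fewer stockouts with more stock
    return holding + shortage

def customer_cost(stock_level, maint_interval):
    maintenance = 120.0 / maint_interval                # frequent maintenance is costly
    downtime = 15.0 * maint_interval / stock_level      # long intervals risk downtime
    return maintenance + downtime

STOCKS = range(1, 21)
INTERVALS = range(1, 13)

# Decoupled: the supplier fixes its stock level assuming a nominal interval,
# then the customer optimizes its interval given that stock level.
nominal_interval = 6
s_dec = min(STOCKS, key=lambda s: supplier_cost(s, nominal_interval))
m_dec = min(INTERVALS, key=lambda m: customer_cost(s_dec, m))
decoupled_total = supplier_cost(s_dec, m_dec) + customer_cost(s_dec, m_dec)

# Joint: both decision variables are chosen to minimize the system-wide cost.
s_joint, m_joint = min(
    itertools.product(STOCKS, INTERVALS),
    key=lambda sm: supplier_cost(*sm) + customer_cost(*sm),
)
joint_total = supplier_cost(s_joint, m_joint) + customer_cost(s_joint, m_joint)

print(f"decoupled: stock={s_dec}, interval={m_dec}, total cost={decoupled_total:.2f}")
print(f"joint:     stock={s_joint}, interval={m_joint}, total cost={joint_total:.2f}")
```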
- Date Issued
- 2008
- Identifier
- CFE0002459, ucf:47723
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002459
- Title
- MULTIOBJECTIVE SIMULATION OPTIMIZATION USING ENHANCED EVOLUTIONARY ALGORITHM APPROACHES.
- Creator
-
Eskandari, Hamidreza, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective optimization problem formulation into a single objective optimization problem with a single solution. However, the drawbacks to these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments when the objectives take on uncertain (or "noisy") values due to random influences within the system being optimized, which is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, "black-box" objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging fast to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments. SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely-considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally-expensive simulation modeling applications.
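The Pareto-dominance concept that FPGA, SPGA and NSGA-II all build on can be shown in a few lines. This is only the basic deterministic dominance check and nondominated filter, assuming minimization objectives; it is not the new stochastic dominance concepts or the ranking strategy proposed in the dissertation.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the Pareto front of a list of objective vectors."""
    front = []
    for p in points:
        if not any(dominates(q, p) for q in points if q is not p):
            front.append(p)
    return front

# Example: candidate solutions evaluated on two conflicting objectives.
candidates = [(1.0, 9.0), (2.0, 7.5), (3.0, 8.0), (4.0, 4.0), (6.0, 3.5), (7.0, 6.0)]
print(nondominated(candidates))   # (3.0, 8.0) and (7.0, 6.0) are dominated
```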
- Date Issued
- 2006
- Identifier
- CFE0001283, ucf:46905
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001283
- Title
- A DECISION SUPPORT SYSTEM METHODOLOGY FOR THE SELECTION OF RAPID PROTOTYPING TECHNOLOGIES FOR INVESTMENT-CAST GAS TURBINE PARTS.
- Creator
-
Gallagher, Angela, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
In the power generation sector, more specifically, the gas turbine industry, competition has forced the lead time-to-market for product advancements to be more important than ever. For design engineers, this means that product design iterations and final product development must be completed within both critical time windows and budgetary constraints. Therefore, two areas that have received significant attention in research and in practice are: (1) rapid prototyping technology development, and (2) rapid prototyping technology selection. Rapid prototyping technology selection is the focus of this research. In practice, selecting the rapid prototyping method that is acceptable for a specific design application is a daunting task. With technological advancements in both rapid prototyping and conventional machining methods, it is difficult for both a novice and an experienced design engineer to decide not only what rapid prototyping method could be applicable, but also whether a rapid prototyping method would even be advantageous over a more conventional machining method and where in the manufacturing process any of these processes would be utilized. This research proposes an expert system that assists a design engineer through the decision process relating to the investment casting of a superalloy gas turbine engine component. Investment casting is a well-known technique for the production of many superalloy gas turbine parts, such as gas turbine blades and vanes. In fact, investment-cast turbine blades remain the state of the art in gas turbine blade design. The proposed automated expert system allows the engineer to effectively assess rapid prototyping opportunities for a desired gas turbine blade application. The system serves as a starting point in presenting an engineer with commercially-available, state-of-the-art rapid prototyping options, brief explanations of each option and the advantages and disadvantages of each option. It is not intended to suggest an optimal solution, as there is no single unique answer. For instance, cost and time factors vary depending upon the individual needs of a company at any particular time as well as existing strategic partnerships with particular foundries and vendors. The performance of the proposed expert system is assessed using two real-world case studies. The first case study shows how the expert system can advise the design engineer when suggesting rapid manufacturing in place of investment casting. The second case study shows how rapid prototyping can be used for creating part patterns for use within the investment casting process. The results from these case studies are telling in that their implementations potentially result in an 82 to 94% reduction in design decision lead time and a 92 to 97% cost savings.
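A rule-lookup sketch of how an expert-system front end of this kind might map stated requirements to candidate options with their advantages and disadvantages. The processes, conditions and trade-off notes below are generic placeholders and do not reproduce the dissertation's knowledge base.

```python
# Minimal rule-based lookup in the spirit of a decision-support front end.
# All entries are illustrative, not the dissertation's actual rules.
KNOWLEDGE_BASE = [
    {
        "process": "Rapid prototyped pattern for investment casting",
        "when": lambda req: req["use"] == "casting_pattern" and req["qty"] <= 10,
        "pros": "fine feature resolution; no hard tooling",
        "cons": "pattern burnout must be controlled during shell processing",
    },
    {
        "process": "Direct rapid manufacturing of the prototype part",
        "when": lambda req: req["use"] == "design_iteration" and not req["hot_section"],
        "pros": "shortest lead time for form/fit checks",
        "cons": "material properties differ from the cast superalloy",
    },
    {
        "process": "Conventional machining / tooling route",
        "when": lambda req: req["qty"] > 10,
        "pros": "well understood; scales to larger quantities",
        "cons": "longest lead time for the first article",
    },
]

def advise(requirements):
    """Return every rule whose condition matches the stated requirements."""
    return [rule for rule in KNOWLEDGE_BASE if rule["when"](requirements)]

request = {"use": "casting_pattern", "qty": 3, "hot_section": True}
for rule in advise(request):
    print(rule["process"], "|", rule["pros"], "|", rule["cons"])
```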
- Date Issued
- 2010
- Identifier
- CFE0003338, ucf:48469
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003338
- Title
- MODELING LANE-BASED TRAFFIC FLOW IN EMERGENCY SITUATIONS IN THE PRESENCE OF MULTIPLE HETEROGENEOUS FLOWS.
- Creator
-
Saleh, Amani, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
In recent years, natural, man-made and technological disasters have been increasing in magnitude and frequency of occurrence. Terrorist attacks have increased since September 11, 2001. Some authorities suggest that global warming is partly to blame for the increase in frequency of natural disasters, such as the series of hurricanes in the early 2000s. Furthermore, there has been noticeable growth in population within many metropolitan areas not only in the US but also worldwide. These and other facts motivate the need for better emergency evacuation route planning (EERP) approaches in order to minimize the loss of human lives and property. This research considers aspects of evacuation routing never before considered in research and, more importantly, in practice. Previous EERP models only consider either unidirectional evacuee flow from the source of a hazard to destinations of safety or unidirectional emergency first responder flow to the hazard source. However, in real-life emergency situations, these heterogeneous, incompatible flows occur simultaneously over a bi-directional, capacitated, lane-based travel network, especially in unanticipated emergencies. By incompatible, it is meant that the two different flows cannot occupy a given lane or merge or crossing point in the travel network at the same time. In addition, in large-scale evacuations, travel lane normal flow directions can be reversed dynamically to their contraflow directions depending upon the degree of the emergency. These characteristics provide the basis for this investigation. This research considers the multiple flow EERP problem where the network travel lanes can be reconfigured using contraflow lane reversals. The first flow is vehicular flow of evacuees from the source of a hazard to destinations of safety, and the second flow is the emergency first responders to the hazard source. After presenting a review of the work related to the multiple flow EERP problem, mathematical formulations are proposed for three variations of the EERP problem, where the objective for each problem is to identify an evacuation plan (i.e., a flow schedule and network contraflow lane configuration) that minimizes network clearance time. Before the proposed formulations, the evacuation problem that considers only the flow of evacuees out of the network, which is viewed as a maximum flow problem, is formulated as an integer linear program. Then, the first proposed model formulation, which addresses the problem that considers the flow of evacuees under contraflow conditions, is presented. Next, the proposed formulation is expanded to consider the flow of evacuees and responders through the network, but under normal flow conditions. Lastly, the two-flow problem of evacuees and responders under contraflow conditions is formulated. Using real-world population and travel network data, the EERP problems are each solved to optimality; however, the time required to solve the problems increases exponentially as the problem grows in size and complexity. Due to the intractable nature of the problems as the size of the network increases, a genetic-based heuristic solution procedure that generates evacuation network configurations of reasonable quality is proposed. The proposed heuristic solution approach generates evacuation plans in the order of minutes, which is desirable in emergency situations and needed to allow for immediate evacuation routing plan dissemination and implementation in the targeted areas.
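The evacuee-only maximum-flow view described above can be illustrated on a toy network: each reversible lane either keeps its normal (inbound) direction or is flipped to contraflow, and the configuration maximizing outbound evacuee flow is found by brute force. The network, capacities and the use of networkx are illustrative assumptions; the dissertation formulates integer linear programs and a genetic heuristic rather than enumerating configurations.

```python
import itertools
import networkx as nx

# Toy network: each tuple is (u, v, lane_capacity) for one directed travel
# lane; the paired opposite lane can be reversed to contraflow.
LANES = [("hazard", "a", 30), ("a", "safe", 25), ("hazard", "b", 20), ("b", "safe", 35)]
REVERSIBLE = [("a", "hazard", 30), ("safe", "a", 25), ("b", "hazard", 20), ("safe", "b", 35)]

def evacuee_max_flow(reversed_lanes):
    """Max evacuee flow from the hazard zone to safety for one lane configuration."""
    G = nx.DiGraph()
    for u, v, cap in LANES:
        G.add_edge(u, v, capacity=G.get_edge_data(u, v, {"capacity": 0})["capacity"] + cap)
    for (u, v, cap), flip in zip(REVERSIBLE, reversed_lanes):
        a, b = (v, u) if flip else (u, v)           # flipped lanes carry outbound flow
        prev = G.get_edge_data(a, b, {"capacity": 0})["capacity"]
        G.add_edge(a, b, capacity=prev + cap)
    return nx.maximum_flow_value(G, "hazard", "safe")

best = max(itertools.product([False, True], repeat=len(REVERSIBLE)), key=evacuee_max_flow)
print("best contraflow pattern:", best, "flow:", evacuee_max_flow(best))
```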
- Date Issued
- 2008
- Identifier
- CFE0002168, ucf:47512
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002168
- Title
- A TAXONOMY OF LEAN SIX SIGMA SUCCESS FACTORS FOR SERVICE ORGANIZATIONS.
- Creator
-
Hajikordestani, Reza, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Six Sigma is a business improvement strategy that aims to improve process performance using a structured methodology that identifies and removes the causes of defects in manufacturing and business processes, while Lean concepts aim to remove wasteful activities from those processes. In practice, the Six Sigma strategy and the Lean philosophy are combined and often viewed as one integrated philosophy, where the philosophy of Lean Six Sigma simultaneously removes wasteful activities from a process and reduces the variability of that process. This thesis research reviews the concepts and implementation of Lean thinking, the Six Sigma strategy, and the integrated concept of Lean Six Sigma, with emphasis on service organizations. Most importantly, this thesis summarizes the critical success factors for implementing Lean Six Sigma within a service business environment and categorizes them within a proposed multi-level taxonomy that can be used by service business units and service providers to improve the success of Lean Six Sigma implementation.
- Date Issued
- 2010
- Identifier
- CFE0003526, ucf:48966
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003526
- Title
- EMERGENCY EVACUATION ROUTE PLANNING CONSIDERING HUMAN BEHAVIOR DURING SHORT- AND NO-NOTICE EMERGENCY SITUATIONS.
- Creator
-
Kittirattanapaiboon, Suebpong, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Throughout United States and world history, disasters have caused not only significant loss of life and property but also enormous financial loss. The tsunami that occurred on December 26, 2004 is a telling example of the devastation that can occur unexpectedly. An unexpected natural event of this kind had never happened before in this area. In addition, there was a lack of an emergency response plan for events of that magnitude. Therefore, this event resulted not only in a natural catastrophe for the people of South and Southeast Asia, but it is also considered one of the greatest natural disasters in world history. After the giant wave dissipated, there were more than 230,000 people dead and more than US$10 billion in property damage and loss. Another significant event was the terrorist incident on September 11, 2001 (commonly referred to as 9/11) in the United States. This event was unexpected and unnatural, i.e., man-made. It resulted in approximately 3,000 lives lost and about US$21 billion in property damage. These and other unexpected (or unanticipated) events give emergency management officials little or no notice to prevent or respond to the situation. These and other facts motivate the need for better emergency evacuation route planning (EERP) approaches in order to minimize the loss of human lives and property in short- or no-notice emergency situations. This research considers aspects of evacuation routing that have received little attention in research and, more importantly, in practice. Previous EERP models only consider either unidirectional evacuee flow from the source of a hazard to destinations of safety or unidirectional emergency first responder flow to the hazard source. However, in real-life emergency situations, these heterogeneous, incompatible flows occur simultaneously over a bi-directional, capacitated, lane-based travel network, especially in short- and no-notice emergencies. After presenting a review of the work related to the multiple flow EERP problem, mathematical formulations are presented for the EERP problem, where the objective for each problem is to identify an evacuation routing plan (i.e., a traffic flow schedule) that maximizes evacuee and responder flow and minimizes network clearance time of both types of flow. In addition, we integrate the general human response behavior flow pattern, where the cumulative flow behavior follows different degrees of an S-shaped curve depending upon the level of the evacuation order. We extend the analysis to consider potential traffic flow conflicts between the two types of flow under these conditions. A conflict occurs when flows of different types occupy a roadway segment at the same time. Further, with different degrees of flow movement for both evacuee and responder flows, the identification of points of flow congestion on the roadway segments within the transportation network is investigated.
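The S-shaped cumulative response behavior mentioned above is commonly represented with a logistic loading curve; a minimal sketch follows. The mapping of evacuation-order levels to curve parameters, and the parameter values themselves, are illustrative assumptions, not the dissertation's calibrated curves.

```python
import math

def cumulative_response(t, total_evacuees, t_half, steepness):
    """Logistic S-curve: number of evacuees who have departed by time t (hours)."""
    return total_evacuees / (1.0 + math.exp(-steepness * (t - t_half)))

# Higher evacuation-order levels are modeled here as steeper, earlier curves.
ORDER_LEVELS = {"voluntary": (24.0, 0.15), "recommended": (18.0, 0.30), "mandatory": (12.0, 0.60)}

for level, (t_half, k) in ORDER_LEVELS.items():
    loading = [round(cumulative_response(t, 10_000, t_half, k)) for t in range(0, 37, 6)]
    print(f"{level:>11}: {loading}")
```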
- Date Issued
- 2009
- Identifier
- CFE0002645, ucf:48229
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002645
- Title
- A Production and Cost Modeling Methodology of 2nd Generation Biofuel in the United States.
- Creator
-
Poole, David, Kincaid, John, Mollaghasemi, Mansooreh, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
The use of biofuels in the United States has increased dramatically in the last few years. The largest source of feedstock for ethanol to date has been corn. However, corn is also a vitally important food crop and is used commonly as feed for cattle and other livestock. To prevent further diversion of an important food crop to production of ethanol, there is great interest in developing commercial-scale technologies to make ethanol from non-food crops, or other suitable plant material. This is commonly referred to as biomass. A review is made of lignocellulosic sources being considered as feedstocks to produce ethanol. Current technologies for pretreatment and hydrolysis of the biomass material are examined and discussed. Production data and cost estimates are culled from the literature and used to assist in the development of mathematical models for evaluation of production ramp-up profiles and cost estimation. These mathematical models are useful as a planning tool and provide a methodology to estimate monthly production output and the costs for labor, capital, operations and maintenance, feedstock and raw materials, as well as total cost. Existing credits for ethanol production are also considered and modeled. The production output in liters is modeled as a negative exponential growth curve, with a rate coefficient providing the ability to evaluate slower, or faster, growth in production output and its corresponding effect on monthly cost. The capital and labor costs per unit of product are determined by dividing the monthly debt service and labor costs by that month's production value. The remaining cost components change at a constant rate in the simulation case studies. This methodology is used to calculate production levels and costs as a function of time for a 25 million gallon per year capacity cellulosic ethanol plant. The parameters of interest are calculated in MATLAB with a deterministic, continuous system simulation model. Simulation results for high, medium, and low cost case studies are included. Assumptions for the model and for each case study are included, and some comparisons are made to cost estimates in the literature. While the cost per unit of product decreases and production output increases over time, some reasonable cost values are obtained by the end of the second year for both the low and medium cost case studies. By the end of Year 2, total costs for those case studies are $0.48 per liter and $0.88 per liter, respectively. These cost estimates are well within the reported range of values from the reviewed literature sources. Differing assumptions for the calculations made by different sources make a direct cost comparison with the outputs of this modeling methodology extremely difficult. Proposals for reducing costs are introduced. Limitations and shortcomings of the research activity are discussed, along with recommendations for potential future work in improving the simulation model and model verification activities. In summary, the author was not able to find evidence, within the public domain, of any similar modeling and simulation methodology that uses a deterministic, continuous simulation model to evaluate production and costs as a function of time. This methodology is also unique in highlighting the important effect of production ramp-up on monthly costs for capital (debt service) and labor. The resultant simulation model can be used for planning purposes and provides an independent, unbiased estimate of cost as a function of time.
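The negative-exponential ramp-up and the monthly unit-cost calculation described above can be sketched in a few lines (the dissertation implements its model in MATLAB; Python is used here only for illustration). The capacity conversion follows the stated 25 million gallon per year plant, but the fixed monthly costs, variable cost per liter and ramp-up rate are hypothetical placeholders, not the case-study inputs.

```python
import math

LITERS_PER_GALLON = 3.78541
MONTHLY_CAPACITY = 25_000_000 * LITERS_PER_GALLON / 12   # roughly 7.9 million liters/month

# Illustrative monthly fixed costs and constant per-liter costs.
MONTHLY_DEBT_SERVICE = 1_200_000   # US$
MONTHLY_LABOR = 400_000            # US$
VARIABLE_COST_PER_LITER = 0.35     # feedstock, raw materials, O&M combined

def monthly_output(month, rate=0.15):
    """Negative exponential ramp-up toward nameplate capacity."""
    return MONTHLY_CAPACITY * (1.0 - math.exp(-rate * month))

def unit_cost(month):
    """Total cost per liter in a given month of the ramp-up."""
    liters = monthly_output(month)
    fixed_per_liter = (MONTHLY_DEBT_SERVICE + MONTHLY_LABOR) / liters
    return fixed_per_liter + VARIABLE_COST_PER_LITER

for month in (3, 6, 12, 24):
    print(f"month {month:>2}: {monthly_output(month)/1e6:5.2f} M liters, "
          f"US${unit_cost(month):.2f}/liter")
```

Because debt service and labor are divided by a growing production volume, the per-liter cost falls as the ramp-up progresses, which is the effect the abstract highlights.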
- Date Issued
- 2012
- Identifier
- CFE0004424, ucf:49321
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004424
- Title
- MULTIOBJECTIVE DESIGN OPTIMIZATION OF GAS TURBINE BLADE WITH EMPHASIS ON INTERNAL COOLING.
- Creator
-
Nagaiah, Narasimha, Geiger, Christopher, Nazzal, Dima, Reilly, Charles, Kapat, Jayanta, University of Central Florida
- Abstract / Description
-
In the design of mechanical components, numerical simulations and experimental methods are commonly used for design creation (or modification) and design optimization. However, a major challenge of using simulation and experimental methods is that they are time-consuming and often cost-prohibitive for the designer. In addition, the simultaneous interactions between aerodynamic, thermodynamic and mechanical integrity objectives for a particular component or set of components are difficult to accurately characterize, even with the existing simulation tools and experimental methods. The current research and practice of using numerical simulations and experimental methods do little to address the simultaneous "satisficing" of multiple and often conflicting design objectives that influence the performance and geometry of a component. This is particularly the case for gas turbine systems that involve a large number of complex components with complicated geometries. Numerous experimental and numerical studies have demonstrated success in generating effective designs for mechanical components; however, their focus has been primarily on optimizing a single design objective based on a limited set of design variables and associated values. In this research, a multiobjective design optimization framework to solve a set of user-specified design objective functions for mechanical components is proposed. The framework integrates a numerical simulation and a nature-inspired optimization procedure that iteratively perturbs a set of design variables, eventually converging to a set of tradeoff design solutions. In this research, a gas turbine engine system is used as the test application for the proposed framework. More specifically, the optimization of the gas turbine blade internal cooling channel configuration is performed. This test application is quite relevant, as gas turbine engines serve a critical role in the design of the next-generation power generation facilities around the world. Furthermore, turbine blades require better cooling techniques to increase their cooling effectiveness in order to cope with increasing engine operating temperatures and extend the useful life of the blades. The performance of the proposed framework is evaluated via a computational study, where a set of common, real-world design objectives and a set of design variables that directly influence the set of objectives are considered. Specifically, three objectives are considered in this study: (1) cooling channel heat transfer coefficient, which measures the rate of heat transfer, where the goal is to maximize this value; (2) cooling channel air pressure drop, where the goal is to minimize this value; and (3) cooling channel geometry, specifically the cooling channel cavity area, where the goal is to maximize this value. These objectives, which are conflicting, directly influence the cooling effectiveness of a gas turbine blade and the material usage in its design. The computational results show the proposed optimization framework is able to generate, evaluate and identify thousands of competitive tradeoff designs in a fraction of the time that it would take designers using the traditional simulation tools and experimental methods commonly used for mechanical component design generation. This is a significant step beyond the current research and applications of design optimization to gas turbine blades, specifically, and to mechanical components, in general.
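The perturb-and-archive loop described above can be sketched as follows, using the three stated objective senses (maximize heat transfer coefficient, minimize pressure drop, maximize cavity area). The surrogate evaluation function, design variables, bounds and step size are purely hypothetical stand-ins for the CFD-based simulation and the dissertation's nature-inspired operators.

```python
import random

def evaluate(design):
    """Synthetic surrogate standing in for the simulation.  Returns
    (heat_transfer_coeff, pressure_drop, cavity_area); formulas are placeholders."""
    rib_height, rib_pitch, channel_width = design
    h = 400 * rib_height / rib_pitch + 50 * channel_width
    dp = 8 * rib_height**2 / channel_width + 2 / rib_pitch
    area = channel_width * (1.0 - rib_height)
    return h, dp, area

def dominates(a, b):
    # Objective senses: maximize h, minimize dp, maximize area.
    better_eq = a[0] >= b[0] and a[1] <= b[1] and a[2] >= b[2]
    strictly = a[0] > b[0] or a[1] < b[1] or a[2] > b[2]
    return better_eq and strictly

def perturb(design, rng, step=0.05):
    bounds = [(0.05, 0.5), (0.2, 2.0), (0.2, 1.0)]   # bounds on the design variables
    return tuple(min(hi, max(lo, x + rng.uniform(-step, step)))
                 for x, (lo, hi) in zip(design, bounds))

rng = random.Random(0)
archive = []                     # (design, objectives) pairs kept mutually nondominated
design = (0.2, 1.0, 0.6)
for _ in range(2000):
    design = perturb(design, rng)
    objs = evaluate(design)
    if not any(dominates(o, objs) for _, o in archive):
        archive = [(d, o) for d, o in archive if not dominates(objs, o)]
        archive.append((design, objs))
print(f"{len(archive)} tradeoff designs retained")
```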
- Date Issued
- 2012
- Identifier
- CFE0004495, ucf:49282
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004495
- Title
- Simulation-Based Cognitive Workload Modeling and Evaluation of Adaptive Automation Invoking and Revoking Strategies.
- Creator
-
Rusnock, Christina, Geiger, Christopher, Karwowski, Waldemar, Xanthopoulos, Petros, Reinerman, Lauren, University of Central Florida
- Abstract / Description
-
In human-computer systems, such as supervisory control systems, large volumes of incoming and complex information can degrade overall system performance. Strategically integrating automation to offload tasks from the operator has been shown to increase not only human performance but also operator efficiency and safety. However, increased automation allows for increased task complexity, which can lead to high cognitive workload and degradation of situational awareness. Adaptive automation is one potential solution to resolve these issues, while maintaining the benefits of traditional automation. Adaptive automation occurs dynamically, with the quantity of automated tasks changing in real-time to meet performance or workload goals. While numerous studies evaluate the relative performance of manual and adaptive systems, little attention has focused on the implications of selecting particular invoking or revoking strategies for adaptive automation. Thus, evaluations of adaptive systems tend to focus on the relative performance among multiple systems rather than the relative performance within a system. This study takes an intra-system approach, specifically evaluating the relationship between cognitive workload and situational awareness that occurs when selecting a particular invoking-revoking strategy for an adaptive system. The case scenario is a human supervisory control situation that involves a system operator who receives and interprets intelligence outputs from multiple unmanned assets, and then identifies and reports potential threats and changes in the environment. In order to investigate this relationship between workload and situational awareness, discrete event simulation (DES) is used. DES is a standard technique in the analysis of systems, and the advantage of using DES to explore this relationship is that it can represent a human-computer system as the state of the system evolves over time. Furthermore, and most importantly, a well-designed DES model can represent the human operators, the tasks to be performed, and the cognitive demands placed on the operators. In addition to evaluating the cognitive workload to situational awareness tradeoff, this research demonstrates that DES can quite effectively model and predict human cognitive workload, specifically for system evaluation. This research finds that the predicted workload of the DES models highly correlates with well-established subjective measures and is more predictive of cognitive workload than numerous physiological measures. This research then uses the validated DES models to explore and predict the cognitive workload impacts of adaptive automation through various invoking and revoking strategies. The study provides insights into the workload-situational awareness tradeoffs that occur when selecting particular invoking and revoking strategies. First, in order to establish an appropriate target workload range, it is necessary to account for both performance goals and the portion of the workload-performance curve for the task in question. Second, establishing an invoking threshold may require a tradeoff between workload and situational awareness, which is influenced by the task's location on the workload-situational awareness continuum. Finally, this study finds that revoking strategies differ in their ability to achieve workload and situational awareness goals.
For the case scenario examined, revoking strategies based on duration are best suited to improve workload, while revoking strategies based on revoking thresholds are better for maintaining situational awareness.
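A simplified, time-stepped sketch of the invoking/revoking logic described above: workload accumulates as tasks arrive, automation is invoked above a workload threshold, and it is revoked either after a fixed duration or once workload falls below a lower threshold. This is not the dissertation's validated DES model; the arrival rate, workload weights and thresholds are illustrative placeholders.

```python
import random

def simulate(revoke_rule, minutes=480, seed=7):
    """Time-stepped sketch of an operator whose new tasks are offloaded to
    automation while it is invoked.  All parameters are illustrative."""
    rng = random.Random(seed)
    INVOKE_AT, REVOKE_AT, REVOKE_AFTER = 7.0, 4.0, 30   # workload units / minutes
    workload, automated_until, auto_on = 0.0, 0, False
    trace = []
    for t in range(minutes):
        if rng.random() < 0.30 and not auto_on:      # a new task arrives this minute
            workload += rng.uniform(1.0, 3.0)        # task adds cognitive demand
        workload = max(0.0, workload - 0.5)          # operator works demand off
        if not auto_on and workload >= INVOKE_AT:
            auto_on, automated_until = True, t + REVOKE_AFTER
        elif auto_on:
            if revoke_rule == "duration" and t >= automated_until:
                auto_on = False
            if revoke_rule == "threshold" and workload <= REVOKE_AT:
                auto_on = False
        trace.append(workload)
    return sum(trace) / len(trace)

for rule in ("duration", "threshold"):
    print(f"revoking by {rule}: mean workload = {simulate(rule):.2f}")
```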
- Date Issued
- 2013
- Identifier
- CFE0004927, ucf:49607
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004927
- Title
- A Posteriori and Interactive Approaches for Decision-Making with Multiple Stochastic Objectives.
- Creator
-
Bakhsh, Ahmed, Geiger, Christopher, Mollaghasemi, Mansooreh, Xanthopoulos, Petros, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
Computer simulation is a popular method that is often used as a decision support tool in industry to estimate the performance of systems too complex for analytical solutions. It is a tool that assists decision-makers in improving organizational performance and achieving performance objectives, in which simulated conditions can be randomly varied so that critical situations can be investigated without real-world risk. Due to the stochastic nature of many of the input process variables in simulation models, the outputs from the simulation model experiments are random. Thus, experimental runs of computer simulations yield only estimates of the values of performance objectives, where these estimates are themselves random variables. Most real-world decisions involve the simultaneous optimization of multiple, and often conflicting, objectives. Researchers and practitioners use various approaches to solve these multiobjective problems. Many approaches that integrate simulation models with stochastic multiple objective optimization algorithms have been proposed, many of which use Pareto-based approaches that generate a finite set of compromise, or tradeoff, solutions. Nevertheless, identification of the most preferred solution can be a daunting task for the decision-maker and is an order of magnitude harder in the presence of stochastic objectives. However, to the best of this researcher's knowledge, there have been no focused efforts or existing work that attempt to reduce the number of tradeoff solutions while considering the stochastic nature of a set of objective functions. In this research, two approaches that consider multiple stochastic objectives when reducing the set of tradeoff solutions are designed and proposed. The first proposed approach is an a posteriori approach, which uses a given set of Pareto optima as input. The second approach is an interactive-based approach that articulates decision-maker preferences during the optimization process. A detailed description of both approaches is given, and computational studies are conducted to evaluate the efficacy of the two approaches. The computational results show the promise of the proposed approaches, in that each approach effectively reduces the set of compromise solutions to a reasonably manageable size for the decision-maker. This is a significant step beyond current applications of the decision-making process in the presence of multiple stochastic objectives and should serve as an effective approach to support decision-making under uncertainty.
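One simple way to picture reducing a tradeoff set under stochastic objectives is to keep only solutions that are not dominated by a statistically clear margin, estimated from replicated simulation outputs. This filter is an illustrative assumption in the spirit of an a posteriori reduction, not one of the specific procedures proposed in the dissertation.

```python
import statistics

def mean_and_se(samples):
    """Per-objective mean and standard error across replications."""
    m = [statistics.mean(col) for col in zip(*samples)]
    se = [statistics.stdev(col) / len(col) ** 0.5 for col in zip(*samples)]
    return m, se

def clearly_dominates(mean_a, se_a, mean_b, se_b, z=1.96):
    """A dominates B only when every objective mean of A beats B's by more
    than a z * pooled-standard-error margin (minimization assumed)."""
    margins = [z * (sa**2 + sb**2) ** 0.5 for sa, sb in zip(se_a, se_b)]
    return all(ma + m < mb for ma, mb, m in zip(mean_a, mean_b, margins))

def reduce_tradeoff_set(replications):
    """Keep indices of solutions not clearly dominated by any other solution."""
    stats = [mean_and_se(s) for s in replications]
    keep = []
    for i, (mi, si) in enumerate(stats):
        if not any(clearly_dominates(mj, sj, mi, si)
                   for j, (mj, sj) in enumerate(stats) if j != i):
            keep.append(i)
    return keep

# Each solution has several simulation replications of (objective1, objective2).
solutions = [
    [(10.2, 5.1), (9.8, 5.3), (10.0, 4.9)],
    [(10.1, 5.0), (10.3, 5.2), (9.9, 5.1)],     # statistically tied with the first
    [(14.0, 8.0), (13.5, 8.2), (14.2, 7.9)],    # clearly worse on both objectives
]
print("retained solution indices:", reduce_tradeoff_set(solutions))
```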
- Date Issued
- 2013
- Identifier
- CFE0004973, ucf:49574
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004973
- Title
- Shop Scheduling in the Presence of Batching, Sequence-Dependent Setups and Incompatible Job Families Minimizing Earliness and Tardiness Penalties.
- Creator
-
Buchanan, Patricia, Geiger, Christopher, Mollaghasemi, Mansooreh, Pazour, Jennifer, Nazzal, Dima, University of Central Florida
- Abstract / Description
-
The motivation of this research investigation stems from a particular job shop production environment at a large international communications and information technology company in which electro-mechanical assemblies (EMAs) are produced. The production environment of the EMAs includes the continuous arrivals of the EMAs (generally called jobs), with distinct due dates, degrees of importance and routing sequences through the production workstations, to the job shop. Jobs are processed in batches at the workstations, and there are incompatible families of jobs, where jobs from different product families cannot be processed together in the same batch. In addition, there are sequence-dependent setups between batches at the workstations. Most importantly, it is imperative that all product deliveries arrive on time to their customers (internal and external) within their respective delivery time windows. Delivery is allowed outside a time window, but at the expense of a penalty. Completing a job and delivering the job before the start of its respective time window results in a penalty, i.e., inventory holding cost. Delivering a job after its respective time window also results in a penalty, i.e., delay cost or emergency shipping cost. This presents a unique scheduling problem where an earliness-tardiness composite objective is considered. This research approaches this scheduling problem by decomposing this complex job shop scheduling environment into bottleneck and non-bottleneck resources, with the primary focus on effectively scheduling the bottleneck resource. Specifically, the problem of scheduling jobs with unique due dates on a single workstation under the conditions of batching, sequence-dependent setups and incompatible job families in order to minimize weighted earliness and tardiness is formulated as an integer linear program. This scheduling problem, even in its simplest form, is NP-hard, meaning that no polynomial-time algorithm is known to solve it to optimality, especially as the number of jobs increases. As a result, the computational time to arrive at optimal solutions is not of practical use in industrial settings, where production scheduling decisions need to be made quickly. Therefore, this research explores and proposes new heuristic algorithms to solve this unique scheduling problem. The heuristics use order review and release strategies in combination with priority dispatching rules, which are a popular and commonly-used class of scheduling algorithms in real-world industrial settings. A computational study is conducted to assess the quality of the solutions generated by the proposed heuristics. The computational results show that, in general, the proposed heuristics produce solutions that are competitive with the optimal solutions, yet in a fraction of the time. The results also show that the proposed heuristics are superior in quality to a set of benchmark algorithms within this same class of heuristics.
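A toy sketch in the spirit of a priority-dispatching heuristic for this environment: jobs are sorted by earliest due date, batched within a family up to a batch capacity, a sequence-dependent setup is added whenever the family changes, and the schedule is scored by weighted earliness/tardiness against each job's delivery window. The job data, capacities, setup times and the greedy rule itself are hypothetical, not the dissertation's proposed heuristics.

```python
from collections import namedtuple

Job = namedtuple("Job", "name family due window proc weight")  # due = window start

JOBS = [
    Job("J1", "A", due=20, window=5, proc=4, weight=2.0),
    Job("J2", "A", due=22, window=5, proc=4, weight=1.0),
    Job("J3", "B", due=15, window=5, proc=6, weight=3.0),
    Job("J4", "B", due=40, window=5, proc=6, weight=1.0),
    Job("J5", "A", due=35, window=5, proc=4, weight=1.5),
]
BATCH_CAPACITY = 2
SETUP = {("A", "B"): 3, ("B", "A"): 5}      # sequence-dependent, family-to-family

def build_schedule(jobs):
    """Greedy earliest-due-date batching with family-change setups, scored by
    weighted earliness/tardiness against each job's time window."""
    batches, current = [], []
    for job in sorted(jobs, key=lambda j: j.due):
        if current and (job.family != current[0].family or len(current) == BATCH_CAPACITY):
            batches.append(current)
            current = []
        current.append(job)
    batches.append(current)

    t, penalty, prev_family = 0, 0.0, None
    for batch in batches:
        family = batch[0].family
        if prev_family is not None and prev_family != family:
            t += SETUP[(prev_family, family)]
        t += max(j.proc for j in batch)             # the batch finishes together
        for j in batch:
            if t < j.due:
                penalty += j.weight * (j.due - t)               # earliness (holding)
            elif t > j.due + j.window:
                penalty += j.weight * (t - (j.due + j.window))  # tardiness (delay)
        prev_family = family
    return batches, penalty

batches, penalty = build_schedule(JOBS)
print([[j.name for j in b] for b in batches], "weighted E/T penalty =", penalty)
```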
- Date Issued
- 2014
- Identifier
- CFE0005139, ucf:50717
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005139
- Title
- Agent-Based and System Dynamics Hybrid Modeling and Simulation Approach Using Systems Modeling Language.
- Creator
-
Soyler Akbas, Asli, Karwowski, Waldemar, Geiger, Christopher, Kincaid, John, Mikusinski, Piotr, University of Central Florida
- Abstract / Description
-
Agent-based (AB) and system dynamics (SD) modeling and simulation techniques have been studied and used by various research fields. After the new hybrid modeling field emerged, the combination of these techniques started receiving attention in the late 1990s. Applications of using agent-based (AB) and system dynamics (SD) hybrid models for simulating systems have been demonstrated in the literature. However, the majority of the work in this domain consists of system-specific approaches where the models from the two techniques are integrated after being independently developed. Existing work on creating an implicit and universal approach is limited to conceptual modeling and structure design. This dissertation proposes an approach for generating AB-SD hybrid models of systems by using the Systems Modeling Language (SysML), which can be simulated without exporting to another software platform. Although the approach is demonstrated using IBM's Rational Rhapsody®, it is applicable to all other SysML platforms. Furthermore, it does not require prior knowledge of agent-based or system dynamics modeling and simulation techniques, and it limits the use of any programming languages through the use of SysML diagram tools. The iterative modeling approach allows two-step validation, establishes two-way dynamic communication between AB and SD variables, and develops independent behavior models that can be reused in representing different systems. The proposed approach is demonstrated using hypothetical population and movie theater scenarios and a real-world training management scenario. In this setting, the work provides methods for independent behavior and system structure modeling. Finally, it provides behavior models for probabilistic behavior modeling and time synchronization.
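The two-way AB-SD coupling mentioned above, where agents read an SD stock and the stock's flows depend on aggregate agent behavior, can be pictured with a small pure-Python loop. The trainee/capacity framing loosely echoes the training management scenario, but the rates, decision rule and structure are illustrative assumptions and have nothing to do with SysML or Rational Rhapsody.

```python
import random

class Trainee:
    """Agent whose enrollment decision reads an SD variable (capacity utilization)."""
    def __init__(self, rng):
        self.rng = rng
        self.trained = False

    def decide(self, capacity_utilization):
        # AB step reads SD: enrollment is less likely when utilization is high.
        if not self.trained and self.rng.random() < 0.3 * (1.0 - capacity_utilization):
            self.trained = True
            return 1
        return 0

def run(months=24, n_agents=200, seed=3):
    rng = random.Random(seed)
    agents = [Trainee(rng) for _ in range(n_agents)]
    capacity, max_capacity = 50.0, 50.0        # SD stock and its ceiling
    history = []
    for _ in range(months):
        utilization = 1.0 - capacity / max_capacity
        enrolled = sum(a.decide(utilization) for a in agents)       # AB step
        # SD step reads an AB aggregate: enrollment drains the capacity stock,
        # while a constant replenishment flow restores it each month.
        capacity = min(max_capacity, capacity - 0.5 * enrolled + 8.0)
        capacity = max(0.0, capacity)
        history.append((enrolled, round(capacity, 1)))
    return history

print(run()[:6])
```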
- Date Issued
- 2015
- Identifier
- CFE0006399, ucf:51517
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006399
- Title
- Multi-Vehicle Dispatching and Routing with Time Window Constraints and Limited Dock Capacity.
- Creator
-
El-Nashar, Ahmed, Nazzal, Dima, Sepulveda, Jose, Geiger, Christopher, Hosni, Yasser, University of Central Florida
- Abstract / Description
-
The Vehicle Routing Problem with Time Windows (VRPTW) is an important and computationally hard optimization problem frequently encountered in scheduling and logistics. The Vehicle Routing Problem (VRP) can be described as the problem of designing the most efficient and economical routes from one depot to a set of customers using a limited number of vehicles. This research addresses the VRPTW under the following additional complicating features that are often encountered in practical problems:
1. Customers have strict time windows for receiving a vehicle, i.e., vehicles are not allowed to arrive at the customer's location earlier than the lower limit of the specified time window, which is relaxed in previous research work.
2. There is a limited number of loading/unloading docks for dispatching/receiving the vehicles at the depot.
The main goal of this research is to propose a framework for solving the VRPTW with the constraints stated above by generating near-optimal routes for the vehicles so as to minimize the total traveling distance. First, the proposed framework clusters customers into groups based on their proximity to each other. Second, a Probabilistic Route Generation (PRG) algorithm is applied to each cluster to find the best route for visiting customers by each vehicle; multiple routes per vehicle are generated, and each route is associated with a set of feasible dispatching times from the depot. Third, an assignment problem formulation determines the best dispatching time and route for each vehicle that minimizes the total traveling distance. The proposed algorithm is tested on a set of benchmark problems that were originally developed by Marius M. Solomon, and the results indicate that the algorithm works well with about 1.14% average deviation from the best-known solutions. The benchmark problems are then modified by adjusting some of the customer time window limits and adding the staggered vehicle dispatching constraint. For demonstration purposes, the proposed clustering and PRG algorithms are then applied to the modified benchmark problems.
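The first two steps of the framework, proximity clustering followed by probabilistic route construction, can be sketched as follows. The customer coordinates, the plain k-means-style clustering and the inverse-distance sampling rule are illustrative assumptions; they are not the Solomon benchmark data or the dissertation's PRG algorithm, and time windows and dock limits are omitted here.

```python
import math
import random

DEPOT = (0.0, 0.0)
CUSTOMERS = [(2, 8), (3, 7), (8, 2), (9, 3), (7, 1), (1, 9), (8, 8), (9, 9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cluster(points, k, iters=20, seed=5):
    """Plain k-means-style proximity clustering of customers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return groups

def probabilistic_route(customers, rng, bias=2.0):
    """Build one route by picking each next stop with probability proportional
    to (1 / distance) ** bias; returns the route and its round-trip length."""
    route, current, remaining = [], DEPOT, list(customers)
    while remaining:
        weights = [1.0 / (dist(current, c) ** bias + 1e-9) for c in remaining]
        nxt = rng.choices(remaining, weights=weights, k=1)[0]
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route, sum(dist(a, b) for a, b in zip([DEPOT] + route, route + [DEPOT]))

rng = random.Random(11)
for group in cluster(CUSTOMERS, k=3):
    best_route, best_len = min((probabilistic_route(group, rng) for _ in range(50)),
                               key=lambda rl: rl[1])
    print(best_route, round(best_len, 2))
```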
- Date Issued
- 2012
- Identifier
- CFE0004532, ucf:49233
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004532
- Title
- Systems Analysis for Urban Water Infrastructure Expansion with Global Change Impact under Uncertainties.
- Creator
-
Qi, Cheng, Chang, Ni-bin, Geiger, Christopher, Xanthopoulos, Petros, Wanielista, Martin, University of Central Florida
- Abstract / Description
-
Over the past decades, the cost-effectiveness principle or cost-benefit analysis has often been employed as the typical assessment tool for the expansion of a drinking water utility. With changing public awareness of the inherent linkages between climate change, population growth, and economic development, the addition of global change impact to the assessment regime has altered the landscape of the traditional evaluation matrix. Nowadays, urban drinking water infrastructure requires careful long-term expansion planning to reduce the risk from global change impact with respect to greenhouse gas (GHG) emissions, economic boom and recession, and water demand variation associated with population growth and migration. Meanwhile, accurate prediction of municipal water demand is critically important to a water utility in a fast-growing urban region for the purposes of drinking water system planning, design, and asset management. A systems analysis under global change impact due to population dynamics, water resources conservation, and environmental management policies should be carried out to search for sustainable solutions temporally and spatially at different scales under uncertainties. This study aims to develop an innovative, interdisciplinary, and insightful modeling framework that deals with global change issues as a whole, based on a real-world drinking water infrastructure system expansion program in Manatee County, Florida. Four intertwined components of drinking water infrastructure system planning were investigated and integrated: water demand analysis, GHG emission potential, system optimization for infrastructure expansion, and nested minimax-regret (NMMR) decision analysis under uncertainties. In the water demand analysis, a new system dynamics model was developed to reflect the intrinsic relationship between water demand and the changing socioeconomic environment. This system dynamics model is based on a coupled modeling structure that takes the interactions among economic and social dimensions into account, offering a satisfactory platform. In the evaluation of GHG emission potential, a life cycle assessment (LCA) is conducted to estimate the carbon footprint of all water supply expansion alternatives. The result of this LCA study provides an extra dimension for decision makers to extract more effective adaptation strategies. Both the water demand forecasts and the GHG emission potential were treated as input information for the system optimization, in which all alternatives are considered simultaneously. In the system optimization for infrastructure expansion, a multiobjective optimization model was formulated to provide multitemporal optimal facility expansion strategies. With the aid of a multi-stage planning methodology over the partitioned time horizon, this systems analysis has resulted in a full-scale screening and sequencing with respect to multiple competing objectives across a suite of management strategies. In the decision analysis under uncertainty, the system optimization model was further developed into a unique NMMR programming model due to the uncertainties imposed by the real-world problem. The proposed NMMR algorithm was successfully applied to solve the real-world problem at a limited scale for demonstration purposes.
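To illustrate the regret-based reasoning behind the decision analysis, the short Python sketch below computes a plain (non-nested) minimax-regret choice over a small invented cost table; it is not the dissertation's NMMR formulation, and the alternative names, scenarios, and costs are hypothetical.

```python
# Cost of each expansion alternative under each future scenario (invented data).
costs = {
    "expand_plant_A": {"high_growth": 120, "baseline": 95, "recession": 80},
    "expand_plant_B": {"high_growth": 100, "baseline": 105, "recession": 90},
    "defer":          {"high_growth": 150, "baseline": 90,  "recession": 70},
}
scenarios = ["high_growth", "baseline", "recession"]

# Regret = cost of the chosen alternative minus the best achievable cost per scenario.
best = {s: min(costs[a][s] for a in costs) for s in scenarios}
regret = {a: max(costs[a][s] - best[s] for s in scenarios) for a in costs}
choice = min(regret, key=regret.get)

print(regret)
print("minimax-regret choice:", choice)
```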
- Date Issued
- 2012
- Identifier
- CFE0004425, ucf:49354
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004425
- Title
- An Index to Measure Efficiency of Hospital Networks for Mass Casualty Disasters.
- Creator
-
Bull Torres, Maria, Sepulveda, Jose, Sala-Diakanda, Serge, Geiger, Christopher, Kapucu, Naim, University of Central Florida
- Abstract / Description
-
Disaster events have emphasized the importance of healthcare response activities due to the large number of victims. For instance, Hurricane Katrina in New Orleans in 2005 and the terrorist attacks in New York City and Washington, D.C., on September 11, 2001, left thousands of wounded people. In those disasters, although hospitals had disaster plans established for more than a decade, their plans were not efficient enough to handle the chaos produced by the hurricane and the terrorist attacks. Thus, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) suggested collaborative planning among hospitals that provide services to a contiguous geographic area during mass casualty disasters. However, the JCAHO does not specify a methodology to determine which hospitals should be included in these cooperative plans. As a result, the problem of selecting the right hospitals to include in exercises and drills at the county level is a common topic in current preparedness efforts. This study proposes an efficiency index to determine the efficient response of cooperative networks among hospitals before the occurrence of a mass casualty disaster. The index built in this research combines operations research techniques, and the prediction of the index uses statistical analysis. The consecutive application of three different techniques (network optimization, data envelopment analysis (DEA), and regression analysis) yields a regression equation for predicting the efficiency of predefined hospital networks in mass casualty disasters. To apply the proposed methodology, we selected the Orlando area and defined three disaster sizes. We then designed networks from two perspectives, hub-hospital and hub-disaster networks. In both network optimization models, the objective function sought to reduce the travel distance and the emergency department (ED) waiting time in hospitals, increase the number of services offered by hospitals in the network, and offer specialized assistance to children. The hospital network optimization generated information for 75 hospital networks in Orlando. The DEA analyzed these 75 hospital networks, or decision making units (DMUs), to estimate their comparative efficiency. Two DEAs were performed in this study. As the output variable for each DMU, DEA-1 considered the number of survivors allocated within a 40-mile range. As the input variables, DEA-1 included: (i) the number of beds available in the network; (ii) the number of hospitals available in the network; and (iii) the number of services offered by hospitals in the network. DEA-1 assigned an efficiency value to each of the 75 hospital networks. As the output variables for each DMU, DEA-2 considered the number of survivors allocated within a 40-mile range and an index for ED waiting time in the network. The input variables included in DEA-2 were (i) the number of beds available in the network; (ii) the number of hospitals available in the network; and (iii) the number of services offered by hospitals in the network. DEA-2 likewise assigned an efficiency value to each of the 75 hospital networks. This efficiency index should allow emergency planners and hospital managers to assess which hospitals should be associated in a cooperative network in order to transfer survivors. Furthermore, JCAHO could use this index to evaluate the cooperating hospitals' emergency plans.
However, DEA is a complex methodology that requires significant data gathering and handling. Thus, we studied whether a simpler regression analysis would yield substantially the same results. DEA-1 can be predicted using two regression analyses, which concluded that the average distance between hospitals and the disaster locations and the size of the disaster explain the efficiency of the hospital network. DEA-2 can be predicted using three regressions, which included the size of the disaster, the number of hospitals, the average distance, and the average ED waiting time as predictors of hospital network efficiency. The models generated for DEA-1 and DEA-2 had a mean absolute percentage error (MAPE) of around 10%. Thus, the indexes developed through the regression analysis simplify the estimation of efficiency for predefined hospital networks, providing suitable predictors of the efficiency as determined by the DEA analysis. In conclusion, network optimization, DEA, and regression analyses can be combined to create an efficiency index that measures the performance of predefined hospital networks in a mass casualty disaster, validating the hypothesis of this research. Although the methodology can be applied to any county or city, the regressions proposed for predicting the DEA-estimated efficiency of hospital networks can be applied only if the city studied has the same characteristics as the Orlando area. These conditions include the following: (i) networks must have a rate of services larger than 0.76; (ii) the number of survivors must be less than 47% of the ED bed capacity of the area studied; (iii) all hospitals in the network must have an ED and be located within a 48-mile range of the disaster sites; and (iv) EDs should not have more than 60 minutes of waiting time. The proposed methodology, in particular the efficiency index, supports the operational objectives of the 2012 ESF#8 for the State of Florida to handle risk and response capabilities by conducting and participating in training and exercises that test and improve plans and procedures in the health response.
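For readers unfamiliar with DEA, the following Python sketch solves the classic CCR multiplier-form linear program to score a handful of DMUs, in the spirit of the efficiency scoring described above. It is not the dissertation's model (which covers 75 networks and additional variables); the data are invented, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[300, 4, 20],   # inputs per DMU: beds, hospitals, services (invented)
              [250, 3, 18],
              [400, 5, 25],
              [320, 4, 15]], dtype=float)
Y = np.array([[900],          # output per DMU: survivors allocated within 40 miles (invented)
              [850],
              [1000],
              [700]], dtype=float)

def ccr_efficiency(o):
    """Maximize u·y_o subject to v·x_o = 1 and u·y_j - v·x_j <= 0 for all DMUs j."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])                 # variables: [u, v]
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    A_ub = np.hstack([Y, -X])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```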
- Date Issued
- 2012
- Identifier
- CFE0004524, ucf:49290
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004524
- Title
- Integrating Multiobjective Optimization with the Six Sigma Methodology for Online Process Control.
- Creator
-
Abualsauod, Emad, Geiger, Christopher, Elshennawy, Ahmad, Thompson, William, Moore, Karla, University of Central Florida
- Abstract / Description
-
Over the past two decades, the Define-Measure-Analyze-Improve-Control (DMAIC) framework of the Six Sigma methodology and a host of statistical tools have been brought to bear on process improvement efforts in today's businesses. However, a major challenge of implementing the Six Sigma methodology is maintaining the process improvements and providing real-time performance feedback and control after solutions are implemented, especially in the presence of multiple process performance objectives. The consideration of multiple objectives in business and process improvement is commonplace and, quite frankly, necessary. However, balancing the collection of objectives is challenging, as the objectives are inextricably linked and oftentimes in conflict. Previous studies have reported varied success in enhancing the Six Sigma methodology by integrating optimization methods in order to reduce variability. These studies focus such enhancements primarily within the Improve phase of the Six Sigma methodology, optimizing a single objective. The current research and practice of using the Six Sigma methodology and optimization methods do little to address real-time feedback and control for online process control in the case of multiple objectives. This research proposes an innovative integrated Six Sigma multiobjective optimization (SSMO) approach for online process control. It integrates the Six Sigma DMAIC framework with a nature-inspired optimization procedure that iteratively perturbs a set of decision variables, providing feedback to the online process and eventually converging to a set of tradeoff process configurations that improves and maintains process stability. For proof of concept, the approach is applied to a general business process model, a well-known inventory management model, that is formally defined and specifies various process costs as objective functions. The proposed SSMO approach and the business process model are programmed and incorporated into a software platform. Computational experiments are performed using both three sigma (3σ)-based and six sigma (6σ)-based process control, and the results reveal that the proposed SSMO approach performs far better than the traditional approaches in improving the stability of the process. This research investigation shows that the benefits of enhancing the Six Sigma method for multiobjective optimization and online process control are immense.
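The iterative-perturbation idea can be illustrated with a toy Python sketch: decision variables of a simple (Q, r) inventory model are repeatedly perturbed and a Pareto archive of non-dominated configurations is kept. This is not the SSMO procedure itself; the cost model, penalties, and parameters are invented.

```python
import random

DEMAND, HOLD, ORDER, SHORT = 1000.0, 2.0, 50.0, 30.0   # invented model constants

def objectives(q, r):
    cost = ORDER * DEMAND / q + HOLD * (q / 2 + r)       # ordering + holding cost
    shortage = SHORT * max(0.0, 120.0 - r)               # crude shortage penalty
    return cost, shortage

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ssmo_like_search(iters=2000):
    archive = []                                          # list of ((q, r), objectives)
    q, r = 200.0, 80.0
    for _ in range(iters):
        cand = (max(1.0, q + random.gauss(0, 10)), max(0.0, r + random.gauss(0, 5)))
        objs = objectives(*cand)
        if not any(dominates(o, objs) for _, o in archive):
            archive = [(x, o) for x, o in archive if not dominates(objs, o)]
            archive.append((cand, objs))
            q, r = cand                                   # move toward the new non-dominated point
    return archive

if __name__ == "__main__":
    for (q, r), (c, s) in sorted(ssmo_like_search(), key=lambda t: t[1][0])[:5]:
        print(f"Q={q:6.1f} r={r:6.1f}  cost={c:8.1f} shortage={s:7.1f}")
```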
- Date Issued
- 2013
- Identifier
- CFE0004968, ucf:49561
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004968
- Title
- A Human-Centric Approach to Data Fusion in Post-Disaster Management: The Development of a Fuzzy Set Theory Based Model.
- Creator
-
Banisakher, Mubarak, McCauley, Pamular, Geiger, Christopher, Lee, Gene, Shi, Fuqian, Zou, Changchun, University of Central Florida
- Abstract / Description
-
It is critical to provide an efficient and accurate information system in the post-disaster phase so that individuals can access and obtain the necessary resources in a timely manner; however, current map-based post-disaster management systems provide complete emergency resource lists without filtering them, which usually imposes a high computational burden. An effective post-disaster management system (PDMS) also distributes emergency resources such as hospitals, storage, and transportation much more reasonably, to the greater benefit of individuals in the post-disaster period. In this dissertation, semi-supervised learning (SSL) based graph systems were first constructed for the PDMS. The resource map of a graph-based PDMS was converted to a directed graph represented by an adjacency matrix, and decision information was then derived from the PDMS in two ways: a clustering operation and a graph-based semi-supervised optimization process. In this study, the PDMS was applied to emergency resource distribution in the post-disaster (response) phase, and a path optimization algorithm based on ant colony optimization (ACO) was used to minimize cost in the post-disaster period; simulation results show the effectiveness of the proposed methodology. The analysis compared the approach with clustering-based algorithms under improved ACO variants, the tour improvement algorithm (TIA) and the Min-Max Ant System (MMAS), and the results also show that the SSL-based graph is more effective for computing the optimal path in the PDMS. This research then improved the map by combining the disaster map with the initial GIS-based map, which locates the target area while considering the influence of the disaster. First, the initial map and the disaster map undergo a Gaussian transformation, and histograms of all map images are acquired. All images are then processed with a discrete wavelet transform (DWT), and a Gaussian fusion algorithm is applied to the DWT images. Second, the inverse DWT (iDWT) is applied to generate a new map for the post-disaster management system. Finally, simulation studies were conducted, and the results show the effectiveness of the proposed method in comparison with other fusion algorithms, such as mean-mean fusion and max-UD fusion, using evaluation indices including entropy, spatial frequency (SF), and an image quality index (IQI). A fuzzy set model was proposed to improve the representation capacity of nodes in this GIS-based PDMS.
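In the spirit of the map-fusion step above, the following Python sketch fuses two images in the wavelet domain by averaging the approximation coefficients and keeping the stronger detail coefficients, then inverting the DWT. It is not the dissertation's Gaussian fusion algorithm; it assumes the PyWavelets package is available, and the random arrays are stand-ins for the GIS base map and the disaster map.

```python
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, wavelet="haar"):
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)
    fused_ca = (ca_a + ca_b) / 2.0                              # blend coarse structure
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)  # keep stronger detail
    fused = (fused_ca, (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)

if __name__ == "__main__":
    base_map = np.random.rand(64, 64)       # stand-in for the GIS base map
    disaster_map = np.random.rand(64, 64)   # stand-in for the disaster map
    print(fuse_dwt(base_map, disaster_map).shape)
```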
- Date Issued
- 2014
- Identifier
- CFE0005128, ucf:50702
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005128
- Title
- Autonomous Recovery of Reconfigurable Logic Devices using Priority Escalation of Slack.
- Creator
-
Imran, Syednaveed, DeMara, Ronald, Mikhael, Wasfy, Lin, Mingjie, Yuan, Jiann-Shiun, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling that addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
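The priority-escalation idea can be sketched in a few lines of Python: when health metrics degrade, the healthiest remaining resources are reassigned to the highest-priority functions first, and lower-priority functions are demoted if no healthy slot remains. This is only a toy illustration of the scheduling policy, not the FPGA reconfiguration flow itself; the function names, health values, and threshold are invented.

```python
functions = [("motion_estimation", 3), ("dct", 2), ("fir_filter", 1)]           # (name, priority)
resources = {"slot_0": 0.95, "slot_1": 0.40, "slot_2": 0.88, "slot_3": 0.72}    # health metric per slot

HEALTH_THRESHOLD = 0.5   # slots below this are treated as faulty

def escalate(functions, resources):
    """Assign the healthiest usable slots to the highest-priority functions."""
    usable = sorted((h, r) for r, h in resources.items() if h >= HEALTH_THRESHOLD)
    assignment = {}
    for name, _prio in sorted(functions, key=lambda f: -f[1]):
        if usable:
            _h, slot = usable.pop()          # take the healthiest remaining slot
            assignment[name] = slot
        else:
            assignment[name] = None          # demoted: no healthy slot left
    return assignment

print(escalate(functions, resources))
```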
- Date Issued
- 2013
- Identifier
- CFE0005006, ucf:50005
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005006