Current Search: optimism
Title
A DIRECT COMPENSATOR PROFILE OPTIMIZATION APPROACH FOR INTENSITY MODULATED RADIATION TREATMENT PLANNING.
Creator
Erhart, Kevin, Divo, Eduardo, University of Central Florida
Abstract / Description
Radiation therapy accounts for treatment of over one million cancer patients each year in the United States alone, and its use will continue to grow rapidly in the coming years. Recently, many important advancements have been developed that greatly improve the outcomes and effectiveness of this treatment technique, the most notable being Intensity Modulated Radiation Therapy (IMRT). IMRT is a sophisticated treatment technique where the radiation dose is conformed to the tumor volume, thereby sparing nearby healthy tissue from excessive radiation dose. While IMRT is a valuable tool in the planning of radiation treatments, it is not without its difficulties. This research has created, developed, and tested an innovative approach to IMRT treatment planning, coined Direct Compensator Profile Optimization (DCPO), which is shown to eliminate many of the difficulties typically associated with IMRT planning and delivery using solid compensator based treatment. The major innovation of this technique is that it is a direct delivery parameter optimization approach which has adopted a parameterized surface representation using Non-Uniform Rational B-Splines (NURBS) to replace the conventional beamlet weight optimization approach. This new approach brings with it three key advantages: 1) a reduced number of parameters to optimize, reducing the difficulty of numerical optimization; 2) the ability to ensure complete equivalence of planned and actual manufactured compensators; and 3) direct inclusion of delivery device effects during planning with no performance penalties, eliminating the degrading fluence-to-delivery parameter conversion process. Detailed research into the effects of the DCPO approach on IMRT planning has been completed, and a thorough analysis of the developments is provided herein. This research includes a complete description of the DCPO surface representation scheme and inverse planning process, as well as quantification of the manufacturing constraint control procedure. Results are presented which demonstrate the performance and innovation offered by this new approach and show that the resulting compensator shapes can be manufactured to nearly 100 percent of the designed shape.
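As editorial context for the NURBS parameterization described above: the sketch below shows, in miniature, how a handful of spline control heights can stand in for a dense set of beamlet weights when fitting a 1D compensator thickness profile. It is a minimal illustration, not the author's DCPO code; the knot vector, the Gaussian target profile, the unit weights, and the use of scipy's BSpline and L-BFGS-B are all assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

# Cubic NURBS curve: C(u) = sum_i N_ip(u) w_i P_i / sum_i N_ip(u) w_i
degree = 3
n_ctrl = 8
# Clamped uniform knot vector on [0, 1] (an assumption; DCPO's knots are not given).
knots = np.concatenate(([0.0] * degree, np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        [1.0] * degree))

def nurbs_profile(ctrl_heights, weights, u):
    """Evaluate a 1D NURBS 'thickness' profile at parameters u."""
    num = BSpline(knots, weights * ctrl_heights, degree)(u)  # rational numerator
    den = BSpline(knots, weights, degree)(u)                 # rational denominator
    return num / den

# Hypothetical target fluence-like profile to reproduce (illustration only).
u = np.linspace(0.0, 1.0, 200)
target = 1.0 + 0.5 * np.exp(-((u - 0.5) ** 2) / 0.02)

weights = np.ones(n_ctrl)  # unit weights reduce the NURBS to a plain B-spline here

def objective(ctrl_heights):
    # Least-squares mismatch between the spline profile and the target.
    return np.sum((nurbs_profile(ctrl_heights, weights, u) - target) ** 2)

res = minimize(objective, x0=np.ones(n_ctrl), method="L-BFGS-B",
               bounds=[(0.1, 5.0)] * n_ctrl)  # bounds mimic manufacturability limits
print("fitted control heights:", np.round(res.x, 3))
```

The point of the sketch is only the reduction in parameter count: eight bounded control heights parameterize the entire profile, which is the property the abstract credits for easing the numerical optimization.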
Date Issued
2009
Identifier
CFE0002800, ucf:48099
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0002800

Title
THE RELATIONSHIP BETWEEN COUNSELOR HOPE AND OPTIMISM ON CLIENT OUTCOME.
Creator
Muenzenmeyer, Michelle, Young, Mark, University of Central Florida
Abstract / Description
The counselor is an important contributor to client outcome, yet research findings about therapist effects are mixed. In this study, the positive psychology variables of hope and optimism were evaluated against client outcome. The sample for this study consisted of 43 graduate-level counselor trainees in their first or second practicum semester and their adult clients in a university's community counseling clinic. Results revealed no statistically significant relationships between student counselors' hope and optimism and client outcomes. Post hoc analysis of student hope and post-graduation expectations revealed statistically significant relationships. Implications for counselor educators are presented, along with areas for future research.
Date Issued
2011
Identifier
CFE0003884, ucf:48747
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0003884

Title
Thermodynamic Analysis and Optimization of Supercritical Carbon Dioxide Brayton Cycles.
Creator
Mohagheghi, Mahmood, Kapat, Jayanta, Kassab, Alain, Das, Tuhin, Swami, Muthusamy, University of Central Florida
Abstract / Description
The power generation industry is facing new challenging issues regarding accelerating growth of electricity demand, fuel cost, and environmental pollution. These challenges, accompanied by concerns of energy resources becoming scarce, necessitate searching for sustainable and economically competitive solutions to supply the future electricity demand. To this end, supercritical carbon dioxide (S-CO2) Brayton cycles present great promise, particularly in high temperature concentrated solar power (CSP) and waste heat recovery (WHR) applications. In this regard, this dissertation performs thorough thermodynamic analyses and optimization of S-CO2 Brayton cycles for both of these applications. A modeling tool has been developed which enables one to predict and analyze the thermodynamic performance of S-CO2 Brayton cycles in various configurations employing recuperation, recompression, intercooling, and reheating. The modeling tool is fully flexible in terms of encompassing the entire feasible design domain and rectifying possible infeasible solutions. Moreover, it is computationally efficient in order to handle time-consuming optimization problems. A robust optimization tool has also been developed by employing the principles of genetic algorithms. The developed genetic algorithm code is capable of optimizing non-linear systems with several decision variables simultaneously, without being trapped in local optimum points. Two optimization schemes, i.e., single-objective and multi-objective, are considered in optimizing the S-CO2 cycles for high temperature solar tower applications. In order to reduce the size and cost of the solar block, the global maximum efficiency of the power block should be realized. Therefore, the single-objective optimization scheme is used to find the optimum design points that correspond to the global maximum efficiency of S-CO2 cycles. Four configurations of S-CO2 Brayton cycles are investigated, and the optimum design point for each configuration is determined. Ultimately, the effects of recompression, reheating, and intercooling on the thermodynamic performance of the recuperated S-CO2 Brayton cycle are analyzed. The results reveal that the main limiting factors in the optimization process are maximum cycle temperature, minimum heat rejection temperature, and pinch point temperature difference. The maximum cycle pressure is also a limiting factor in all studied cases except the simple recuperated cycle. The optimized cycle efficiency varies from 55.77% to 62.02% with consideration of reasonable component performances as recompression, reheat, and intercooling are added to the simple recuperated cycle (RC). Although the addition of reheating and intercooling to the recuperated recompression cycle (RRC) increases the cycle efficiency by about 3.45 percentage points, the simplicity of the RC and RRC configurations makes them more promising options at this early development stage of S-CO2 cycles, and they are used for further studies in this dissertation. The results of efficiency maximization show that achieving the highest efficiency does not necessarily coincide with the highest cycle specific power. In addition to the efficiency, the specific power is an important parameter when it comes to investment and decision making, since it directly affects the power generation capacity, the size of components, and the cost of power blocks.

Consequently, the multi-objective optimization scheme is devised to simultaneously maximize both the cycle efficiency and specific power in the simple recuperated and recuperated recompression configurations. The optimization results are presented in the form of two optimum trade-off curves, also known as Pareto fronts, which enable decision makers to choose their desired compromise between the objectives and to avoid naive solution points obtained from a single-objective optimization approach. Moreover, the comparison of the Pareto optimal fronts associated with the studied configurations reveals the optimum operational region of the recompression configuration, where it presents superior performance over the simple recuperated cycle. Considering the extensive potential of waste heat recovery from energy intensive industries and stand-alone gas turbines, this dissertation also investigates the optimum design point of S-CO2 Brayton cycles for a wide range of waste heat source temperatures (500 K to 1100 K). Once again, the simple recuperated and recuperated recompression configurations are selected for this application. The utilization of heat in WHR applications is fundamentally different from that in closed loop heat source applications. Temperature pinching issues arise in the waste heat recovery heat exchangers, which brings about a trade-off between the cycle efficiency and the amount of recovered heat. Therefore, maximization of net power output for a given waste heat source is of paramount practical interest, rather than maximization of cycle efficiency. The results demonstrate that, as the heat source temperature changes from one application to another, the variation of the optimum pressure ratio is insignificant; however, the optimum CO2-to-waste-gas mass flow ratio and turbine inlet temperature should be properly adjusted. The RRC configuration provides a minor increase in power output as compared to the RC configuration. Although cycle efficiencies as high as 34.8% and 39.7% can be achieved in the RC and RRC configurations respectively, the overall conversion efficiency is less than 26% in the RRC and 24.5% in the RC.
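To make the Pareto-front idea concrete: the sketch below extracts the non-dominated set when maximizing cycle efficiency and specific power together, which is exactly the shape of trade-off curve the abstract describes. The candidate designs here are random placeholders, not outputs of the dissertation's cycle models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical candidate designs: columns are (cycle efficiency, specific power).
candidates = rng.uniform([0.40, 50.0], [0.62, 200.0], size=(200, 2))

def pareto_front(points):
    """Return the non-dominated points when maximizing both objectives."""
    keep = []
    for i, p in enumerate(points):
        # p is dominated if another point is >= in both objectives and > in one.
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(candidates)
# Sorting by efficiency exposes the efficiency / specific-power trade-off curve.
print(front[np.argsort(front[:, 0])])
```

A decision maker then picks a point on this curve rather than accepting the single extreme that a one-objective run would return.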
Date Issued
2015
Identifier
CFE0006044, ucf:50993
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0006044

Title
MongoDB Incidence Response.
Creator
Morales, Cory, Lang, Sheau-Dong, Zou, Changchun, Guha, Ratan, University of Central Florida
Abstract / Description
NoSQL (Not only SQL) databases have been gaining popularity over the last few years. Big companies such as Expedia, Shutterfly, MetLife, and Forbes use NoSQL databases to manage data on different projects. These databases can contain a variety of information ranging from nonproprietary data to personally identifiable information like social security numbers. Databases run the risk of cyber intrusion at all times. This paper gives a brief explanation of NoSQL and thoroughly explains a method of incident response with MongoDB, a NoSQL database provider. This method involves an automated process, built around a new self-built software tool, that analyzes MongoDB audit logs and generates an HTML page with indicators to show possible intrusions and activities on the MongoDB instance. When dealing with NoSQL databases there is a lot more to consider than with traditional RDBMSs, and since there is not much out-of-the-box support, forensic tools can be very helpful.
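A minimal sketch of the kind of audit-log scan the paper describes. The field names (atype, result, remote.ip) follow MongoDB's documented JSON audit-log format, but the failure criterion, the threshold, and the report layout are invented for illustration; this is not the thesis tool.

```python
import json
from collections import Counter

def scan_audit_log(path):
    """Count failed authentications per remote IP in a MongoDB JSON audit log.

    Field names (atype, result, remote.ip) follow MongoDB's JSON audit
    format; result != 0 is treated as a failure (an assumption).
    """
    failures = Counter()
    with open(path) as fh:
        for line in fh:  # one JSON document per line
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines rather than abort
            if event.get("atype") == "authenticate" and event.get("result", 0) != 0:
                failures[event.get("remote", {}).get("ip", "unknown")] += 1
    return failures

def to_html(failures, threshold=5):
    """Render a minimal report; IPs at or over the threshold are flagged."""
    rows = "".join(
        f"<tr><td>{ip}</td><td>{n}</td>"
        f"<td>{'SUSPICIOUS' if n >= threshold else ''}</td></tr>"
        for ip, n in failures.most_common()
    )
    return ("<table><tr><th>IP</th><th>Failed logins</th><th>Flag</th></tr>"
            f"{rows}</table>")

if __name__ == "__main__":
    print(to_html(scan_audit_log("auditLog.json")))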
Date Issued
2016
Identifier
CFE0006538, ucf:51356
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0006538

Title
Optimization of Glycerol or Biodiesel Waste Prefermentation to Improve EBPR.
Creator
Ghasemi, Marzieh, Randall, Andrew, Duranceau, Steven, Lee, Woo Hyoung, Jimenez, Jose, University of Central Florida
Abstract / Description
The enhanced biological phosphorus removal (EBPR) process efficiency relies on different operational and process conditions, especially the type of carbon source available in the wastewater. Acetic acid and propionic acid are the two major volatile fatty acids (VFAs) found in domestic wastewater which can drive biological phosphorus (P) removal to the desired level. However, domestic wastewater often does not have a sufficient amount of VFAs. Due to the high production cost of acetate and propionate, it is not economical to add them directly in full-scale wastewater treatment plants. This brought up the idea of using external carbon sources (e.g., molasses has been used successfully) in EBPR systems that can be converted to VFAs through a fermentation process. On the other hand, biodiesel fuels have been produced increasingly over the last decade. Crude glycerol is a major by-product of biodiesel production that can be used as an external carbon source in a wastewater treatment plant. Therefore, the main objective of this research is to optimize the operational conditions of the glycerol/biodiesel waste fermentation process in pursuit of producing more favorable fermentation end-products (i.e., a mixture of acetic acid and propionic acid) by adding glycerol to a prefermenter versus direct addition to the anaerobic zone or fermentation with waste activated sludge. For this reason, different prefermenter parameters, namely mixing intensity, pH, temperature, and solids retention time (SRT), were studied in small-scale fermentation media (serum bottles) and bench-scale semi-continuous batch prefermenters. Experimental results revealed that glycerol/biodiesel waste fermentation produced a significant amount of VFAs, with propionic acid as the predominant end-product, followed by acetic acid and butyric acid. The VFA production was at its highest level when the initial pH was adjusted to 7 and 8.5; however, the optimum pH with respect to propionic acid production was 7. Increasing the temperature in the serum bottles favored total VFA production, specifically in the form of propionic acid. Regarding the mixing energy, inconsistent results were obtained in the serum bottles compared to the bench-scale prefermenters. The VFA production in serum bottles mixed at 200 rpm was higher than that of un-mixed ones; on the other hand, the unmixed or slowly mixed bench-scale prefermenters showed higher VFA production than the mixed reactors. However, the serum bottles did not operate long enough to account for biomass acclimation and other long-term effects that the prefermenter experiments could account for. As a consequence, one of the most important and consistent results was that VFA production was significantly enhanced by reducing mixing intensity from 100 rpm to 7 rpm, and even by ceasing mixing altogether. This was true both for primary solids and glycerol. In addition, propionate content was high under both high and low mixing intensity, and adding glycerol also increased the fraction of primary solids that formed propionic acid instead of acetic acid. Increasing the SRT from 2 to 4 days increased the VFA production by about 12% on average. In order to investigate the effect of glycerol on EBPR process efficiency, two identical A2/O systems were monitored for 3 months. Experimental results suggested that glycerol addition could increase the P removal efficiency significantly.

Adding glycerol to the prefermenter rather than the anaerobic zone resulted in a lower effluent soluble ortho phosphorus (SOP) (0.4 mg-P/L vs. 0.6 mg-P/L), but the difference was apparently not statistically significant. Future experimentation should be done to determine if this effect is consistent, especially in carbon-poor wastewaters. Also, it would be desirable to conduct a longer pilot study or a full-scale study to determine if this improvement in effluent SOP remains true over a range of temperatures and changing influent conditions.
Date Issued
2015
Identifier
CFE0006310, ucf:51612
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0006310

Title
Data Representation in Machine Learning Methods with its Application to Compilation Optimization and Epitope Prediction.
Creator
Sher, Yevgeniy, Zhang, Shaojie, Dechev, Damian, Leavens, Gary, Gonzalez, Avelino, Zhi, Degui, University of Central Florida
Abstract / Description
In this dissertation we explore the application of machine learning algorithms to compilation phase order optimization and epitope prediction. The common thread running through these two disparate domains is the type of data being dealt with. In both problem domains we are dealing with categorical data, with its representation playing a significant role in the performance of classification algorithms. We first present a neuroevolutionary approach which orders optimization phases to generate compiled programs with performance superior to those compiled using LLVM's -O3 optimization level. Performance improvements, calculated as the speed of the compiled program's execution, ranged from 27% for the ccbench program to 40.8% for bzip2. This dissertation then explores the problem of data representation of 3D biological data, such as amino acids. A new approach for distributed representation of 3D biological data through the process of embedding is proposed and explored. Analogously to word embedding, we developed a system that uses atomic and residue coordinates to generate distributed representations for residues, which we call 3D Residue BioVectors. Preliminary results are presented which demonstrate that even low-dimensional 3D Residue BioVectors can be used to predict conformational epitopes and protein-protein interactions, with promising proficiency. The generation of such 3D BioVectors, and the proposed methodology, open the door for substantial future improvements and new application domains. The dissertation then explores the problem domain of linear B-cell epitope prediction, which deals with predicting epitopes based strictly on the protein sequence. We present the DRREP system, which demonstrates how an ensemble of shallow neural networks can be combined with string kernels and an analytical learning algorithm to produce state-of-the-art epitope prediction results. DRREP was tested on the SARS subsequence; the HIV, Pellequer, and AntiJen datasets; and the standard SEQ194 test dataset. AUC improvements achieved over the state of the art ranged from 3% to 8%. Finally, we present the SEEP epitope classifier, a multi-resolution SVM ensemble based classifier which uses conjoint triad feature representation and produces state-of-the-art classification results. SEEP leverages the domain-specific, knowledge-based protein sequence encoding developed within the protein-protein interaction research domain. Using an ensemble of multi-resolution SVMs and a sliding-window-based pre- and post-processing pipeline, SEEP achieves an AUC of 91.2 on the standard SEQ194 test dataset, a 24% improvement over the state of the art.
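For readers unfamiliar with the conjoint triad representation mentioned above, a minimal sketch follows: the 20 amino acids are mapped to 7 physicochemical classes (the grouping of Shen et al., 2007) and a sequence becomes a 343-dimensional vector of class-triad frequencies. The frequency normalization is one common convention, not necessarily the exact encoding SEEP uses.

```python
import numpy as np

# Amino-acid classes from the conjoint triad method (Shen et al., 2007 grouping).
GROUPS = ["AGV", "ILFP", "YMTS", "HNQW", "RK", "DE", "C"]
CLASS_OF = {aa: k for k, grp in enumerate(GROUPS) for aa in grp}

def conjoint_triad(sequence):
    """Encode a protein sequence as normalized counts of 7x7x7 class triads."""
    feats = np.zeros(7 ** 3)
    classes = [CLASS_OF[aa] for aa in sequence if aa in CLASS_OF]
    for a, b, c in zip(classes, classes[1:], classes[2:]):
        feats[a * 49 + b * 7 + c] += 1.0
    total = feats.sum()
    return feats / total if total else feats  # frequency normalization (one choice)

vec = conjoint_triad("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(vec.shape, vec.nonzero()[0][:10])
```

The resulting fixed-length vector is what an SVM ensemble can consume, regardless of the original sequence length.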
Date Issued
2017
Identifier
CFE0006793, ucf:51829
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0006793

Title
Cooperative control and advanced management of distributed generators in a smart grid.
Creator
Maknouninejad, Ali, Qu, Zhihua, Lotfifard, Saeed, Haralambous, Michael, Wu, Xinzhang, Kutkut, Nasser, University of Central Florida
Abstract / Description
Smart grid is more than just smart meters. Future smart grids are expected to include a high penetration of distributed generations (DGs), most of which will consist of renewable energy sources, such as solar or wind energy. It is believed that the high penetration of DGs will result in the reduction of power losses, voltage profile improvement, meeting future load demand, and optimizing the use of non-conventional energy sources. However, more serious problems will arise if a decent control mechanism is not exploited. An improperly managed high PV penetration may cause voltage profile disturbance, conflict with conventional network protection devices, interfere with transformer tap changers, and, as a result, cause network instability. Indeed, it is feasible to organize DGs in a microgrid structure which will be connected to the main grid through a point of common coupling (PCC). Microgrids are natural innovation zones for the smart grid because of their scalability and flexibility. A proper organization and control of the interaction between the microgrid and the smart grid is a challenge. Cooperative control makes it possible to organize different agents in a networked system to act as a group and realize the designated objectives. Cooperative control has already been applied to autonomous vehicles, and this work investigates its application in controlling the DGs in a microgrid. The microgrid power objectives are set by a higher-level control, and the application of cooperative control makes it possible for the DGs to utilize a low-bandwidth communication network and realize the objectives. Initially, the basics of the application of DG cooperative control are formulated. This includes organizing all the DGs of a microgrid to satisfy an active and a reactive power objective. Then, the cooperative control is further developed by the introduction of clustering DGs into several groups to satisfy multiple power objectives. Then, cooperative distributed optimization is introduced to optimally dispatch the reactive power of the DGs to realize a unified microgrid voltage profile and minimize the losses. This distributed optimization is a gradient-based technique, and it is shown that when the communication is down, it reduces to a form of droop. However, this gradient-based droop exhibits a superior performance in the transient response, by eliminating the overshoots caused by the conventional droop. Meanwhile, the interaction between each microgrid and the main grid can be formulated as a Stackelberg game. The main grid, as the leader, minimizes its cost and secures the power by offering a proper energy price to the microgrid. This not only optimizes the economic interests of both sides, the microgrids and the main grid, but also yields an improved power flow and shaves the peak power. As such, a smart grid may treat microgrids as individually dispatchable loads or generators.
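A minimal sketch of the cooperative-control idea in discrete time: each DG repeatedly averages a shared quantity (here, a utilization ratio) with its neighbors over a sparse, low-bandwidth communication graph, and the group converges to agreement without any central coordinator. The 4-node topology and the fair-utilization interpretation are illustrative assumptions, not the dissertation's algorithms.

```python
import numpy as np

# Row-stochastic communication matrix for 4 DGs on a sparse ring network:
# each DG only hears from one neighbor plus itself.
D = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
])

x = np.array([0.9, 0.2, 0.6, 0.4])  # initial utilization ratios (output / capacity)
for _ in range(200):
    x = D @ x  # each DG averages its value with its neighbors' broadcasts

print(np.round(x, 4))  # all entries converge to a common utilization ratio
```

Because the update only needs neighbor-to-neighbor messages, a low-bandwidth network suffices, which is the property the abstract emphasizes.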
Date Issued
2013
Identifier
CFE0004712, ucf:49817
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0004712

Title
Optimization problem in single period markets.
Creator
Jiang, Tian, Yong, Jiongmin, Qi, Yuanwei, Shuai, Zhisheng, University of Central Florida
Abstract / Description
There have been a number of studies investigating security markets without transaction costs. The focus of this research is on when a security market with transaction costs is fair, and on how, in such a fair market, one chooses a suitable portfolio to optimize the financial goal. The research approach adopted in this thesis includes linear algebra and elementary probability. The thesis provides evidence that we can maximize an expected utility function to achieve our goal (maximize expected return under a certain risk tolerance). The main conclusions drawn from this study are that, under certain conditions, the security market is arbitrage-free, and that we can always find an optimal portfolio maximizing a certain expected utility function.
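To illustrate the expected-utility formulation in a single-period market: the sketch below maximizes expected log utility over discrete terminal states subject to a budget constraint. Transaction costs, the thesis's actual focus, are omitted for brevity, and the two-asset market data are made up.

```python
import numpy as np
from scipy.optimize import minimize

# One-period market: two assets (bond, stock), three equally likely states.
S0 = np.array([1.0, 10.0])                 # time-0 prices
S1 = np.array([[1.05, 1.05, 1.05],         # bond payoff in each state
               [13.0, 10.0, 8.0]])         # stock payoff in each state
p = np.array([1 / 3, 1 / 3, 1 / 3])        # state probabilities
w0 = 100.0                                  # initial wealth

def neg_expected_log_utility(theta):
    wealth = theta @ S1                     # terminal wealth per state
    if np.any(wealth <= 0):
        return 1e9                          # log utility requires positive wealth
    return -p @ np.log(wealth)

budget = {"type": "eq", "fun": lambda th: th @ S0 - w0}
res = minimize(neg_expected_log_utility, x0=np.array([50.0, 5.0]),
               constraints=[budget])
print("optimal holdings (bond, stock):", np.round(res.x, 3))
```

In the thesis's setting, the budget constraint would additionally charge transaction costs on each trade, which is what changes when the market is merely "fair" rather than frictionless.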
Date Issued
2013
Identifier
CFE0004696, ucf:49875
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0004696

Title
General Vector Explicit - Impact Time and Angle Control Guidance.
Creator
Robinson, Loren, Qu, Zhihua, Behal, Aman, Xu, Yunjun, University of Central Florida
Abstract / Description
This thesis proposes and evaluates a new cooperative guidance law called General Vector Explicit - Impact Time and Angle Control Guidance (GENEX-ITACG). The motivation for GENEX-ITACG came from an explicit trajectory shaping guidance law called General Vector Explicit Guidance (GENEX). GENEX simultaneously achieves design specifications on miss distance and terminal missile approach angle while also providing a design parameter that adjusts the aggressiveness of this approach angle. Encouraged by the applicability of this user parameter, GENEX-ITACG is an extension that allows a salvo of missiles to cooperatively achieve the same objectives of GENEX against a stationary target through the incorporation of a cooperative trajectory shaping guidance law called Impact Time and Angle Control Guidance (ITACG). ITACG allows a salvo of missiles to simultaneously hit a stationary target at a prescribed impact angle and impact time. This predetermined impact time is what allows each missile involved in the salvo attack to simultaneously arrive at the target with a unique approach angle, which greatly increases the probability of success against well-defended targets. GENEX-ITACG further increases this probability of kill by allowing each missile to approach the target with a unique approach angle rate through the use of a user design parameter. The incorporation of ITACG into GENEX is accomplished through the use of linear optimal control by casting the cost function of GENEX into the formulation of ITACG. The feasibility of GENEX-ITACG is demonstrated across three scenarios that exercise the ITACG portion of the guidance law, the GENEX portion of the guidance law, and finally the entirety of the guidance law. The results indicate that GENEX-ITACG is able to successfully guide a salvo of missiles to simultaneously hit a stationary target at a predefined terminal impact angle and impact time, while also allowing the user to adjust the aggressiveness of approach.
Date Issued
2015
Identifier
CFE0005876, ucf:50868
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0005876

Title
HURRICANE EVACUATION: ORIGIN, ROUTE AND DESTINATION.
Creator
Dixit, Vinayak, Radwan, Essam, University of Central Florida
Abstract / Description
Recent natural disasters have highlighted the need to evacuate people as quickly as possible. During Hurricane Rita in 2005, people were stuck in queue buildups and large-scale congestion due to improper use of capacity, poor planning, and inadequate response to vehicle breakdowns, flooding, and accidents. Every minute is precious in such disaster scenarios. Understanding evacuation demand loading is an essential part of any evacuation planning. One of the factors often understood to affect evacuation, but not modeled, has been the effect of a previous hurricane. This has been termed the 'Katrina Effect', where, due to the devastation caused by Hurricane Katrina, a large number of people decided to evacuate during Hurricane Rita, which hit Texas three weeks after Katrina hit Louisiana. An important aspect influencing the rate of evacuation loading is evacuation preparation time, also referred to as 'mobilization time' in the literature. A methodology to model the effect of a recent past hurricane on the mobilization times of evacuees has been presented, utilizing simultaneous estimation techniques. The errors for the two simultaneously estimated models were significantly correlated, confirming the idea that a previous hurricane does significantly affect evacuation during a subsequent hurricane. The results show that home ownership, number of individuals in the household, income levels, and level/risk of surge were significant in the model explaining the mobilization times for the households. Pet ownership and number of kids in the households, known to increase mobilization times during isolated hurricanes, were not found to be significant in the model. Evacuation operations are marred by unexpected blockages, breakdowns of vehicles, and sudden flooding of transportation infrastructure. A fast and accurate simulation model is required to incorporate flexibility into the evacuation planning procedure and react to such situations. Presently, evacuation guidelines are prepared by local emergency management by testing various scenarios using micro-simulation, which is extremely time-consuming and does not provide flexibility to evacuation plans. To gain computational speed, there is a need to move away from the level of detail of a micro-simulation to more aggregated simulation models. The Cell Transmission Model, a mesoscopic simulation model, is considered and compared with VISSIM, a microscopic simulation model. The Cell Transmission Model was observed to be significantly faster than VISSIM and was found to be accurate. The Cell Transmission Model has a convenient linear structure, which is utilized to construct linear programming problems to determine optimal strategies. Optimization models were developed to determine strategies for optimal scheduling of evacuation orders and optimal crossover locations for contraflow operations on freeways. A new strategy, termed the 'Dynamic Crossovers Strategy', is proposed to alleviate congestion due to lane blockages (vehicle breakdowns, incidents, etc.). This research finds that the strategy of implementing dynamic crossovers in the event of lane blockages does improve evacuation operations. The optimization model provides a framework within which optimal strategies are determined quickly, without the need to test multiple scenarios using simulation.

Destination networks are the cause of the main bottlenecks for evacuation routes, yet such aspects of transportation networks are rarely studied as part of evacuation operations. This research studies destination networks from a macroscopic perspective. Various relationships between network-level macroscopic variables (average flow, average density, and average speed) were studied. Utilizing these relationships, a 'Network Breathing Strategy' was proposed to improve dissipation of evacuating traffic into the destination networks. The network breathing strategy is a cyclic process of allowing vehicles to enter the network until the network reaches congestion, followed by closure of entry into the network until the network returns to an acceptable state, after which entrance into the network is allowed again. The intuitive motivation behind this methodology is to ensure that the network does not remain in congested conditions. The term 'Network Breathing' was coined due to the analogy between this strategy and the process of breathing, where vehicles are inhaled by the network (vehicles allowed in) and dissipated by the network (vehicles not allowed in). It is shown that network breathing improves the dissipation of vehicles into the destination network. Evacuation operations can be divided into three main levels: the origin (region at risk), the routes, and the destination. At the origin, the demand dictates when to schedule evacuation orders; it also dictates the capacity required on different routes. These breakthroughs will provide a framework for a real-time decision support system which will help emergency management officials make decisions faster and on the fly.
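A minimal Cell Transmission Model sketch: each time step moves vehicles between cells at a rate limited by upstream occupancy, cell flow capacity, and downstream spare space, which is precisely the min-of-linear-terms structure that makes the model amenable to linear programming. All parameter values here are illustrative, not calibrated to any evacuation network.

```python
import numpy as np

# Minimal Cell Transmission Model (Daganzo-style) for a one-lane corridor.
n_cells, T = 10, 60
N = 40.0        # jam capacity per cell (vehicles)
Q = 10.0        # max flow per time step (vehicles/step)
delta = 1.0     # ratio of backward to forward wave speed (an assumption)

n = np.zeros(n_cells)          # vehicles currently in each cell
demand = 8.0                   # vehicles wanting to enter per step

for t in range(T):
    # Flow across each cell boundary: limited by upstream supply, capacity,
    # and available downstream space.
    inflow = np.zeros(n_cells + 1)
    inflow[0] = min(demand, Q, delta * (N - n[0]))
    for i in range(1, n_cells):
        inflow[i] = min(n[i - 1], Q, delta * (N - n[i]))
    inflow[n_cells] = min(n[-1], Q)   # free discharge at the downstream end
    n += inflow[:-1] - inflow[1:]     # conservation update

print(np.round(n, 1))  # cell occupancies after T steps
```

Replacing the min() terms with linear inequality constraints on the flow variables yields the linear programs the abstract refers to for scheduling and contraflow decisions.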
Date Issued
2008
Identifier
CFE0002051, ucf:47589
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0002051

Title
MULTIOBJECTIVE COORDINATION MODELS FOR MAINTENANCE AND SERVICE PARTS INVENTORY PLANNING AND CONTROL.
Creator
Martinez, Oscar, Geiger, Christopher, University of Central Florida
Abstract / Description
In many equipment-intensive organizations in the manufacturing, service, and particularly the defense sectors, service parts inventories constitute a significant source of tactical and operational costs and consume a significant portion of capital investment. For instance, the Defense Logistics Agency manages about 4 million consumable service parts and provides about 93% of all consumable service parts used by the military services. These items required about US$1.9 billion over the fiscal years 1999-2002. During the same time, the US Government Accountability Office discovered that, in the United States Navy, there were about US$3.7 billion in ship and submarine parts that were not needed. The Federal Aviation Administration says that 26 million aircraft parts are changed each year. In 2002, the holding cost of service parts for the aviation industry was estimated to be US$50 billion. The US Army Institute of Land Warfare reports that, at the beginning of the 2003 fiscal year, prior to Operation Iraqi Freedom, the value of aviation service parts alone was in excess of US$1 billion. This situation makes the management of these items a very critical tactical and strategic issue that is worthy of further study. The key challenge is to maintain high equipment availability with low service cost (e.g., holding, warehousing, transportation, technicians, overhead, etc.). For instance, despite reporting US$10.5 billion in appropriations spent on purchasing service parts in 2000, the United States Air Force (USAF) continues to report shortages of service parts. The USAF estimates that, if the investment on service parts decreased to about US$5.3 billion, weapons systems availability would range from 73 to 100 percent. Thus, better management of service parts inventories should create opportunities for cost savings through the efficient management of these inventories. Unfortunately, service parts belong to a class of inventory that is continually difficult to manage. Moreover, it can be said that the general function of service parts inventories is to support maintenance actions; therefore, service parts inventory policies are highly related to the resident maintenance policies. However, the interrelationship between service parts inventory management and maintenance policies is often overlooked, both in practice and in the academic literature, when it comes to optimizing maintenance and service parts inventory policies. Hence, there exists a great divide between maintenance and service parts inventory theory and practice. This research investigation specifically considers the aspect of joint maintenance and service part inventory optimization. We decompose the joint maintenance and service part inventory optimization problem into the supplier's problem and the customer's problem. Long-run expected cost functions for each problem, which include the most common maintenance cost parameters and service parts inventory cost parameters, are presented. Computational experiments are conducted for a single-supplier, two-echelon service parts supply chain configuration, varying the number of customers in the network. Lateral transshipments (LTs) of service parts between customers are not allowed. For this configuration, we optimize the cost functions using a traditional, or decoupled, approach, where each supply chain entity optimizes its cost individually, and a joint approach, where the cost objectives of both the supplier and customers are optimized simultaneously.

We show that the multiple-objective optimization approach outperforms the traditional decoupled optimization approach by generating lower system-wide supply chain network costs. The model formulations are extended by relaxing the assumption of no LTs between customers in the supply chain network. Similar to those for the no-LTs configuration, the results for the LTs configuration show that the multiobjective optimization outperforms the decoupled optimization in terms of system-wide cost. Hence, it is economically beneficial to jointly consider all parties within the supply network. Further, we compare the LTs and no-LTs configurations, and we show that using LTs improves the overall savings of the system. It is observed that the improvement is mostly derived from reduced shortage costs, since the equipment downtime is reduced due to the proximity of the supply. The models and results of this research have significant practical implications, as they can be used to assist decision-makers in determining when and where to pre-position parts inventories to maximize equipment availability. Furthermore, these models can assist in the preparation of the terms of long-term service agreements and maintenance contracts between original equipment manufacturers and their customers (i.e., equipment owners and/or operators), including determining the equitable allocation of all system-wide cost savings under the agreement.
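To illustrate the decoupled-versus-joint comparison in miniature: below, a single customer chooses a spare-parts base-stock level either to minimize only its own cost, or to minimize the system-wide cost including a supplier-side shortage cost. The Poisson demand and all cost parameters are assumptions for illustration, not the dissertation's cost functions.

```python
import math

lam = 4.0  # mean part demand during resupply lead time (Poisson, an assumption)

def pois(k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def expected_shortage(s):
    """E[(D - s)+] for Poisson demand D, truncated at a safely large value."""
    return sum((k - s) * pois(k) for k in range(s + 1, 60))

def customer_cost(s, hold=2.0, downtime=50.0):
    # Customer pays holding on stock plus a downtime penalty on shortages.
    return hold * s + downtime * expected_shortage(s)

def supplier_cost(s, expedite=30.0):
    # Supplier expedites every unit the customer is short (an assumption).
    return expedite * expected_shortage(s)

levels = range(0, 30)
s_decoupled = min(levels, key=customer_cost)                # customer alone
s_joint = min(levels, key=lambda s: customer_cost(s) + supplier_cost(s))

for name, s in [("decoupled", s_decoupled), ("joint", s_joint)]:
    total = customer_cost(s) + supplier_cost(s)
    print(f"{name}: base stock {s}, system-wide cost {total:.2f}")
```

The joint solution stocks at least as much and never yields a higher system-wide cost, which is the qualitative effect the abstract reports at supply-chain scale.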
Date Issued
2008
Identifier
CFE0002459, ucf:47723
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0002459

Title
DESIGN AND OPTIMIZATION OF NANOSTRUCTURED OPTICAL FILTERS.
Creator
Brown, Jeremiah, Moharam, Jim, University of Central Florida
Abstract / Description
Optical filters encompass a vast array of devices and structures for a wide variety of applications. Generally speaking, an optical filter is some structure that applies a designed amplitude and phase transform to an incident signal. Different classes of filters have vastly divergent characteristics, and one of the challenges in the optical design process is identifying the ideal filter for a given application and optimizing it to obtain a specific response. In particular, it is highly advantageous to obtain a filter that can be seamlessly integrated into an overall device package without requiring exotic fabrication steps, extremely sensitive alignments, or complicated conversions between optical and electrical signals. This dissertation explores three classes of nano-scale optical filters in an effort to obtain different types of dispersive response functions. First, dispersive waveguides are designed using a sub-wavelength periodic structure to transmit a single TE propagating mode with very high second-order dispersion. Next, an innovative approach for decoupling waveguide trajectories from Bragg gratings is outlined and used to obtain a uniform second-order dispersion response while minimizing fabrication limitations. Finally, high Q-factor microcavities are coupled into axisymmetric pillar structures that offer extremely high group delay over very narrow transmission bandwidths. While these three novel filters are quite diverse in their operation and target applications, they offer extremely compact structures given the magnitude of the dispersion or group delay they introduce to an incident signal. They are also designed and structured so as to be formed on an optical wafer scale using standard integrated circuit fabrication techniques. A number of frequency-domain numerical simulation methods are developed to fully characterize and model each of the different filters. The complete filter response, which includes the dispersion and delay characteristics and optical coupling, is used to evaluate each filter design concept. However, due to the complex nature of the structure geometries and electromagnetic interactions, an iterative optimization approach is required to improve the structure designs and obtain a suitable response. To this end, a Particle Swarm Optimization algorithm is developed and applied to the simulated filter responses to generate optimal filter designs.
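Since the chapter's optimizer is Particle Swarm Optimization, a bare-bones global-best PSO is sketched below on a stand-in objective. The coefficient values are common textbook choices; in the dissertation the merit function would be the simulated filter response rather than the sphere function used here.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Bare-bones global-best PSO with standard inertia/cognitive/social terms."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.72, 1.49, 1.49          # common textbook coefficients
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_f)]          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f             # update each particle's personal best
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy stand-in for a filter merit function: sphere objective, optimum at 0.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=4)
print(np.round(best_x, 4), best_f)
```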
Date Issued
2008
Identifier
CFE0002502, ucf:47678
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0002502

Title
OPTIMIZATION OF AN UNSTRUCTURED FINITE ELEMENT MESH FOR TIDE AND STORM SURGE MODELING APPLICATIONS IN THE WESTERN NORTH ATLANTIC OCEAN.
Creator
Kojima, Satoshi, Hagen, Scott, University of Central Florida
Abstract / Description
Recently, a highly resolved, finite element mesh was developed for the purpose of performing hydrodynamic calculations in the Western North Atlantic Tidal (WNAT) model domain. The WNAT model domain consists of the Gulf of Mexico, the Caribbean Sea, and the entire portion of the North Atlantic Ocean found west of the 60° W meridian. This high resolution mesh (333K) employs 332,582 computational nodes and 647,018 triangular elements to provide approximately 1.0 to 25 km node spacing. In the previous work, the 333K mesh was applied in a Localized Truncation Error Analysis (LTEA) to produce nodal density requirements for the WNAT model domain. The goal of the work herein is to use these LTEA-based element sizing guidelines in order to obtain a more optimal finite element mesh for the WNAT model domain, where optimal refers to minimizing nodes (to enhance computational efficiency) while maintaining model accuracy, through an automated procedure. Initially, three finite element meshes are constructed: 95K, 60K, and 53K. The 95K mesh consists of 95,062 computational nodes and 182,941 triangular elements providing about 0.5 to 120 km node spacing. The 60K mesh contains 60,487 computational nodes and 108,987 triangular elements. It has roughly 0.5 to 185 km node spacing. The 53K mesh includes 52,774 computational nodes and 98,365 triangular elements. This is a particularly coarse mesh, consisting of approximately 0.5 to 160 km node spacing. It is important to note that these three finite element meshes were produced automatically, with each employing the bathymetry and coastline (of various levels of resolution) of the 333K mesh, thereby enabling progress towards an optimal finite element mesh. Tidal simulations are then performed for the WNAT model domain by solving the shallow water equations in a time marching manner for the deviation from mean sea level and depth-integrated velocities at each computational node of the different finite element meshes. In order to verify the model output and compare the performance of the various finite element mesh applications, historical tidal constituent data from 150 tidal stations located within the WNAT model domain are collected and examined. These historical harmonic data are applied in two types of comparative analyses to evaluate the accuracy of the simulation results. First, qualitative comparisons are based on visual inspection, utilizing plots of resynthesized model output and historical tidal constituents. Second, quantitative comparisons are performed via a statistical analysis of the errors between model response and historical data. The latter method elicits average phase errors and goodness of average amplitude fits in terms of numerical values, thus providing a quantifiable way to present model error. The error analysis establishes the 53K finite element mesh as optimal when compared to the 333K, 95K, and 60K meshes. However, its required time step of less than ten seconds constrains its application. Therefore, the 53K mesh is manually edited to uphold accurate simulation results and to produce a more computationally efficient mesh, by increasing its time step, so that it can be applied to forecast tide and storm surge in the Western North Atlantic Ocean on a real-time basis.
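A small sketch of the two quantitative error measures described above: a wrap-aware average phase error and an amplitude goodness-of-fit, computed here for a handful of hypothetical M2-constituent values. The RMS-of-relative-errors amplitude metric is one plausible choice; the thesis's exact fit statistic may differ.

```python
import numpy as np

# Hypothetical M2-constituent comparison at four stations:
# amplitudes in meters, phases in degrees (made-up values).
amp_model = np.array([0.52, 0.31, 0.78, 0.44])
amp_obs   = np.array([0.50, 0.35, 0.74, 0.47])
ph_model  = np.array([112.0, 248.0, 5.0, 359.0])
ph_obs    = np.array([108.0, 251.0, 355.0, 2.0])

# Wrap-aware phase error: map differences into (-180, 180] before averaging,
# so 5 deg vs 355 deg counts as a 10 deg error, not 350 deg.
dphi = (ph_model - ph_obs + 180.0) % 360.0 - 180.0
avg_phase_error = np.mean(np.abs(dphi))

# Amplitude goodness of fit as an RMS of relative errors (one plausible metric).
amp_fit = np.sqrt(np.mean(((amp_model - amp_obs) / amp_obs) ** 2))

print(f"average |phase error|: {avg_phase_error:.2f} deg")
print(f"RMS relative amplitude error: {amp_fit:.3%}")
```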
Date Issued
2005
Identifier
CFE0000565, ucf:46421
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0000565

Title
'NO HOME HERE': FEMALE SPACE AND THE MODERNIST AESTHETIC IN NELLA LARSEN'S QUICKSAND AND SYLVIA PLATH'S THE BELL JAR.
Creator
Cherinka, Julianna N, Mathes, Carmen Faye, University of Central Florida
Abstract / Description
In her 1929 essay "A Room of One's Own," Virginia Woolf famously asserts that "a woman must have money and a room of her own if she is to write fiction" (4). This concept places an immediate importance on the role of the Modernist female subject as an artist and as an architect, constructing the places and spaces that she exists within. With Woolf's argument as its point of departure, this thesis investigates the theme of female space in two Modernist texts: Nella Larsen's Quicksand (1928) and Sylvia Plath's The Bell Jar (1963). The respective protagonists of Quicksand and The Bell Jar, Helga Crane and Esther Greenwood, each undertake journeys to obtain spaces that are purely their own. However, this thesis positions each space that Helga and Esther occupy as both male-constructed and male-dominated in order to address the inherent gendering of space and its impact on the development of feminine identities. This thesis focuses specifically on the roles of the mother, the muse, and the female mentor, tracking the spaces in which Helga and Esther begin to adhere to these roles. Expanding on Lauren Berlant's theory of cruel optimism, this thesis will use the term "cruel femininity" to support its intervening claim that the respective relationships that Helga and Esther each have with their own feminine identities begin to turn cruel as they internalize the male-dominated spatial structures surrounding them. Overall, this thesis argues that there is no space in existence where Helga and Esther can realize their full potential as human beings, as long as the spatial structures within their communities continue to be controlled by hegemonic, patriarchal beliefs.
Date Issued
2018
Identifier
CFH2000412, ucf:45708
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFH2000412

Title
MESHLESS HEMODYNAMICS MODELING AND EVOLUTIONARY SHAPE OPTIMIZATION OF BYPASS GRAFTS ANASTOMOSES.
Creator
El Zahab, Zaher, Kassab, Alain, University of Central Florida
Abstract / Description
Objectives: The main objective of the current dissertation is to establish a formal shape optimization procedure for a given bypass graft end-to-side distal anastomosis (ETSDA). The motivation behind this dissertation is that most of the previous ETSDA shape optimization research activities cited in the literature relied on direct optimization approaches that do not guarantee accurate optimization results. Three different ETSDA models are considered herein: the conventional, the Miller cuff, and the hood models. Materials and Methods: The ETSDA shape optimization is driven by three computational objects: a localized collocation meshless method (LCMM) solver, an automated geometry pre-processor, and a genetic-algorithm-based optimizer. The LCMM solver makes it very convenient to set up an autonomous optimization mechanism for the ETSDA models. The task of the automated pre-processor is to randomly distribute solution points in the ETSDA geometries. The task of the optimizer is to adjust the ETSDA geometries based on mitigation of abnormal hemodynamics parameters. Results: The results reported in this dissertation entail the stabilization and validation of the LCMM solver in addition to the shape optimization of the considered ETSDA models. The LCMM stabilization results consist of validating a custom-designed upwinding scheme on different one-dimensional and two-dimensional test cases. The LCMM validation is done for incompressible steady and unsteady flow applications in the ETSDA models. The ETSDA shape optimization includes single-objective optimization results in steady flow situations and bi-objective optimization results in pulsatile flow situations. Conclusions: The LCMM solver provides verifiably accurate resolution of hemodynamics and is demonstrated to be third-order accurate in a comparison to a benchmark analytical solution of the Navier-Stokes equations. The genetic-algorithm-based shape optimization approach proved to be very effective for the conventional and Miller cuff ETSDA models. The shape optimization results for those two models definitely suggest that the graft caliber should be maximized, whereas the anastomotic angle and the cuff height (in the Miller cuff model) should be chosen following a compromise between the wall shear stress spatial and temporal gradients. The shape optimization of the hood ETSDA model did not prove to be advantageous; however, it could be meaningful with the inclusion of the suture line cut length as an optimization parameter.
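To give a flavor of localized meshless collocation: the sketch below computes RBF-FD first-derivative weights on a small scattered 1D stencil using a multiquadric kernel, the basic building block of methods in this family. The kernel choice, shape parameter, and stencil are assumptions; the dissertation's LCMM formulation and upwinding scheme are considerably more involved.

```python
import numpy as np

def rbf_fd_weights(x0, stencil, c=0.5):
    """First-derivative RBF-FD weights at x0 from a local 1D stencil.

    Uses multiquadric phi(r) = sqrt(r^2 + c^2); the weights w solve
    A w = l with A_jk = phi(|x_j - x_k|) and l_k = d/dx phi(|x - x_k|) at x0.
    """
    X = np.asarray(stencil)
    A = np.sqrt((X[:, None] - X[None, :]) ** 2 + c ** 2)
    l = (x0 - X) / np.sqrt((x0 - X) ** 2 + c ** 2)
    return np.linalg.solve(A, l)

# Scattered local stencil around x0 (irregular spacing, as in a meshless cloud).
stencil = np.array([0.00, 0.07, 0.15, 0.26, 0.40])
x0 = 0.15
w = rbf_fd_weights(x0, stencil)

u = np.sin(stencil)              # sample field values on the stencil
print("approx u'(x0):", w @ u)   # compare with cos(0.15), about 0.9888
```

Repeating this weight computation node by node, over each node's nearest neighbors, is what makes the method "localized": no global mesh is ever built.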
Date Issued
2008
Identifier
CFE0002165, ucf:47927
Format
Document (PDF)
PURL
http://purl.flvc.org/ucf/fd/CFE0002165

Title
Development and Application of an Optimization Approach for Cost-Effective Deployment of Advanced Wrong-Way Driving Countermeasures.
Creator
Sandt, Adrian, Al-Deek, Haitham, Eluru, Naveen, Hasan, Samiul, Zheng, Qipeng, University of Central Florida
Abstract / Description
Wrong-way driving (WWD) is a dangerous behavior, especially on high-speed divided highways. The nature of WWD crashes makes it difficult for agencies to combat them effectively. Advanced WWD countermeasures equipped with flashing lights, detection devices, and cameras can significantly reduce WWD. However, these countermeasures' high costs mean that agencies often cannot deploy them at all exit ramps. To help agencies identify the most cost-effective deployment locations for advanced WWD...
Show moreWrong-way driving (WWD) is a dangerous behavior, especially on high-speed divided highways. The nature of WWD crashes makes it difficult for agencies to combat them effectively. Advanced WWD countermeasures equipped with flashing lights, detection devices, and cameras can significantly reduce WWD. However, these countermeasures' high costs mean that agencies often cannot deploy them at all exit ramps. To help agencies identify the most cost-effective deployment locations for advanced WWD countermeasures, an innovative WWD countermeasure optimization approach was developed. This approach consists of a WWD hotspots model and a WWD countermeasures optimization algorithm. The WWD hotspots model uses non-crash WWD events, interchange designs, and traffic volumes to predict the number of WWD crashes on multi-exit roadway segments and identify hotspot segments with high WWD crash risk (WWCR). Then, the optimization algorithm uses these WWCR values to identify the optimal exits for advanced WWD countermeasure deployment based on available resources and other applicable constraints. This approach was applied to the Central Florida Expressway Authority (CFX) and Florida's Turnpike Enterprise (FTE) toll road networks. In both applications, the optimization algorithm provided significant WWCR reduction while meeting investment and other constraints and better allocated the agencies' resources compared to only deploying advanced WWD countermeasures in WWD hotspots. The optimization algorithm was also used to identify mainline sections on the CFX network with high WWCR. Additionally, the optimization algorithm was used to evaluate existing Rectangular Flashing Beacon (RFB) and Light-Emitting Diode (LED) advanced WWD countermeasures on the CFX (RFBs) and FTE (RFBs and LEDs) networks. These evaluations showed that the crash reduction and injury reduction benefits of these advanced WWD countermeasures have exceeded their costs since these countermeasures have been deployed. By using this WWD countermeasures optimization approach, agencies throughout the United States could proactively and cost-effectively deploy advanced WWD countermeasures to reduce WWD.
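As a toy illustration of the budget-constrained deployment step described above, the sketch below selects exit ramps to maximize total estimated WWCR reduction under a budget cap. The costs, WWCR values, and the exhaustive-search formulation are all hypothetical stand-ins; the dissertation's optimization algorithm and constraint set are richer than this.

    # Toy budget-constrained selection of exits, cast as a small 0/1 knapsack.
    # All numbers are hypothetical, for illustration only.
    from itertools import combinations

    exits = {            # exit id: (cost in $1000s, estimated WWCR reduction)
        "E1": (120, 4.2),
        "E2": (90, 3.1),
        "E3": (150, 5.0),
        "E4": (60, 1.8),
        "E5": (110, 3.9),
    }
    BUDGET = 300

    def best_deployment(exits, budget):
        """Exhaustive search; fine for a handful of exits, illustrative only."""
        best, best_value = (), 0.0
        ids = list(exits)
        for r in range(1, len(ids) + 1):
            for combo in combinations(ids, r):
                cost = sum(exits[e][0] for e in combo)
                value = sum(exits[e][1] for e in combo)
                if cost <= budget and value > best_value:
                    best, best_value = combo, value
        return best, best_value

    print(best_deployment(exits, BUDGET))   # ('E1', 'E3') with 9.2 here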
-
Date Issued
-
2018
-
Identifier
-
CFE0007364, ucf:52093
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007364
-
-
Title
-
Multi-Objective Optimization for Construction Equipment Fleet Selection and Management In Highway Construction Projects Based on Time, Cost, and Quality Objectives.
-
Creator
-
Shehadeh, Ali, Tatari, Omer, Al-Deek, Haitham, Abou-Senna, Hatem, Flitsiyan, Elena, University of Central Florida
-
Abstract / Description
-
The highway construction sector accounts for approximately 11% of the total construction industry in the US. Construction equipment is one of the primary reasons the industry has reached such a significant level, as it is an essential part of the highway construction process. This research presents a multi-objective optimization mathematical model that quantifies and optimizes the key parameters for excavator, truck, and motor-grader equipment to minimize the time and cost objective functions, while maintaining the required level of quality for the targeted construction activity. The mathematical functions for the primary objectives were formulated, and a genetic-algorithm-based multi-objective optimization was then performed in MATLAB to generate the time-cost Pareto trade-offs for all possible equipment combinations. The model's capability to generate optimal time-cost trade-offs, based on optimized equipment number, capacity, and speed, and thus to adapt to the complex and dynamic nature of highway construction projects, is demonstrated using a highway construction case study. The developed model is a decision support tool for the construction process, able to accommodate necessary changes in time or cost requirements while taking environmental, safety, and quality aspects into consideration. The flexibility and comprehensiveness of the proposed model, along with its programmable nature, make it a powerful tool for managing construction equipment, helping to save time and money within the optimal quality margins. This environmentally friendly decision-support tool also provided optimal solutions that help reduce CO2 emissions, mitigating the ripple effects of the targeted highway construction activities on global warming. The generated optimal solutions offered considerable time and cost savings.
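The sketch below illustrates, under assumed production rates and unit costs, how time-cost Pareto trade-offs of the kind described above can be enumerated for a small fleet: each truck count yields a (time, cost) point and dominated configurations are filtered out. The rates, costs, and the 10,000 m^3 earthwork quantity are hypothetical; the dissertation's MATLAB model covers excavators, trucks, and motor graders with quality constraints, none of which are reproduced here.

    # Hypothetical fleet enumeration: more trucks finish faster but add
    # mobilization cost, and the excavator caps the achievable haul rate.
    def pareto_front(points):
        """Keep configurations not dominated in both time and cost."""
        return [p for p in points
                if not any(q["time"] <= p["time"] and q["cost"] <= p["cost"]
                           and q is not p for q in points)]

    configs = []
    for trucks in range(2, 9):
        rate = min(trucks * 25.0, 150.0)      # m^3/hr, capped by the excavator
        time_hr = 10_000.0 / rate             # 10,000 m^3 of earthwork (assumed)
        cost = time_hr * (trucks * 75.0 + 160.0) + trucks * 4_000.0  # op + mobilization
        configs.append({"trucks": trucks, "time": round(time_hr, 1),
                        "cost": round(cost)})

    for c in pareto_front(configs):
        print(c)   # non-dominated fleet sizes; here 4, 5, and 6 trucks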
-
Date Issued
-
2019
-
Identifier
-
CFE0007863, ucf:52800
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007863
-
-
Title
-
STOCHASTIC OPTIMIZATION AND APPLICATIONS WITH ENDOGENOUS UNCERTAINTIES VIA DISCRETE CHOICE MODELS.
-
Creator
-
Chen, Mengnan, Zheng, Qipeng, Boginski, Vladimir, Vela, Adan, Yayla Kullu, Muge, University of Central Florida
-
Abstract / Description
-
Stochastic optimization solves problems of minimizing or maximizing an objective function in the presence of randomness in the optimization process. In this dissertation, various stochastic optimization problems from the areas of manufacturing, health care, and information cascades are investigated in network systems. These stochastic optimization problems aim to plan the use of existing resources to improve production efficiency, customer satisfaction, and information influence within given limitations. Since the strategies are made for future planning, there are environmental uncertainties in the network systems, and the environment may itself change in response to the decision maker's actions. To handle this decision-dependent situation, a discrete choice model is applied to estimate the dynamic environment within the stochastic programming model. In the manufacturing project, production planning of lot allocation is performed to maximize the expected output within a limited time horizon. In the health care project, physicians are allocated to different local clinics to maximize patient utilization. In the information cascade project, seed selection of the source users helps the information holder diffuse the message to target users under the independent cascade model so as to maximize influence. The computational complexities of the three projects mentioned above grow exponentially with the network size. To solve the stochastic optimization problems of large-scale networks within a reasonable time, several problem-specific algorithms are designed for each project. In the manufacturing project, the sample average approximation method is applied to reduce the scenario size. In the health care project, both guided local search with gradient ascent and large neighborhood search with Tabu search are developed to approach the optimal solution. In the information cascade project, a myopic policy is used to decompose the stochastic program by discrete time, and a Markov decision process is implemented for policy evaluation and updating.
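As a small illustration of the sample average approximation idea mentioned above, the sketch below estimates the expected good output of a lot-allocation decision by averaging over sampled yield scenarios, then enumerates feasible allocations of a small instance. The yield model, capacities, and numbers are hypothetical; the dissertation's formulation additionally couples the uncertainty to the decision through a discrete choice model, which this sketch omits.

    # Sample-average-approximation sketch for lot allocation (toy instance).
    import random

    random.seed(1)

    YIELD_MEAN = {"A": 0.92, "B": 0.85, "C": 0.78}   # mean good-unit fraction (assumed)
    CAPACITY = {"A": 40, "B": 60, "C": 80}           # lots per horizon (assumed)

    def sampled_output(allocation, n_scenarios=200):
        """Sample-average estimate of expected good output for an allocation."""
        total = 0.0
        for _ in range(n_scenarios):
            for line, lots in allocation.items():
                # Yield fluctuates around its mean in each sampled scenario.
                y = min(1.0, max(0.0, random.gauss(YIELD_MEAN[line], 0.05)))
                total += lots * y
        return total / n_scenarios

    def best_allocation(total_lots=100):
        """Enumerate feasible splits of the lots across lines (small instance)."""
        best, best_val = None, -1.0
        for a in range(min(total_lots, CAPACITY["A"]) + 1):
            for b in range(min(total_lots - a, CAPACITY["B"]) + 1):
                c = total_lots - a - b
                if c <= CAPACITY["C"]:
                    val = sampled_output({"A": a, "B": b, "C": c})
                    if val > best_val:
                        best, best_val = (a, b, c), val
        return best, best_val

    print(best_allocation())   # tends toward filling the higher-yield lines first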
-
Date Issued
-
2019
-
Identifier
-
CFE0007792, ucf:52347
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007792
-
-
Title
-
Vision-Based Testbeds for Control System Applications.
-
Creator
-
Sivilli, Robert, Xu, Yunjun, Gou, Jihua, Cho, Hyoung, Pham, Khanh, University of Central Florida
-
Abstract / Description
-
In the field of control systems, testbeds are a pivotal step in the validation and improvement of new algorithms for different applications. They provide a safe, controlled environment, typically with a significantly lower cost of failure than the final application. Vision systems provide nonintrusive methods of measurement that can be easily implemented for various setups and applications. This work presents methods for modeling, removing distortion from, calibrating, and rectifying single- and two-camera systems, as well as two very different applications of vision-based control system testbeds: deflection control of shape memory polymers and trajectory planning for mobile robots. First, a testbed for the modeling and control of shape memory polymers (SMP) is designed. Red-green-blue (RGB) thresholding is used to assist in the webcam-based 3D reconstruction of points of interest. A PID-based controller is designed and shown to work with SMP samples, and state-space models are identified from step-input responses. These models are used to develop a linear quadratic regulator that is shown to work in simulation. A simple-to-use graphical interface is also designed for fast testing of a series of samples. Second, a robot testbed is designed to test new trajectory planning algorithms. A template-based predictive search algorithm is investigated to process the images obtained through a low-cost webcam vision system, which is used to monitor the testbed environment. A user-friendly graphical interface is also developed to automate the functionalities of the webcam, robots, and optimizations. The testbeds are used to demonstrate a wavefront-enhanced, B-spline-augmented virtual motion camouflage algorithm for single or multiple robots navigating an obstacle-dense and changing environment, while considering inter-vehicle conflicts, obstacle avoidance, nonlinear dynamics, and different constraints. In addition, it is expected that this testbed can be used to test other vehicle motion planning and control algorithms.
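The sketch below illustrates the kind of RGB thresholding mentioned above: pixels within assumed red thresholds are masked and their centroid is taken as the marker location in a synthetic frame. The thresholds and test image are made up; the camera modeling, distortion removal, calibration, and two-view 3D reconstruction steps are not reproduced here.

    # RGB-threshold centroid detection on a synthetic frame (illustrative
    # thresholds; a real webcam frame would need calibration first).
    import numpy as np

    def red_marker_centroid(frame, r_min=150, g_max=80, b_max=80):
        """frame: H x W x 3 uint8 RGB array. Returns (row, col) or None."""
        r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
        mask = (r >= r_min) & (g <= g_max) & (b <= b_max)
        if not mask.any():
            return None
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    # Synthetic 100x100 frame with a red blob centered near (30, 70).
    frame = np.zeros((100, 100, 3), dtype=np.uint8)
    frame[25:36, 65:76] = (200, 30, 30)
    print(red_marker_centroid(frame))   # approximately (30.0, 70.0)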
-
Date Issued
-
2012
-
Identifier
-
CFE0004601, ucf:49187
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004601
-
-
Title
-
Optimal Attitude Control Management for a Cubesat.
-
Creator
-
Develle, Michael, Xu, Yunjun, Lin, Kuo-Chi, Chew, Phyekeng, University of Central Florida
-
Abstract / Description
-
CubeSats have become popular among universities, research organizations, and government agencies due to their low cost, small size, and light weight. Their standardized configurations further reduce the development time and ensure more frequent launch opportunities. Early cubesat missions focused on hardware validation and simple communication missions, with little requirement for pointing accuracy, and most used magnetic torque rods or coils for attitude stabilization. However, the intrinsic problems associated with magnetic torque systems, such as the lack of three-axis control and low pointing accuracy, make them unsuitable for more advanced missions such as detailed imaging and on-orbit inspection. Three-axis control in a cubesat can be achieved by combining magnetic torque coils with other devices such as thrusters, but the lifetime is then limited by the fuel source onboard. To maximize the mission lifetime, a fast attitude control management algorithm that can optimally manage the usage of the magnetic and thruster torques is desirable. Therefore, a recently developed method, the B-spline-augmented virtual motion camouflage, is presented in this defense to solve the problem. This approach provides results very close to those obtained through other popular nonlinear constrained optimal control methods, with a significantly reduced computational time. Simulation results are presented to validate the capabilities of the method in this application.
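As a physics-level sketch of the torque-management problem described above: a magnetorquer produces tau = m x B, so only the component of a commanded torque perpendicular to the local magnetic field B is magnetically reachable, and the component along B must come from thrusters. Splitting the command this way minimizes thruster (fuel) usage. The field value and commanded torque below are illustrative, and the dissertation's B-spline-augmented virtual motion camouflage optimization is not reproduced here.

    # Split a commanded torque between magnetorquer and thrusters.
    import numpy as np

    def allocate_torque(tau_cmd, B):
        """Return (dipole moment, thruster torque) realizing tau_cmd."""
        b_hat = B / np.linalg.norm(B)
        tau_parallel = np.dot(tau_cmd, b_hat) * b_hat   # unreachable magnetically
        tau_magnetic = tau_cmd - tau_parallel           # perpendicular to B
        # Dipole moment realizing tau_magnetic: m = (B x tau_magnetic) / |B|^2,
        # since ((B x tau) x B) / |B|^2 recovers the perpendicular component.
        m = np.cross(B, tau_magnetic) / np.dot(B, B)
        return m, tau_parallel                          # thrusters cover the rest

    tau_cmd = np.array([1e-4, -5e-5, 2e-5])             # N*m (illustrative)
    B = np.array([2e-5, 0.0, 3e-5])                     # T (LEO-order field)
    m, tau_thruster = allocate_torque(tau_cmd, B)
    print("dipole moment:", m, "thruster torque:", tau_thruster)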
-
Date Issued
-
2011
-
Identifier
-
CFE0004099, ucf:49102
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004099