Current Search: stochastic
- Title
- USING MODELING AND SIMULATION TO EVALUATE DISEASE CONTROL MEASURES.
- Creator
-
Atkins, Tracy, Clarke, Thomas, University of Central Florida
- Abstract / Description
-
This dissertation introduced several issues concerning the analysis of diseases by showing how modeling and simulation could be used to assist in creating health policy by estimating the effects of such policies. The first question posed was how education, vaccination, and a combination of these two programs would affect a possible outbreak of meningitis on a college campus. After creating a model representative of the transmission dynamics of meningitis and establishing parameter values characteristic of the University of Central Florida main campus, the results of a deterministic model were presented in several forms. The result of this model was that the combination of education and vaccination would eliminate the possibility of an epidemic on our campus. Next, we used simulation to evaluate how quarantine and treatment would affect an outbreak of influenza on the same population. A mathematical model was created specific to influenza on the UCF campus. Numerical results from this model were then presented in tabular and graphical form. The results comparing the simulations for quarantine and treatment show the best course of action would be to enact a quarantine policy on the campus, thus reducing the maximum number of infected while increasing the time to reach this peak. Finally, we addressed the issue of performing the analysis stochastically versus deterministically. Additional models were created with the progression of the disease occurring by chance. Statistical analysis was done on the mean of 100 stochastic simulation runs, comparing that value to the one deterministic outcome. The results for this analysis were inconclusive, as the results for meningitis were comparable while those for influenza appeared to be different.
- Date Issued
- 2010
- Identifier
- CFE0003232, ucf:48535
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003232
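The abstract above compares a deterministic epidemic model against the mean of 100 stochastic simulation runs. The sketch below only illustrates that kind of comparison for a generic SIR-type model with made-up parameters; it is not the dissertation's meningitis or influenza model.

```python
# Illustrative sketch (not the dissertation's exact model): compare a deterministic
# SIR-type epidemic with the mean peak of repeated stochastic (chain-binomial) runs,
# mirroring the 100-run comparison described in the abstract. All parameters are
# hypothetical placeholders.
import numpy as np

N, beta, gamma = 10000, 0.30, 0.10   # population, transmission rate, recovery rate (assumed)
days, runs = 200, 100

def deterministic():
    S, I, R = N - 1.0, 1.0, 0.0
    peak = 0.0
    for _ in range(days):
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak = max(peak, I)
    return peak

def stochastic(rng):
    S, I, R = N - 1, 1, 0
    peak = 0
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * I / N)            # per-susceptible infection probability
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak = max(peak, I)
    return peak

rng = np.random.default_rng(0)
stoch_peaks = [stochastic(rng) for _ in range(runs)]
print(f"deterministic peak infected: {deterministic():.0f}")
print(f"mean stochastic peak over {runs} runs: {np.mean(stoch_peaks):.0f}")
```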
- Title
- Optimization Approaches for Electricity Generation Expansion Planning Under Uncertainty.
- Creator
-
Zhan, Yiduo, Zheng, Qipeng, Vela, Adan, Garibay, Ivan, Sun, Wei, University of Central Florida
- Abstract / Description
-
In this dissertation, we study the long-term electricity infrastructure investment planning problems in the electrical power system. These long-term capacity expansion planning problems aim at making the most effective and efficient investment decisions on both thermal and wind power generation units. One of our research focuses is uncertainty modeling in these long-term decision-making problems in power systems, because power systems' infrastructures require a large amount of investment, need to stay in operation for a long time, and must accommodate many different scenarios in the future. The uncertainties we address in this dissertation mainly include demands, electricity prices, and the investment and maintenance costs of power generation units. To address these future uncertainties in the decision-making process, this dissertation adopts two different optimization approaches: decision-dependent stochastic programming and adaptive robust optimization. In the decision-dependent stochastic programming approach, we consider the electricity prices and the generation units' investment and maintenance costs to be endogenous uncertainties, and then design probability distribution functions of decision variables and input parameters based on well-established econometric theories, such as discrete-choice theory and the economy-of-scale mechanism. In the adaptive robust optimization approach, we focus on finding multistage adaptive robust solutions using affine policies while considering uncertain intervals of future demands. This dissertation mainly includes three research projects. The study of each project consists of two main parts: the formulation of its mathematical model and the development of solution algorithms for the model. The first problem concerns a large-scale investment problem on both thermal and wind power generation from an integrated angle without modeling all operational details. In this problem, we take a multistage decision-dependent stochastic programming approach while assuming uncertain electricity prices. We use a quasi-exact solution approach to solve this multistage stochastic nonlinear program. Numerical results show both the computational efficiency of the solution approach and the benefits of using our decision-dependent model over traditional stochastic programming models. The second problem concerns long-term investment planning with detailed models of real-time operations. We also take a multistage decision-dependent stochastic programming approach to address endogenous uncertainties such as the generation units' investment and maintenance costs. However, the detailed modeling of operations makes the problem a bilevel optimization problem. We then transform it to a Mathematical Program with Equilibrium Constraints (MPEC). We design an efficient algorithm based on Dantzig-Wolfe decomposition to solve this multistage stochastic MPEC problem. The last problem concerns a multistage adaptive investment planning problem while considering uncertain future demand at various locations. To solve this multi-level optimization problem, we take advantage of affine policies to transform it to a single-level optimization problem. Our numerical examples show the benefits of using this multistage adaptive robust planning model over both traditional stochastic programming and single-level robust optimization approaches.
Based on numerical studies in the three projects, we conclude that our approaches provide effective and efficient modeling and computational tools for advanced power systems' expansion planning.
- Date Issued
- 2016
- Identifier
- CFE0006676, ucf:51248
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006676
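The record above combines decision-dependent stochastic programming with adaptive robust optimization. As a schematic only (the dissertation's actual formulations are multistage and far more detailed), a two-stage capacity-expansion model with decision-dependent scenario probabilities can be written as:

```latex
\min_{x \in X} \; c^{\top}x \;+\; \sum_{s \in S} p_{s}(x)\, Q(x,\xi_{s}),
\qquad
Q(x,\xi_{s}) \;=\; \min_{y_{s} \ge 0} \left\{\, q_{s}^{\top} y_{s} \;:\; W y_{s} \ge h_{s} - T_{s} x \,\right\}
```

The distinguishing feature is that the scenario probabilities p_s(x) depend on the first-stage investment x, for instance through a discrete-choice model, rather than being fixed in advance as in exogenous stochastic programs.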
- Title
- Stochastic-Based Computing with Emerging Spin-Based Device Technologies.
- Creator
-
Bai, Yu, Lin, Mingjie, DeMara, Ronald, Wang, Jun, Jin, Yier, Dong, Yajie, University of Central Florida
- Abstract / Description
-
In this dissertation, analog and emerging device physics is explored to provide a technology platform for designing new bio-inspired systems and novel architectures. As CMOS approaches its physical limits in feature size at the nanoscale, device characteristics will pose severe challenges to constructing robust digital circuitry. Unlike transistor defects due to fabrication imperfection, quantum-related switching uncertainties will seriously increase susceptibility to noise, thus rendering traditional thinking and logic design techniques inadequate. Therefore, the trend of current research is to create a non-Boolean high-level computational model and map it directly to the unique operational properties of new, power-efficient, nanoscale devices. The focus of this research is two-fold: 1) Investigation of the physical hysteresis switching behaviors of the domain wall device. We analyze the behavior of the domain wall device and identify its hysteresis behavior over a range of currents. We proposed the Domain-Wall-Motion-based (DWM) NCL circuit that achieves approximately 30x and 8x improvements in energy efficiency and chip layout area, respectively, over its equivalent CMOS design, while maintaining similar delay performance for a one-bit full adder. 2) Investigation of the physical stochastic switching behaviors of the Magnetic Tunnel Junction (MTJ) device. By analyzing the stochastic switching behaviors of the MTJ, we proposed an innovative stochastic-based architecture for implementing an artificial neural network (S-ANN) with both magnetic tunnel junction (MTJ) and domain wall motion (DWM) devices, which enables efficient computing at an ultra-low voltage. For a well-known pattern recognition task, our mixed-model HSPICE simulation results have shown that a 34-neuron S-ANN implementation, when compared with its deterministic-based ANN counterparts implemented with digital and analog CMOS circuits, achieves more than 1.5 to 2 orders of magnitude lower energy consumption and 2 to 2.5 orders of magnitude less hidden-layer chip area.
- Date Issued
- 2016
- Identifier
- CFE0006680, ucf:51921
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006680
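Purely as an illustration of the stochastic-neuron idea behind an S-ANN, the sketch below samples a binary output whose firing probability follows an assumed sigmoid of the input drive, which is the role a device's probabilistic switching would play in hardware. The sigmoid form and all constants are assumptions, not the dissertation's MTJ device model.

```python
# Illustrative sketch only: a stochastic binary neuron of the kind an S-ANN uses,
# where a device's probabilistic switching plays the role of the random activation.
import numpy as np

def stochastic_neuron(weights, inputs, rng, samples=64):
    drive = np.dot(weights, inputs)              # analogous to the write current/voltage
    p_switch = 1.0 / (1.0 + np.exp(-drive))      # assumed switching probability
    spikes = rng.random(samples) < p_switch      # repeated probabilistic "switch" events
    return spikes.mean()                         # time-averaged output approximates p_switch

rng = np.random.default_rng(1)
print(stochastic_neuron(np.array([0.8, -0.4]), np.array([1.0, 0.5]), rng))
```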
- Title
- STOCHASTIC OPTIMIZATION AND APPLICATIONS WITH ENDOGENOUS UNCERTAINTIES VIA DISCRETE CHOICE MODELS.
- Creator
-
Chen, Mengnan, Zheng, Qipeng, Boginski, Vladimir, Vela, Adan, Yayla Kullu, Muge, University of Central Florida
- Abstract / Description
-
Stochastic optimization is an optimization method that solves stochastic problems by minimizing or maximizing an objective function when there is randomness in the optimization process. In this dissertation, various stochastic optimization problems from the areas of manufacturing, health care, and information cascades are investigated in network systems. These stochastic optimization problems aim to make plans for using existing resources to improve production efficiency, customer satisfaction, and information influence within given limitations. Since the strategies are made for future planning, there are environmental uncertainties in the network systems. Sometimes, the environment may change due to the actions of the decision maker. To handle this decision-dependent situation, the discrete choice model is applied to estimate the dynamic environment in the stochastic programming model. In the manufacturing project, production planning of lot allocation is performed to maximize the expected output within a limited time horizon. In the health care project, physicians are allocated to different local clinics to maximize patient utilization. In the information cascade project, seed selection of the source users helps the information holder diffuse the message to target users using the independent cascade model to achieve influence maximization. The computational complexities of the three projects mentioned above grow exponentially with the network size. To solve the stochastic optimization problems of large-scale networks within a reasonable time, several problem-specific algorithms are designed for each project. In the manufacturing project, the sample average approximation method is applied to reduce the scenario size. In the health care project, both guided local search with gradient ascent and large neighborhood search with Tabu search are developed to approach the optimal solution. In the information cascade project, a myopic policy is used to separate the stochastic program by discrete time, and a Markov decision process is implemented in policy evaluation and updating.
- Date Issued
- 2019
- Identifier
- CFE0007792, ucf:52347
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007792
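The abstract above uses discrete choice models to capture decision-dependent uncertainty. For reference, the standard multinomial logit choice probability (assumed here to be the relevant form, since the abstract does not specify the exact model) is:

```latex
P(i \mid x) \;=\; \frac{\exp\bigl(v_{i}(x)\bigr)}{\sum_{j \in J} \exp\bigl(v_{j}(x)\bigr)}
```

Here the decision variables x shift the systematic utilities v_j, so the probability distribution faced by the stochastic program changes with the decisions themselves.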
- Title
- STOCHASTIC RESOURCE CONSTRAINED PROJECT SCHEDULING WITH STOCHASTIC TASK INSERTION PROBLEMS.
- Creator
-
Archer, Sandra, Armacost, Robert, University of Central Florida
- Abstract / Description
-
The area of focus for this research is the Stochastic Resource Constrained Project Scheduling Problem (SRCPSP) with Stochastic Task Insertion (STI). The STI problem is a specific form of the SRCPSP, which may be considered to be a cross between two types of problems in the general form: the Stochastic Project Scheduling Problem, and the Resource Constrained Project Scheduling Problem. The stochastic nature of this problem is in the occurrence/non-occurrence of tasks with deterministic duration. Researchers Selim (2002) and Grey (2007) laid the groundwork for the research on this problem. Selim (2002) developed a set of robustness metrics and used these to evaluate two initial baseline (predictive) scheduling techniques, optimistic (0% buffer) and pessimistic (100% buffer), where none or all of the stochastic tasks were scheduled, respectively. Grey (2007) expanded the research by developing a new partial buffering strategy for the initial baseline predictive schedule for this problem and found the partial buffering strategy to be superior to Selim's "extreme" buffering approach. The current research continues this work by focusing on resource aspects of the problem, new buffering approaches, and a new rescheduling method. If resource usage is important to project managers, then a set of metrics that describes changes to the resource flow would be important to measure between the initial baseline predictive schedule and the final "as-run" schedule. Two new sets of resource metrics were constructed regarding resource utilization and resource flow. Using these new metrics, as well as the Selim/Grey metrics, a new buffering approach was developed that used resource information to size the buffers. The resource-sized buffers did not show significant improvement over Grey's 50% buffer used as a benchmark. The new resource metrics were used to validate that the 50% buffering strategy is superior to the 0% or 100% buffering by Selim. Recognizing that partial buffers appear to be the most promising initial baseline development approach for STI problems, and understanding that experienced project managers may be able to predict stochastic probabilities based on prior projects, the next phase of the research developed a new set of buffering strategies where buffers are inserted that are proportional to the probability of occurrence. The results of this proportional buffering strategy were very positive, with the majority of the metrics (both robustness and resource), except for stability metrics, improved by using the proportional buffer. Finally, it was recognized that all research thus far for the SRCPSP with STI focused solely on the development of predictive schedules. Therefore, the final phase of this research developed a new reactive strategy that tested three different rescheduling points during schedule eventuation when a complete rescheduling of the latter portion of the schedule would occur. The results of this new reactive technique indicate that rescheduling improves the schedule performance in only a few metrics under very specific network characteristics (those networks with the least restrictive parameters). This research was conducted with extensive use of Base SAS v9.2 combined with SAS/OR procedures to solve project networks, solve resource flow problems, and implement reactive scheduling heuristics.
Additionally, Base SAS code was paired with Visual Basic for Applications in Excel 2003 to implement an automated Gantt chart generator that provided visual inspection for validation of the repair heuristics. The results of this research when combined with the results of Selim and Grey provide strong guidance for project managers regarding how to develop baseline predictive schedules and how to reschedule the project as stochastic tasks (e.g. unplanned work) do or do not occur. Specifically, the results and recommendations are provided in a summary tabular format that describes the recommended initial baseline development approach if a project manager has a good idea of the level and location of the stochasticity for the network, highlights two cases where rescheduling during schedule eventuation may be beneficial, and shows when buffering proportional to the probability of occurrence is recommended, or not recommended, or the cases where the evidence is inconclusive.
- Date Issued
- 2008
- Identifier
- CFE0002491, ucf:47673
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002491
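The abstract above reports that buffers sized in proportion to each stochastic task's probability of occurrence improved most robustness and resource metrics. Below is a minimal sketch of that sizing rule with toy task data; the dissertation's actual experiments used SAS/OR on full project networks.

```python
# Toy illustration of proportional buffering for stochastic task insertion:
# each potentially occurring task contributes a buffer equal to its occurrence
# probability times its deterministic duration. Task data are made up.
stochastic_tasks = [
    {"name": "rework_A", "duration": 10.0, "p_occur": 0.3},
    {"name": "audit_B",  "duration": 6.0,  "p_occur": 0.8},
    {"name": "fix_C",    "duration": 4.0,  "p_occur": 0.1},
]

def proportional_buffer(task):
    return task["p_occur"] * task["duration"]

for t in stochastic_tasks:
    print(f'{t["name"]}: buffer {proportional_buffer(t):.1f} time units')
print("total schedule buffer:", sum(proportional_buffer(t) for t in stochastic_tasks))
```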
- Title
- UTILIZING A REAL LIFE DATA WAREHOUSE TO DEVELOP FREEWAY TRAVEL TIME RELIABILITY STOCHASTIC MODELS.
- Creator
-
Emam, Emam, Al-Deek, Haitham, University of Central Florida
- Abstract / Description
-
During the 20th century, transportation programs were focused on the development of the basic infrastructure for transportation networks. In the 21st century, the focus has shifted to management and operations of these networks. The transportation network reliability measure plays an important role in judging the performance of the transportation system and in evaluating the impact of new Intelligent Transportation Systems (ITS) deployments. The measurement of transportation network travel time reliability is imperative for providing travelers with accurate route guidance information. It can be applied to generate the shortest path (or alternative paths) connecting origins and destinations, especially under conditions of varying demands and limited capacities. The measurement of transportation network reliability is a complex issue because it involves both the infrastructure and the behavioral responses of the users. Also, this subject is challenging because there is no single agreed-upon reliability measure. This dissertation developed a new method for estimating the effect of travel demand variation and link capacity degradation on the reliability of a roadway network. The method is applied to a hypothetical roadway network, and the results show that both travel time reliability and capacity reliability are consistent measures for the reliability of the road network, but each may have a different use. The capacity reliability measure is of special interest to transportation network planners and engineers because it addresses the issue of whether the available network capacity relative to the present or forecast demand is sufficient, whereas travel time reliability is especially interesting for network users. The new travel time reliability method is sensitive to the users' perspective since it reflects that an increase in segment travel time should always result in less travel time reliability. It is also an indicator of the operational consistency of a facility over an extended period of time. This initial theoretical effort and basic research was followed by applying the new method to the I-4 corridor in Orlando, Florida. This dissertation utilized a real-life transportation data warehouse to estimate the travel time reliability of the I-4 corridor. Four different travel time stochastic models were tested: Weibull, Exponential, Lognormal, and Normal. Lognormal was the best-fit model. Unlike mechanical equipment, no freeway segment can realistically be traversed in zero seconds, no matter how fast the vehicles are. So, an adjustment of the location parameter of the developed best-fit statistical model (lognormal) was needed to accurately estimate travel time reliability. The adjusted model can be used to compute and predict the travel time reliability of freeway corridors and report this information in real time to the public through traffic management centers. Compared to the existing Florida Method and California Buffer Time Method, the new reliability method showed higher sensitivity to geographical locations, which reflects the level of congestion and bottlenecks. The major advantages of this new method to practitioners and researchers over the existing methods are its ability to estimate travel time reliability as a function of departure time, and that it treats travel time as a continuous variable that captures the variability experienced by individual travelers over an extended period of time.
As such, the new method developed in this dissertation could be utilized in transportation planning and freeway operations for estimating the important travel time reliability measure of performance. Then, the impacts of segment length on travel time reliability calculations were investigated utilizing the wealth of data available in the I-4 data warehouse. The developed travel time reliability models showed significant evidence of a relationship between segment length and the accuracy of the results. The longer the segment, the less accurate were the travel time reliability estimates. Accordingly, long segments (e.g., 25 miles) are more appropriate for planning purposes as a macroscopic performance measure of the freeway corridor. Short segments (e.g., 5 miles) are more appropriate for the evaluation of freeway operations as a microscopic performance measure. Further, this dissertation has explored the impact of relaxing an important assumption in reliability analysis: link independence. In real life, assuming that link failures on a road network are statistically independent is dubious. The failure of a link in one particular area does not necessarily result in the complete failure of the neighboring link, but may lead to deterioration of its performance. The "Cause-Based Multimode Model" (CBMM) has been used to address link dependency in communication networks. However, the transferability of this model to transportation networks had not been tested, and this approach had not been considered before in the calculation of transportation networks' reliability. This dissertation presented the CBMM and applied it to predict the travel time reliability with which an origin demand can reach a specified destination under multimodal dependent link failure conditions. The new model studied the multi-state system reliability analysis of transportation networks for which one cannot formulate an "all or nothing" type of failure criterion and in which dependent link failures are considered. The results demonstrated that the newly developed method has true potential and can be easily extended to large-scale networks as long as the data are available. More specifically, the analysis of a hypothetical network showed that the dependency assumption is very important for obtaining more reasonable travel time reliability estimates of links, paths, and the entire network. The results showed a large discrepancy between the dependency and independency analysis scenarios. Realistic scenarios that considered the dependency assumption were on the safe side, which is important for transportation network decision makers. Also, this could aid travelers in making better choices. In contrast, deceptive information caused by the independency assumption could add to travelers' anxiety associated with the unknown length of delay. This normally reflects negatively on highway agencies and the management of taxpayers' resources.
- Date Issued
- 2006
- Identifier
- CFE0000965, ucf:46709
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000965
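A minimal sketch of the lognormal travel-time-reliability calculation described above, using synthetic travel times and an assumed reliability threshold rather than I-4 data warehouse records:

```python
# Illustrative sketch (assumed data and threshold): fit a three-parameter lognormal
# to segment travel times, including the location (shift) parameter the abstract
# says must be adjusted, and report reliability as the probability of traversing
# the segment within a target time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic travel times (seconds): a 120 s minimum traversal time plus lognormal variation.
travel_times = 120.0 + rng.lognormal(mean=np.log(180.0), sigma=0.4, size=500)

shape, loc, scale = stats.lognorm.fit(travel_times)      # loc > 0 reflects a minimum traversal time
threshold = 420.0                                         # seconds, assumed planning target
reliability = stats.lognorm.cdf(threshold, shape, loc=loc, scale=scale)
print(f"fitted location parameter: {loc:.1f} s")
print(f"P(travel time <= {threshold:.0f} s) = {reliability:.3f}")
```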
- Title
- Development of an Adaptive Restoration Tool For a Self-Healing Smart Grid.
- Creator
-
Golshani, Amir, Sun, Wei, Qu, Zhihua, Vosoughi, Azadeh, Zhou, Qun, Zheng, Qipeng, University of Central Florida
- Abstract / Description
-
Large power outages are becoming more commonplace due to the increase in both the frequency and strength of natural disasters and cyber-attacks. The outages and blackouts cost American industries and businesses billions of dollars and jeopardize the lives of hospital patients. The losses can be greatly reduced with a fast, reliable, and flexible restoration tool. Fast recovery and successful adaptation to extreme events are critical to building a resilient, and ultimately self-healing, power grid. This dissertation is aimed at tackling the challenging task of developing an adaptive restoration decision support system (RDSS). The RDSS determines restoration actions in both the planning and real-time phases and adapts to constantly changing system conditions. First, an efficient network partitioning approach is developed to provide initial conditions for the RDSS by dividing a large outage network into smaller islands. Then, the comprehensive formulation of the RDSS integrates different recovery phases into one optimization problem and encompasses practical constraints including AC power flow, dynamic reserve, and the dynamic behaviors of generators and load. Also, a frequency-constrained load recovery module is proposed and integrated into the RDSS to determine the optimal location and amount of load pickup. Next, the proposed RDSS is applied to harness renewable energy sources and pumped-storage hydro (PSH) units by addressing the inherent variability and uncertainty of renewables and coordinating wind and PSH generators. A two-stage stochastic and robust optimization problem is formulated and solved by the integer L-shaped and column-and-constraint generation decomposition algorithms. The developed RDSS tool has been tested on the modified IEEE 39-bus and IEEE 57-bus systems under different scenarios. Numerical results demonstrate the effectiveness and efficiency of the proposed RDSS. In case of contingencies or unexpected outages during the restoration process, the RDSS can quickly update the restoration plan and adapt to changing system conditions. The RDSS is an important step toward a self-healing power grid, and its implementation will reduce recovery time while maintaining system security.
- Date Issued
- 2017
- Identifier
- CFE0007284, ucf:52169
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007284
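The restoration problem above is cast as a two-stage stochastic and robust optimization solved with integer L-shaped and column-and-constraint generation algorithms. As a schematic of the robust part only (omitting the AC power flow, reserve, and dynamic constraints in the actual model), the structure targeted by column-and-constraint generation is:

```latex
\min_{x \in X} \; c^{\top}x \;+\; \max_{u \in \mathcal{U}} \; \min_{y \in Y(x,u)} \; b^{\top}y
```

Here x are here-and-now restoration decisions, U is an uncertainty set such as intervals of renewable output or demand, and y are recourse operating decisions; the decomposition iteratively adds the worst-case scenarios identified by the inner max-min problem.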
- Title
- BUFFER TECHNIQUES FOR STOCHASTIC RESOURCE CONSTRAINED PROJECT SCHEDULING WITH STOCHASTIC TASK INSERTIONS PROBLEMS.
- Creator
-
Grey, Jennifer, Armacost, Robert, University of Central Florida
- Abstract / Description
-
Project managers are faced with the challenging task of managing an environment filled with uncertainties that may lead to multiple disruptions during project execution. In particular, they are frequently confronted with planning for routine and non-routine unplanned work: known, identified, tasks that may or may not occur depending upon various, often unpredictable, factors. This problem is known as the stochastic task insertion problem, where tasks of deterministic duration occur stochastically. Traditionally, project managers may include an extra margin within deterministic task times, or an extra time buffer may be allotted at the end of the project schedule to protect the final project completion milestone. Little scientific guidance is available to better integrate buffers strategically into the project schedule. Motivated by the Critical Chain and Buffer Management approach of Goldratt, this research identifies, defines, and demonstrates new buffer sizing techniques to improve project duration and stability metrics associated with the stochastic resource constrained project scheduling problem with stochastic task insertions. Specifically, this research defines and compares partial buffer sizing strategies for projects with varying levels of resource and network complexity factors as well as the level and location of the stochastically occurring tasks. Several project metrics, such as the project makespan and project stability, may be impacted by the stochastic occurrence or non-occurrence of a task. New duration and stability metrics are developed in this research and are used to evaluate the effectiveness of the proposed buffer sizing techniques. These "robustness measures" are computed through the comparison of the characteristics of the initial schedule (termed the infeasible base schedule), a modified base schedule (or as-run schedule) and an optimized version of the base schedule (or perfect knowledge schedule). Seven new buffer sizing techniques are introduced in this research. Three are based on a fixed percentage of task duration and the remaining four provide variable buffer sizes based upon the location of the stochastic task in the schedule and knowledge of the task stochasticity characteristic. Experimental analysis shows that partial buffering produces improvements in the project stability and duration metrics when compared to other baseline scheduling approaches. Three of the new partial buffering techniques produced improvements in project metrics. One of these partial buffers was based on a fixed percentage of task duration and the other two used a variable buffer size based on knowledge of the location of the task in the project network. This research provides project schedulers with new partial buffering techniques and recommendations for the type of partial buffering technique that should be utilized when project duration and stability performance improvements are desired. When a project scheduler can identify potential unplanned work and where it might occur, the use of these partial buffer techniques will yield a better estimated makespan. Furthermore, it will result in less disruption to the planned schedule and minimize the amount of time that specific tasks will have to move to accommodate the unplanned tasks.
- Date Issued
- 2007
- Identifier
- CFE0001584, ucf:52850
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001584
- Title
- Modeling and Solving Large-scale Stochastic Mixed-Integer Problems in Transportation and Power Systems.
- Creator
-
Huang, Zhouchun, Zheng, Qipeng, Xanthopoulos, Petros, Pazour, Jennifer, Chang, Ni-bin, University of Central Florida
- Abstract / Description
-
In this dissertation, various optimization problems from the areas of transportation and power systems are investigated, with uncertainty considered in each problem. Specifically, a long-term electricity infrastructure investment problem is studied to address planning for capacity expansion in electrical power systems with the integration of short-term operations. The future investment costs and real-time customer demands cannot be perfectly forecasted and thus are considered to be random. Another problem studied is maintenance scheduling for power systems, particularly natural-gas-fueled power plants, taking into account gas contracting and the opportunity to purchase and sell gas in the spot market, with the maintenance schedule determined under uncertainty in spot-market electricity and gas prices. In addition, different vehicle routing problems are studied, seeking the route for each vehicle that minimizes the total traveling cost subject to the constraints and uncertain parameters of the corresponding transportation systems. The investigation of each problem in this dissertation mainly consists of two parts, i.e., the formulation of its mathematical model and the development of a solution algorithm for solving the model. Stochastic programming is applied as the framework to model each problem and address the uncertainty, while the approach to dealing with the randomness varies in terms of the relationships between the uncertain elements and the objective functions or constraints. All the problems are modeled as stochastic mixed-integer programs, and the huge numbers of decision variables and constraints involved make each problem large-scale and very difficult to manage. In this dissertation, efficient algorithms are developed for these problems in the context of advanced methodologies of optimization and operations research, such as branch and cut, Benders decomposition, column generation, and Lagrangian methods. Computational experiments are implemented for each problem, and the results are presented and discussed. The research carried out in this dissertation would be beneficial to both researchers and practitioners seeking to model and solve similar optimization problems in transportation and power systems when uncertainty is involved.
- Date Issued
- 2016
- Identifier
- CFE0006328, ucf:51559
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006328
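To make the two-stage stochastic mixed-integer structure mentioned above concrete, here is a deliberately tiny toy instance solved by brute-force enumeration. The figures are invented; real instances of the kind described require Benders decomposition, column generation, or Lagrangian methods rather than enumeration.

```python
# Minimal illustration (toy numbers, not from the dissertation): the extensive form of a
# tiny two-stage stochastic mixed-integer problem: choose which facilities to open now,
# then pay scenario-dependent recourse costs, evaluated by brute force.
from itertools import product

open_cost = [4.0, 6.0]                      # first-stage (binary) investment costs, assumed
scenarios = [                               # (probability, recourse cost if facility j is used)
    (0.5, [3.0, 1.0]),
    (0.5, [1.0, 5.0]),
]

best = None
for x in product([0, 1], repeat=2):                      # enumerate binary first-stage decisions
    if sum(x) == 0:                                      # require at least one facility (toy constraint)
        continue
    first_stage = sum(c * xi for c, xi in zip(open_cost, x))
    expected_recourse = sum(p * min(q[j] for j in range(2) if x[j]) for p, q in scenarios)
    total = first_stage + expected_recourse
    if best is None or total < best[0]:
        best = (total, x)
print(f"best first-stage decision {best[1]} with expected total cost {best[0]:.2f}")
```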
- Title
- Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics.
- Creator
-
Alawad, Mohammed, Lin, Mingjie, DeMara, Ronald, Mikhael, Wasfy, Wang, Jun, Das, Tuhin, University of Central Florida
- Abstract / Description
-
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency if the traditional "deterministic and precise" computing model still dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from an uncertain, imperfect real-world environment. As such, the traditional computing means of first-principle modeling or explicit statistical modeling will very likely be ineffective to achieve flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing (that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra) has to be reexamined, and transformative changes in the foundation of modern computing must be made. This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of the Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simplified circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded with a large ensemble of random samples. As such, local perturbations of its computing accuracy will be dissipated globally, thus becoming inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable-precision systems, which provides an elegant way to trade off between computing accuracy and computing performance/hardware efficiency for many real-world applications. To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented on an FPGA based on a novel stochastic-based and scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation, but can also solve several challenges in conventional CNNs, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well-suited for a modular vision engine with the goal of performing real-time detection, recognition, and segmentation of mega-pixel images, especially those perception-based computing tasks that are inherently fault-tolerant.
- Date Issued
- 2016
- Identifier
- CFE0006828, ucf:51768
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006828
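As general background on stochastic computing rather than the dissertation's Probabilistic Domain Transform itself, the classic encoding represents a value in [0, 1] as a random bitstream and reduces multiplication to a bitwise AND:

```python
# A standard stochastic-computing illustration (not the PDT method above): values in
# [0, 1] are encoded as random bitstreams whose mean equals the value, multiplication
# of independent streams reduces to a bitwise AND, and decoding is just an average.
import numpy as np

def encode(value, length, rng):
    return (rng.random(length) < value).astype(np.uint8)

rng = np.random.default_rng(3)
length = 4096                      # longer streams -> lower decoding variance
a, b = 0.6, 0.3
sa, sb = encode(a, length, rng), encode(b, length, rng)
product_stream = sa & sb           # AND gate acts as a multiplier for independent streams
print(f"exact product {a*b:.3f}, stochastic estimate {product_stream.mean():.3f}")
```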
- Title
- PRICE DISCOVERY IN THE U.S. BOND MARKETS: TRADING STRATEGIES AND THE COST OF LIQUIDITY.
- Creator
-
Shao, Haimei, Yong, Jiongmin, University of Central Florida
- Abstract / Description
-
The world bond market is nearly twice as large as the equity market. The goal of this dissertation is to study the dynamics of bond prices. Among liquidity risk, interest rate risk, and default risk, this dissertation focuses on liquidity risk and trading strategy. Within the mathematical framework of stochastic control, we model price setting in U.S. bond markets where dealers have multiple instruments to smooth inventory imbalances. The difficulty in obtaining the optimal trading strategy is that the optimal strategy and the value function depend on each other, and the corresponding Hamilton-Jacobi-Bellman (HJB) equation is nonlinear. To solve this problem, we derived an approximate optimal explicit trading strategy. The results show that this trading strategy is better than the benchmark central symmetric trading strategy.
- Date Issued
- 2011
- Identifier
- CFE0003633, ucf:48858
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003633
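For orientation only, a generic one-dimensional Hamilton-Jacobi-Bellman (HJB) equation of the kind referred to above (not the dissertation's specific market-making model), for a discounted control problem with dynamics dX_t = mu(X_t, u_t) dt + sigma(X_t, u_t) dW_t, is:

```latex
\rho V(x) \;=\; \max_{u \in \mathcal{U}} \left\{ f(x,u) + \mu(x,u)\,V'(x) + \tfrac{1}{2}\,\sigma^{2}(x,u)\,V''(x) \right\}
```

The maximization over u makes the equation nonlinear in V, which is why an explicit optimal strategy is generally unavailable and an approximate explicit strategy is valuable.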
- Title
- Batch and Online Implicit Weighted Gaussian Processes for Robust Novelty Detection.
- Creator
-
Ramirez Padron, Ruben, Gonzalez, Avelino, Georgiopoulos, Michael, Stanley, Kenneth, Mederos, Boris, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
-
This dissertation aims mainly at obtaining robust variants of Gaussian processes (GPs) that do not require using non-Gaussian likelihoods to compensate for outliers in the training data. Bayesian kernel methods, and in particular GPs, have been used to solve a variety of machine learning problems, equating or exceeding the performance of other successful techniques. That is the case of a recently proposed approach to GP-based novelty detection that uses standard GPs (i.e. GPs employing Gaussian likelihoods). However, standard GPs are sensitive to outliers in training data, and this limitation carries over to GP-based novelty detection. This limitation has been typically addressed by using robust non-Gaussian likelihoods. However, non-Gaussian likelihoods lead to analytically intractable inferences, which require using approximation techniques that are typically complex and computationally expensive. Inspired by the use of weights in quasi-robust statistics, this work introduces a particular type of weight functions, called here data weighers, in order to obtain robust GPs that do not require approximation techniques and retain the simplicity of standard GPs. This work proposes implicit weighted variants of batch GP, online GP, and sparse online GP (SOGP) that employ weighted Gaussian likelihoods. Mathematical expressions for calculating the posterior implicit weighted GPs are derived in this work. In our experiments, novelty detection based on our weighted batch GPs consistently and significantly outperformed standard batch GP-based novelty detection whenever data was contaminated with outliers. Additionally, our experiments show that novelty detection based on online GPs can perform similarly to batch GP-based novelty detection. Membership scores previously introduced by other authors are also compared in our experiments.
- Date Issued
- 2015
- Identifier
- CFE0005869, ucf:50858
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005869
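A rough sketch of the weighted-likelihood idea described above: give each training point a weight and realize it as a per-point scaling of the noise variance, so that suspected outliers pull the GP posterior mean less. The RBF kernel, weights, and data below are illustrative assumptions, not the dissertation's data-weigher construction.

```python
# Sketch of GP regression with a weighted Gaussian likelihood, implemented here as
# heteroscedastic noise: point i gets noise variance sigma^2 / w_i, so a small weight
# (suspected outlier) lets the posterior mean largely ignore that point.
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(4)
x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
y[7] += 3.0                                    # inject an outlier
weights = np.ones_like(y)
weights[7] = 0.05                              # downweight the suspected outlier

noise = 0.1**2 / weights                       # weighted likelihood as per-point noise scaling
K = rbf(x, x)
x_test = np.linspace(0, 5, 100)
K_s = rbf(x_test, x)
alpha = np.linalg.solve(K + np.diag(noise), y)
posterior_mean = K_s @ alpha
print("posterior mean near the outlier:", posterior_mean[np.argmin(np.abs(x_test - x[7]))])
print("true function value there:", np.sin(x[7]))
```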
- Title
- Harnessing Spatial Intensity Fluctuations for Optical Imaging and Sensing.
- Creator
-
Akhlaghi Bouzan, Milad, Dogariu, Aristide, Saleh, Bahaa, Pang, Sean, Atia, George, University of Central Florida
- Abstract / Description
-
Properties of light such as amplitude and phase, temporal and spatial coherence, polarization, etc. are abundantly used for sensing and imaging. Regardless of the passive or active nature of the sensing method, optical intensity fluctuations are always present! While these fluctuations are usually regarded as noise, there are situations where one can harness the intensity fluctuations to enhance certain attributes of the sensing procedure. In this thesis, we developed different sensing methodologies that use statistical properties of optical fluctuations for gauging specific information. We examine this concept in the context of three different aspects of computational optical imaging and sensing. First, we study imposing specific statistical properties to the probing field to image or characterize certain properties of an object through a statistical analysis of the spatially integrated scattered intensity. This offers unique capabilities for imaging and sensing techniques operating in highly perturbed environments and low-light conditions. Next, we examine optical sensing in the presence of strong perturbations that preclude any controllable field modification. We demonstrate that inherent properties of diffused coherent fields and fluctuations of integrated intensity can be used to track objects hidden behind obscurants. Finally, we address situations where, due to coherent noise, image accuracy is severely degraded by intensity fluctuations. By taking advantage of the spatial coherence properties of optical fields, we show that this limitation can be effectively mitigated and that a significant improvement in the signal-to-noise ratio can be achieved even in one single-shot measurement. The findings included in this dissertation illustrate different circumstances where optical fluctuations can affect the efficacy of computational optical imaging and sensing. A broad range of applications, including biomedical imaging and remote sensing, could benefit from the new approaches to suppress, enhance, and exploit optical fluctuations, which are described in this dissertation.
- Date Issued
- 2017
- Identifier
- CFE0007274, ucf:52200
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007274
- Title
- Automated Synthesis of Unconventional Computing Systems.
- Creator
-
Hassen, Amad Ul, Jha, Sumit Kumar, Sundaram, Kalpathy, Fan, Deliang, Ewetz, Rickard, Rahman, Talat, University of Central Florida
- Abstract / Description
-
Despite decades of advancements, modern computing systems which are based on the von Neumann architecture still carry its shortcomings. Moore's law, which had substantially masked the effects of the inherent memory-processor bottleneck of the von Neumann architecture, has slowed down due to transistor dimensions nearing atomic sizes. On the other hand, modern computational requirements, driven by machine learning, pattern recognition, artificial intelligence, data mining, and IoT, are growing at the fastest pace ever. By their inherent nature, these applications are particularly affected by communication bottlenecks, because processing them requires a large number of simple operations involving data retrieval and storage. The need to address the problems associated with conventional computing systems at the fundamental level has given rise to several unconventional computing paradigms. In this dissertation, we have made advancements in the automated synthesis of two types of unconventional computing paradigms: in-memory computing and stochastic computing. In-memory computing circumvents the problem of limited communication bandwidth by unifying processing and storage at the same physical locations. The advent of nanoelectronic devices in the last decade has made in-memory computing an energy-, area-, and cost-effective alternative to conventional computing. We have used Binary Decision Diagrams (BDDs) for in-memory computing on memristor crossbars. Specifically, we have used Free-BDDs, a special class of binary decision diagrams, for synthesizing crossbars for flow-based in-memory computing. Stochastic computing is a re-emerging discipline with area/power requirements several times smaller than those of conventional computing systems. It is especially suited for fault-tolerant applications like image processing, artificial intelligence, pattern recognition, etc. We have proposed a decision-procedures-based iterative algorithm to synthesize Linear Finite State Machines (LFSMs) for stochastically computing non-linear functions such as polynomials, exponentials, and hyperbolic functions.
- Date Issued
- 2019
- Identifier
- CFE0007648, ucf:52462
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007648
- Title
- Development of Regional Optimization and Market Penetration Models For the Electric Vehicles in the United States.
- Creator
-
Noori, Mehdi, Tatari, Omer, Oloufa, Amr, Nam, Boo Hyun, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
Since the transportation sector still relies mostly on fossil fuels, the emissions and overall environmental impacts of the transportation sector are particularly relevant to the mitigation of the adverse effects of climate change. Sustainable transportation therefore plays a vital role in the ongoing discussion on how to promote energy security and address future energy requirements. One of the most promising ways to increase energy security and reduce emissions from the transportation sector is to support alternative fuel technologies, including electric vehicles (EVs). As vehicles become electrified, the transportation fleet will rely on the electric grid as well as traditional transportation fuels for energy. The life cycle costs and environmental impacts of EVs are still very uncertain, but are nonetheless extremely important for making policy decisions. Moreover, the use of EVs will help to diversify the fuel mix and thereby reduce dependence on petroleum. In this respect, the United States has set a goal of a 20% share of EVs on U.S. roadways by 2030. However, there is also a considerable amount of uncertainty in the market share of EVs that must be taken into account. This dissertation aims to address these inherent uncertainties by presenting two new models: the Electric Vehicles Regional Optimizer (EVRO) and the Electric Vehicle Regional Market Penetration (EVReMP) model. Using these two models, decision makers can predict the optimal combination of drivetrains and the market penetration of EVs in different regions of the United States for the year 2030. First, the life cycle costs and life cycle environmental emissions of internal combustion engine vehicles, gasoline hybrid electric vehicles, and three different EV types (gasoline plug-in hybrid EVs, gasoline extended-range EVs, and all-electric EVs) are evaluated, with their inherent uncertainties duly considered. Then, the environmental damage costs and water footprints of the studied drivetrains are estimated. Additionally, using an Exploratory Modeling and Analysis method, the uncertainties related to the life cycle costs, environmental damage costs, and water footprints of the studied vehicle types are modeled for different U.S. electricity grid regions. Next, an optimization model is used in conjunction with this Exploratory Modeling and Analysis method to find the ideal combination of different vehicle types in each U.S. region for the year 2030. Finally, an agent-based model is developed to identify the optimal market shares of the studied vehicles in each of 22 electricity regions in the United States. The findings of this research will help policy makers and transportation planners to prepare our nation's transportation system for the future influx of EVs. The findings indicate that the decision maker's point of view plays a vital role in selecting the optimal fleet array. While internal combustion engine vehicles have the lowest life cycle cost, the highest environmental damage cost, and a relatively low water footprint, they will not be a good choice in the future. On the other hand, although all-electric vehicles have a relatively low life cycle cost and the lowest environmental damage cost of the evaluated vehicle options, they also have the highest water footprint, so relying solely on all-electric vehicles is not an ideal choice either. Rather, the best fleet mix in 2030 will be an electrified fleet that relies on both electricity and gasoline.
From the agent-based model results, a deviation is evident between the ideal fleet mix and that resulting from consumer behavior, in which EV shares increase dramatically by the year 2030 but capture only 30 percent of the market. Therefore, government subsidies and the word-of-mouth effect will play a vital role in the future adoption of EVs.
- Date Issued
- 2015
- Identifier
- CFE0005852, ucf:50927
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005852
- Title
- Biophysical Sources of 1/f Noises in Neurological Systems.
- Creator
-
Paris, Alan, Vosoughi, Azadeh, Atia, George, Wiegand, Rudolf, Douglas, Pamela, Berman, Steven, University of Central Florida
- Abstract / Description
-
High levels of random noise are a defining characteristic of neurological signals at all levels, from individual neurons up to electroencephalograms (EEG). These random signals degrade the performance of many methods of neuroengineering and medical neuroscience. Understanding this noise is also essential for applications such as real-time brain-computer interfaces (BCIs), which must make accurate control decisions from very short data epochs. The major type of neurological noise is of the so-called 1/f-type, whose origins and statistical nature have remained unexplained for decades. This research provides the first simple explanation of 1/f-type neurological noise based on biophysical fundamentals. In addition, noise models derived from this theory provide validated algorithm performance improvements over alternatives. Specifically, this research defines a new class of formal latent-variable stochastic processes called hidden quantum models (HQMs) which clarify the theoretical foundations of ion channel signal processing. HQMs are based on quantum state processes which formalize time-dependent observation. They allow the quantum-based calculation of channel conductance autocovariance functions, essential for frequency-domain signal processing. HQMs based on a particular type of observation protocol called independent activated measurements are shown to be distributionally equivalent to hidden Markov models, yet without an underlying physical Markov process. Since the formal Markov processes are non-physical, the theory of activated measurement allows merging energy-based Eyring rate theories of ion channel behavior with the more common phenomenological Markov kinetic schemes to form energy-modulated quantum channels. These unique biophysical concepts developed to understand the mechanisms of ion channel kinetics have the potential of revolutionizing our understanding of neurological computation. To apply this theory, the simplest quantum channel model consistent with neuronal membrane voltage-clamp experiments is used to derive the activation eigenenergies for the Hodgkin-Huxley K+ and Na+ ion channels. It is shown that maximizing entropy under constrained activation energy yields noise spectral densities approximating S(f) = 1/f, thus offering a biophysical explanation for this ubiquitous noise component. These new channel-based noise processes are called generalized van der Ziel-McWhorter (GVZM) power spectral densities (PSDs). This is the only known EEG noise model that has a small, fixed number of parameters, matches recorded EEG PSDs with high accuracy from 0 Hz to over 30 Hz without infinities, and has approximately 1/f behavior in the mid-frequencies. In addition to the theoretical derivation of the noise statistics from ion channel stochastic processes, the GVZM model is validated in two ways. First, a class of mixed autoregressive models is presented which simulate brain background noise and whose periodograms are proven to be asymptotic to the GVZM PSD. Second, it is shown that pairwise comparisons of GVZM-based algorithms, using real EEG data from a publicly available data set, exhibit statistically significant accuracy improvements over two well-known and widely used steady-state visual evoked potential (SSVEP) estimators.
- Date Issued
- 2016
- Identifier
- CFE0006485, ucf:51418
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006485
- Title
- Stochastic Optimization for Integrated Energy System with Reliability Improvement Using Decomposition Algorithm.
- Creator
-
Huang, Yuping, Zheng, Qipeng, Xanthopoulos, Petros, Pazour, Jennifer, Liu, Andrew, University of Central Florida
- Abstract / Description
-
As energy demands increase and energy resources change, the traditional energy system has been upgraded and reconstructed for human society development and sustainability. Considerable studies have been conducted in energy expansion planning and electricity generation operations, mainly considering the integration of traditional fossil fuel generation with renewable generation. Because the energy market is full of uncertainty, we recognize that these uncertainties have continuously challenged market design and operations, and even national energy policy. In fact, only limited consideration has been given to optimizing energy expansion and generation while taking into account the variability and uncertainty of energy supply and demand in energy markets. This usually leaves an energy system unable to cope with unexpected changes, such as a surge in fuel price, a sudden drop in demand, or a large fluctuation in renewable supply. Thus, for an overall energy system, optimizing long-term expansion planning and market operations in a stochastic environment is crucial to improve the system's reliability and robustness. Because little attention has been paid to imposing risk measures on the power management system, this dissertation discusses applying risk-constrained stochastic programming to improve the efficiency, reliability, and economics of energy expansion and electric power generation, respectively. Considering the supply-demand uncertainties affecting energy system stability, three different optimization strategies are proposed to enhance the overall reliability and sustainability of an energy system. The first strategy optimizes regional energy expansion planning, focusing on capacity expansion of the natural gas system, power generation system, and renewable energy system, in addition to the transmission network. With strong support from NG and electric facilities, the second strategy provides optimal day-ahead scheduling for the electric power generation system, incorporating non-generation resources, i.e., demand response and energy storage. Because of risk aversion, this generation scheduling enables the power system to achieve higher reliability and promotes non-generation resources in the smart grid. To take advantage of power generation sources, the third strategy replaces the traditional energy reserve requirements with risk constraints while ensuring the same level of system reliability. In this way we can maximize the use of existing resources to accommodate internal and/or external changes in a power system. All problems are formulated as stochastic mixed-integer programs, particularly considering the uncertainties in fuel price, renewable energy output, and electricity demand over time. Taking advantage of the models' structure, new decomposition strategies are proposed to decompose the stochastic unit commitment problems, which are then solved by an enhanced Benders Decomposition algorithm. Compared to the classic Benders Decomposition, the proposed solution approach increases convergence speed and thus reduces computation times by 25% on the same cases.
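Since the abstract centers on scenario-based stochastic programming solved by Benders Decomposition, a minimal textbook L-shaped (Benders) loop on a toy two-stage capacity problem is sketched below; the costs, demands, and the problem itself are hypothetical, and this is the classic cut loop rather than the enhanced decomposition described in the dissertation (it assumes SciPy's HiGHS backend for LP duals).

```python
# Minimal textbook L-shaped (Benders) loop for a toy two-stage stochastic program:
# choose capacity x now, then meet a random demand with cheap own generation
# (limited by x) or expensive purchases. All numbers are hypothetical.
from scipy.optimize import linprog

c_cap = 10.0                                  # first-stage cost per unit of capacity
g_gen, p_buy = 2.0, 50.0                      # recourse costs: own generation vs. purchase
scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 140.0)]   # (probability, demand)

def recourse(x, demand):
    """Recourse LP: min g*y + p*z  s.t.  y + z >= demand,  y <= x,  y, z >= 0."""
    res = linprog(c=[g_gen, p_buy],
                  A_ub=[[-1.0, -1.0],         # -(y + z) <= -demand
                        [ 1.0,  0.0]],        #   y       <= x
                  b_ub=[-demand, x],
                  method="highs")
    lam = res.ineqlin.marginals               # duals of the two <= rows (non-positive)
    # Dual feasibility does not depend on x, so theta >= lam0*(-demand) + lam1*x
    # is a valid optimality cut for every x.
    return res.fun, lam[0] * (-demand), lam[1]

cuts, x = [], 0.0
upper, lower = float("inf"), -float("inf")
n_s = len(scenarios)
for _ in range(25):
    total = c_cap * x
    for s, (prob, demand) in enumerate(scenarios):
        q, intercept, slope = recourse(x, demand)
        total += prob * q
        cuts.append((s, intercept, slope))
    upper = min(upper, total)                 # cost of the current first-stage decision

    # Master LP over [x, theta_0, ..., theta_{S-1}]:
    #   min c*x + sum_s prob_s*theta_s   s.t.   slope*x - theta_s <= -intercept  (all cuts)
    A_ub, b_ub = [], []
    for s, intercept, slope in cuts:
        row = [slope] + [0.0] * n_s
        row[1 + s] = -1.0
        A_ub.append(row)
        b_ub.append(-intercept)
    master = linprog(c=[c_cap] + [prob for prob, _ in scenarios],
                     A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0.0, 500.0)] + [(0.0, None)] * n_s,
                     method="highs")
    x, lower = master.x[0], master.fun
    if upper - lower < 1e-6:                  # bounds have met: stop
        break

print(f"capacity x* = {x:.2f}, expected total cost = {upper:.2f}")
```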
- Date Issued
- 2014
- Identifier
- CFE0005506, ucf:50339
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005506
- Title
- Multi-Level Safety Performance Functions for High Speed Facilities.
- Creator
-
Ahmed, Mohamed, Abdel-Aty, Mohamed, Radwan, Ahmed, Al-Deek, Haitham, Mackie, Kevin, Pande, Anurag, Uddin, Nizam, University of Central Florida
- Abstract / Description
-
High speed facilities are considered the backbone of any successful transportation system; Interstates, freeways, and expressways carry the majority of daily trips on the transportation network. Although these types of roads are considered relatively the safest among road types, they still experience many crashes, many of them severe, which not only affect human lives but also can have tremendous economic and social impacts. These facts signify the necessity of enhancing the safety of these high speed facilities to ensure better and more efficient operation. Safety problems can be assessed through several approaches that help mitigate crash risk on both a long and short term basis. Therefore, the main focus of the research in this dissertation is to provide a risk assessment framework to promote safety and enhance mobility on freeways and expressways. Multi-level Safety Performance Functions (SPFs) were developed at the aggregate level using historical crash data and the corresponding exposure and risk factors to identify and rank sites with promise (hot-spots). Additionally, SPFs were developed at the disaggregate level utilizing real-time weather data collected from meteorological stations located along the freeway section as well as traffic flow parameters collected from different detection systems such as Automatic Vehicle Identification (AVI) and Remote Traffic Microwave Sensors (RTMS). These disaggregate SPFs can identify real-time risks due to turbulent traffic conditions and their interactions with other risk factors. In this study, two main datasets were obtained from two different regions. These datasets comprise historical crash data, roadway geometric characteristics, aggregate weather and traffic parameters, as well as real-time weather and traffic data. At the aggregate level, Bayesian hierarchical models with spatial and random effects were compared to Poisson models to examine the safety effects of roadway geometrics on crash occurrence along freeway sections that feature mountainous terrain and adverse weather. At the disaggregate level, a framework for a proactive safety management system was provided using traffic data collected from AVI and RTMS, real-time weather, and geometric characteristics. Different statistical techniques were implemented, ranging from classical frequentist classification approaches, which explain the relationship between an event (crash) occurring at a given time and a set of risk factors in real time, to other more advanced models. A Bayesian updating approach, which combines prior knowledge with incoming data to update beliefs about parameter behavior and achieve more reliable estimation, was implemented. A relatively recent and promising machine learning technique, Stochastic Gradient Boosting, was also utilized to calibrate several models using different datasets collected from mixed detection systems as well as real-time meteorological stations. The results from this study suggest that both levels of analysis are important: the aggregate level helps provide a good understanding of different safety problems and supports developing policies and countermeasures to reduce the total number of crashes, while at the disaggregate level, real-time safety functions support a more proactive traffic management system that will not only enhance the performance of high speed facilities and the whole traffic network but also provide safer mobility for people and goods.
In general, the proposed multi-level analyses are useful in providing roadway authorities with detailed information on where countermeasures must be implemented and when resources should be devoted. The study also proves that traffic data collected from different detection systems could be a useful asset that should be utilized appropriately not only to alleviate traffic congestion but also to mitigate increased safety risks. The overall proposed framework can maximize the benefit of the existing archived data for freeway authorities as well as for road users.
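As a hedged illustration of the disaggregate, real-time side of this framework, the sketch below fits scikit-learn's stochastic gradient boosting classifier (GradientBoostingClassifier with subsample < 1) to synthetic crash/non-crash records; the feature names, generating model, and data are illustrative assumptions, not the dissertation's datasets.

```python
# Hypothetical sketch: stochastic gradient boosting for real-time crash risk
# classification from traffic and weather features. Synthetic data only; the
# feature names and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
speed_var = rng.gamma(shape=2.0, scale=3.0, size=n)     # upstream speed variance (assumed)
occupancy = rng.uniform(0.0, 0.6, size=n)               # detector occupancy (assumed)
rain = rng.binomial(1, 0.2, size=n)                     # rain indicator (assumed)
visibility = rng.uniform(0.2, 1.0, size=n)              # normalized visibility (assumed)

# Assumed risk model used only to generate labels for this illustration.
logit = -4.0 + 0.25 * speed_var + 3.0 * occupancy + 0.8 * rain - 1.5 * visibility
crash = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([speed_var, occupancy, rain, visibility])
X_tr, X_te, y_tr, y_te = train_test_split(X, crash, test_size=0.3, random_state=0)

# subsample < 1.0 makes this *stochastic* gradient boosting (row subsampling per tree).
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                   max_depth=3, subsample=0.7, random_state=0)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
print("feature importances:", dict(zip(
    ["speed_var", "occupancy", "rain", "visibility"],
    model.feature_importances_.round(3))))
```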
- Date Issued
- 2012
- Identifier
- CFE0004508, ucf:49274
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004508
- Title
- A Comparative Evaluation of FDSA, GA, and SA Non-Linear Programming Algorithms and Development of System-Optimal Dynamic Congestion Pricing Methodology on I-95 Express.
- Creator
-
Graham, Don, Radwan, Ahmed, Abdel-Aty, Mohamed, Al-Deek, Haitham, Uddin, Nizam, University of Central Florida
- Abstract / Description
-
As urban populations across the globe increase, the demand for adequate transportation grows. Several strategies have been suggested as solutions to the congestion which results from this high demand outpacing the existing supply of transportation facilities. High-Occupancy Toll (HOT) lanes have become increasingly popular as a feature of today's highway systems. The I-95 Express HOT lane in Miami, Florida, which is currently being expanded from a single phase (Phase I) into two phases, is one such HOT facility. With the growing abundance of such facilities comes the need for in-depth study of demand patterns and development of an appropriate pricing scheme which reduces congestion. This research develops a method for dynamic pricing on the I-95 HOT facility so as to minimize total travel time and reduce congestion. We apply non-linear programming (NLP) techniques and the finite difference stochastic approximation (FDSA), genetic algorithm (GA), and simulated annealing (SA) stochastic algorithms to formulate and solve the problem within a cell transmission framework. The solution produced is the optimal flow and optimal toll required to minimize total travel time, and thus is the system-optimal solution. We perform a comparative evaluation of the FDSA, GA, and SA non-linear programming algorithms used to solve the NLP, and the ANOVA results show that there are differences in the performance of these algorithms in solving this problem and reducing travel time. We then conclude by demonstrating that econometric forecasting methods utilizing vector autoregressive (VAR) techniques can be applied to successfully forecast demand for Phase 2 of the 95 Express, which is planned for 2014.
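To make the FDSA ingredient concrete, a minimal generic finite difference stochastic approximation loop on a hypothetical noisy toll-versus-travel-time objective is sketched below; the objective function, gain sequences, and toll range are illustrative assumptions, not the dissertation's cell-transmission formulation.

```python
# Generic finite difference stochastic approximation (FDSA): estimate the gradient
# of a noisy objective with central differences and take decaying gradient steps.
# The "travel time vs. toll" objective here is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(1)

def noisy_total_travel_time(toll):
    """Hypothetical noisy objective: a smooth bowl with its minimum near toll = 4.0."""
    true_value = (toll - 4.0) ** 2 + 20.0
    return true_value + rng.normal(scale=0.5)      # simulation/measurement noise

def fdsa(f, theta0, iters=200, a=0.5, c=0.5, alpha=0.602, gamma=0.101, A=10.0):
    """Scalar FDSA with the usual decaying gain sequences a_k and c_k."""
    theta = float(theta0)
    for k in range(iters):
        a_k = a / (k + 1 + A) ** alpha             # step-size sequence
        c_k = c / (k + 1) ** gamma                 # finite-difference half-width
        grad = (f(theta + c_k) - f(theta - c_k)) / (2.0 * c_k)
        theta -= a_k * grad
        theta = min(max(theta, 0.0), 10.0)         # keep the toll in a plausible range
    return theta

toll_star = fdsa(noisy_total_travel_time, theta0=1.0)
print(f"FDSA toll estimate: {toll_star:.2f} (true minimizer of the toy objective is 4.0)")
```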
- Date Issued
- 2013
- Identifier
- CFE0005000, ucf:50019
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005000
- Title
- MEASURING THE EFFECT OF ERRATIC DEMAND ON SIMULATED MULTI-CHANNEL MANUFACTURING SYSTEM PERFORMANCE.
- Creator
-
Kohan, Nancy, Kulonda, Dennis, University of Central Florida
- Abstract / Description
-
To handle uncertainties and variabilities in production demands, many manufacturing companies have adopted different strategies, such as varying quoted lead time, rejecting orders, increasing stock or inventory levels, and implementing volume flexibility. Make-to-stock (MTS) systems are designed to offer zero lead time by providing an inventory buffer for the organization, but they are costly and involve risks such as obsolescence and wasted expenditures. The main concern of make-to-order (MTO) systems is eliminating inventories and reducing non-value-added processes and waste; however, these systems are based on the assumption that the manufacturing environment and customer demand are deterministic. Research shows that in MTO systems, variability and uncertainty in demand levels cause instability in the production flow, resulting in congestion, long lead times, and low throughput. Neither strategy is wholly satisfactory. A newer alternative, multi-channel manufacturing (MCM) systems, is designed to manage uncertainties and variabilities in demand by first focusing on customers' response time. The products are divided into different product families, each with its own manufacturing stream or sub-factory. MCM also allocates the production capacity needed in each sub-factory to produce each product family. In this research, the performance of an MCM system is studied by implementing MCM in a real case scenario from the textile industry, modeled via discrete event simulation. MTS and MTO systems are implemented for the same case scenario, and the results are studied and compared. The variables of interest for this research are the throughput of products, the level of on-time deliveries, and the inventory level. The results of the simulation experiments favor the simulated MCM system on all of these criteria. Further research, such as applying MCM to different manufacturing contexts, is highly recommended.
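To make the link between demand variability and lead time concrete, a small hedged sketch follows: it uses the Lindley recursion for a single-stream G/G/1 workstation to compare average queueing delays under steady versus erratic order arrivals; the arrival and service distributions are hypothetical and far simpler than the discrete-event model of the textile case.

```python
# Hypothetical single-workstation illustration of why erratic demand inflates lead
# times: the Lindley recursion W[n+1] = max(0, W[n] + S[n] - A[n+1]) gives each
# order's queueing delay from service times S and interarrival times A.
import random

random.seed(7)
N = 200_000
mean_interarrival, mean_service = 10.0, 8.0     # assumed: 80% utilization in both cases

def mean_wait(interarrival_sampler):
    """Average waiting time of a G/G/1 queue driven by the given arrival process."""
    wait, total = 0.0, 0.0
    for _ in range(N):
        service = random.expovariate(1.0 / mean_service)
        gap = interarrival_sampler()
        wait = max(0.0, wait + service - gap)   # Lindley recursion
        total += wait
    return total / N

# Steady demand: nearly constant order interarrival times (low variability).
steady = mean_wait(lambda: random.uniform(9.0, 11.0))
# Erratic demand: bursty arrivals with the same mean but much higher variance.
erratic = mean_wait(lambda: random.expovariate(1.0 / mean_interarrival) * random.choice([0.2, 1.8]))

print(f"average queueing delay, steady demand:  {steady:6.1f}")
print(f"average queueing delay, erratic demand: {erratic:6.1f}")
```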
- Date Issued
- 2004
- Identifier
- CFE0000240, ucf:46275
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000240