Current Search: predicting
- Title
- PREDICTING MERGERS AND ACQUISITIONS.
- Creator
-
D'Angelo, John, Gilkeson, Jim, University of Central Florida
- Abstract / Description
-
Being able to predict a merger or acquisition before it takes place could lead to an investor earning a premium if they owned shares of the targeted firm before the merger or acquisition attempt is announced. On average, acquiring firms pay a premium when acquiring or merging with a targeted firm. This study uses publicly available financial information for 7,267 attempted takeover targets and 52,343 non-targeted firms for the period January 3, 2000 through December 31, 2007 to estimate (using logit) predictive models. Financial ratios are constructed based on six hypotheses found in the literature. Although statistical evidence supports a few of the hypotheses, the low predictive power of the models does not indicate the ability to accurately predict targeted firms ahead of time, let alone with any economic significance. (An illustrative logit sketch follows this record.)
- Date Issued
- 2012
- Identifier
- CFH0004133, ucf:44892
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004133
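A minimal sketch of the kind of logit estimation the abstract describes, assuming a hypothetical firm-level dataset; the file name, the financial-ratio columns, and the target flag below are invented for illustration and are not the thesis's variables.

```python
# Hedged sketch: logit model relating financial ratios to takeover-target status.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("firms.csv")  # assumed layout: one row per firm-year
X = sm.add_constant(df[["market_to_book", "leverage", "liquidity"]])  # invented ratios
y = df["was_target"]  # 1 if a takeover attempt was later announced (assumed column)

model = sm.Logit(y, X).fit()
print(model.summary())
p_target = model.predict(X)  # in-sample predicted probabilities
```

A low pseudo-R-squared and weak out-of-sample discrimination from a fit like this would echo the thesis's finding that targets are hard to identify ahead of time.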
- Title
- IMPROVING BRANCH PREDICTION ACCURACY VIA EFFECTIVE SOURCE INFORMATION AND PREDICTION ALGORITHMS.
- Creator
-
GAO, HONGLIANG, ZHOU, HUIYANG, University of Central Florida
- Abstract / Description
-
Modern superscalar processors rely on branch predictors to sustain a high instruction fetch throughput. Given the trend of deep pipelines and large instruction windows, a branch misprediction will incur a large performance penalty and result in a significant amount of energy wasted by the instructions along wrong paths. Given their critical role in high-performance processors, branch predictors have been the subject of extensive research aimed at improving prediction accuracy. Conceptually, a dynamic branch prediction scheme includes three major components: a source, an information processor, and a predictor. Traditional work mainly focuses on the algorithm for the predictor. In this dissertation, besides novel prediction algorithms, we investigate the other components and develop untraditional ways to improve prediction accuracy. First, we propose an adaptive information processing method to dynamically extract the most effective inputs to maximize the correlation to be exploited by the predictor. Second, we propose a new prediction algorithm, which improves the Prediction by Partial Matching (PPM) algorithm by selectively combining multiple partial matches. The PPM algorithm was previously considered optimal and has been used to derive the upper limit of branch prediction accuracy. Our proposed algorithm achieves higher prediction accuracy than PPM and can be implemented within a realistic hardware budget. Third, we discover a new locality existing between the addresses of producer loads and the outcomes of their consumer branches. We study this address-branch correlation in detail and propose a branch predictor that exploits this correlation for long-latency and hard-to-predict branches, which existing branch predictors fail to predict accurately. (A toy PPM sketch follows this record.)
- Date Issued
- 2008
- Identifier
- CFE0002283, ucf:47877
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002283
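The PPM scheme the abstract builds on is well documented in the literature; below is a toy, software-only illustration of its core idea (try the longest recorded history context first, then fall back to shorter ones). This is nothing like the dissertation's hardware design; table sizes and the default prediction are arbitrary.

```python
# Toy PPM-style branch predictor: longest-match-first with fallback.
from collections import defaultdict

MAX_ORDER = 8
# One counter table per context length; each entry is [not_taken, taken].
tables = [defaultdict(lambda: [0, 0]) for _ in range(MAX_ORDER + 1)]

def predict(history):
    """history: tuple of recent outcomes (1 = taken, 0 = not taken)."""
    for order in range(min(MAX_ORDER, len(history)), -1, -1):
        ctx = history[len(history) - order:]
        not_taken, taken = tables[order][ctx]
        if not_taken + taken > 0:          # this context has been seen
            return 1 if taken >= not_taken else 0
    return 1                               # arbitrary default: predict taken

def update(history, outcome):
    for order in range(min(MAX_ORDER, len(history)) + 1):
        ctx = history[len(history) - order:]
        tables[order][ctx][outcome] += 1

hist = (1, 1, 0, 1)
guess = predict(hist)
update(hist, outcome=1)
```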
- Title
- PREDICTIVE CONTROL FOR DYNAMIC SYSTEMS TO TRACK UNKNOWN INPUT IN THE PRESENCE OF TIME DELAY.
- Creator
-
Li, Yulan, Qu, Zhihua, University of Central Florida
- Abstract / Description
-
This study investigated a tracking system to trace an unknown signal in the presence of time delay. A predictive control method is proposed to compensate for the time delay. The root locus method is applied in designing the controller, with parameter setting carried out through a trial-and-error technique in the w-plane. A state-space equation is derived for the system, with the tracking error chosen as a special state. To analyze the asymptotic stability of the proposed predictive control system, a Lyapunov function is constructed. It is shown that the designed system is asymptotically stable when the input signal is a relatively low-frequency signal. To illustrate system performance, simulations are carried out based on the data profile technique. Signal profiles, including the acceleration profile, velocity profile, and trajectory profile, are listed. Based on these profiles, simulations can be carried out and the results taken as a good estimate of the practical performance of the designed predictive control system. Signal noise is quite a common phenomenon in practical control systems. For the situation in which the input signal carries measurement noise, a low-pass filter is designed to filter out the noise and keep the low-frequency input signal. Two typical kinds of noise are considered, i.e., Gaussian noise and pink noise. Simulation results show that the proposed predictive control with the low-pass filter design achieves better performance under both kinds of noise. (A small filter sketch follows this record.)
- Date Issued
- 2005
- Identifier
- CFE0000819, ucf:46688
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000819
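As a small illustration of the low-pass filtering step the abstract mentions (not the thesis's actual filter design), here is a first-order discrete low-pass filter applied to a noisy low-frequency input. The time constant, sample time, and test signal are invented values.

```python
# Minimal first-order low-pass filter sketch; parameters are illustrative.
import numpy as np

def low_pass(signal, dt=0.01, tau=0.2):
    """y[k] = y[k-1] + (dt / (tau + dt)) * (x[k] - y[k-1])"""
    alpha = dt / (tau + dt)
    y = np.zeros_like(signal)
    y[0] = signal[0]
    for k in range(1, len(signal)):
        y[k] = y[k - 1] + alpha * (signal[k] - y[k - 1])
    return y

t = np.arange(0, 5, 0.01)
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)  # noisy slow signal
x_filtered = low_pass(x)
```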
- Title
- Performance Prediction Model for Adaptive Traffic Control Systems (ATCS) using field data.
- Creator
-
Mirza, Masood, Radwan, Essam, Abou-Senna, Hatem, Abdel-Aty, Mohamed, Zheng, Qipeng, University of Central Florida
- Abstract / Description
-
Reductions in capital expenditure revenues have created greater demands from users for quality service from existing facilities at lower costs, forcing agencies to evaluate the performance of projects in more comprehensive and "greener" ways. The use of Adaptive Traffic Control Systems (ATCS) is a step in the right direction, enabling practitioners and engineers to develop and implement traffic optimization strategies that achieve greater capacity out of existing systems by optimizing traffic signals based on real-time traffic demands and flow patterns. However, the industry lags in developing modeling tools for ATCS that can predict the changes in MOEs due to changes in traffic flow (i.e., volume and/or travel direction), making it difficult for practitioners to measure the magnitude of the impacts and to develop an appropriate mitigation strategy. The impetus of this research was to explore the potential of utilizing available data from ATCS to develop prediction models for the critical MOEs and for the entire intersection. First, extensive data collection efforts were initiated to collect data from intersections in Marion County, Florida. The data collected included volume, geometry, signal operations, and performance for an extended period. Second, the field data were scrubbed using macros to develop a clean data set for model development. Third, prediction models for the MOEs (wait time and queue) for the critical movements were developed using generalized linear regression modeling techniques, based on a Poisson distribution with a log-linear function. Finally, the models were validated using data collected from intersections within Orange County, Florida. Also, as part of this research, an Intersection Performance Index (IPI) model, a LOS prediction model for the entire intersection, was developed. This model was based on the MOEs (wait time and queue) for the critical movements. In addition, IPI thresholds and corresponding intersection capacity designations were developed to establish the level of service at the intersection. The IPI values and thresholds were developed on the same principles as the Intersection Capacity Utilization (ICU) procedure, then tested and validated against corresponding ICU values and ICU LOS. (A GLM sketch follows this record.)
- Date Issued
- 2018
- Identifier
- CFE0007055, ucf:51975
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007055
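A hedged sketch of the modeling step named in the abstract: a Poisson generalized linear model with a log link, predicting a movement-level MOE from traffic and signal inputs. The file name and all column names below are assumptions, not the study's fields.

```python
# Poisson GLM sketch for a count-type MOE (queue, in vehicles).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.read_csv("atcs_field_data.csv")  # hypothetical field-data table
model = smf.glm(
    "queue_count ~ approach_volume + green_ratio + cycle_length",  # invented predictors
    data=data,
    family=sm.families.Poisson(),  # log link is the Poisson default
).fit()
print(model.summary())
```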
- Title
- PREDICTING RISKS OF INVASION OF CAULERPA SPECIES IN FLORIDA.
- Creator
-
Glardon, Christian, Walters, Linda, University of Central Florida
- Abstract / Description
-
Invasions of exotic species are one of the primary causes of biodiversity loss on our planet (National Research Council 1995). In the marine environment, all habitat types, including estuaries, coral reefs, mud flats, and rocky intertidal shorelines, have been impacted (e.g. Bertness et al. 2001). Recently, the topic of invasive species has caught the public's attention. In particular, there is worldwide concern about the aquarium strain of the green alga Caulerpa taxifolia (Vahl) C. Agardh, which was introduced to the Mediterranean Sea in 1984 from the Monaco Oceanographic Museum. Since that time, it has flourished across thousands of hectares of near-shore waters. More recently, C. taxifolia has invaded southern Californian and Australian waters. Since the waters of Florida are similar to the waters of the Mediterranean Sea and other invaded sites, my study focuses on determining potential invasion locations in Florida. I look at the present distribution of the native strain of C. taxifolia in Florida as well as the distribution of the whole genus around the state. During this study, I address three questions: 1) What is the current distribution of Caulerpa spp. in Florida? 2) Can I predict the location of potential Caulerpa spp. invasions using a set of environmental parameters and correlate them to the occurrence of the algae with the support of Geographic Information System (GIS) maps? 3) Using the results of part two, is there an ecologically preferred environment for one or all Caulerpa spp. in Florida? To answer these questions, I surveyed 24 areas in each of 6 zones chosen in a stratified manner along the Floridian coastline to evaluate the association of potential indicators with Caulerpa. Latitude, presence or absence of seagrass beds, human population density, and proximity to marinas were chosen as the 4 parameters expected to correlate with Caulerpa occurrences. A logistic regression model assessing the association of Caulerpa occurrence with the measured variables was developed to predict current and future probabilities of Caulerpa spp. presence throughout the state. Fourteen different species of Caulerpa were found in 26 of the 132 sites visited. There was a positive correlation between Caulerpa spp. occurrence and both seagrass bed presence and proximity to marinas. There was a negative correlation with latitude and human population density. The C. taxifolia aquarium strain was not found. Percent correct for the model was 61.5% for presence and 98.1% for absence. This prediction model will allow us to focus on particular areas for future surveys. (A model sketch follows this record.)
- Date Issued
- 2006
- Identifier
- CFE0001041, ucf:46796
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001041
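An illustrative reconstruction of the survey's presence/absence model: a logistic regression of Caulerpa occurrence on the four site parameters named in the abstract. The file name and column names are assumptions made for the sketch.

```python
# Logistic regression sketch over the four surveyed site parameters.
import pandas as pd
import statsmodels.formula.api as smf

sites = pd.read_csv("florida_sites.csv")  # hypothetical: 132 surveyed sites
fit = smf.logit(
    "caulerpa_present ~ latitude + seagrass_present + pop_density + dist_to_marina",
    data=sites,
).fit()
# Coefficient signs should mirror the abstract: positive for seagrass presence
# and marina proximity, negative for latitude and population density.
print(fit.params)
pred = (fit.predict(sites) > 0.5).astype(int)  # cf. reported 61.5% / 98.1% percent correct
```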
- Title
- Predicting Risk to Reoffend: Establishing the Validity of the Positive Achievement Change Tool.
- Creator
-
Martin, Julie, Wan, Thomas, Winton, Mark, Martin, Lawrence, Chen, Hsueh-Fen, University of Central Florida
- Abstract / Description
-
In recent years, there has been increased reliance on the use of risk assessment in the juvenile justice system to predict and classify offenders based on their risk to reoffend. Over the years, the predictive validity of risk assessments has improved through the inclusion of actuarial assessment and dynamic risk factors. The predictive validity of certain assessments, such as the Youth Level of Service/Case Management Inventory (YLS/CMI), has been well established through numerous replication studies on different subgroups of the population. The validity of other instruments, such as the Positive Achievement Change Tool (PACT), is in its infancy, having only been validated on the sample of the population for which it was created. The PACT, a relatively new juvenile risk assessment tool, was adapted from the Washington State Juvenile Court Assessment and validated on the Florida juvenile population. This study sought to demonstrate the predictive validity of the PACT risk assessment, analyze gender differences in juvenile recidivism, and determine the relative importance of individual-level, social-level, and community-level variables in the prediction of recidivism for a sample of juveniles in Tarrant County, Texas. The results of this research confirmed the predictive validity of the PACT for juveniles served by Tarrant County Juvenile Services (TCJS). Despite possessing adequate predictive validity for the entire population, gender-specific analyses revealed differences in the ability of the PACT to accurately classify female delinquents based on risk to reoffend. Not only did gender differences emerge in the predictive validity of the PACT, but male and female recidivism was also predicted by different social-level indicators. The results of this research provided further evidence for social-causation theories of crime and delinquency, with social-level indicators exerting the strongest relationship with recidivism when compared to individual-level and community-level predictors. The inability of community-level predictors to enhance the predictive accuracy of the assessment suggests broad application of the PACT across jurisdictions. TCJS has invested a considerable amount of time, resources, and funding in the implementation and maintenance of the PACT. The results of this study provided support and direction for the continued use of the PACT at TCJS. In addition, establishing the predictive validity of the PACT on the Tarrant County juvenile population satisfied the legislative requirement for a population-specific validation of the risk assessment implemented in each county. (A validity-check sketch follows this record.)
- Date Issued
- 2012
- Identifier
- CFE0004221, ucf:48992
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004221
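A minimal, assumption-laden sketch of one gender-specific validation step of the kind the abstract reports: comparing the risk score's discrimination (AUC) for male and female youth. The file and column names are invented, and AUC is just one common validity statistic for such studies.

```python
# Gender-specific AUC comparison sketch for a risk-assessment score.
import pandas as pd
from sklearn.metrics import roc_auc_score

youth = pd.read_csv("pact_scores.csv")  # hypothetical columns below
for sex, grp in youth.groupby("gender"):
    auc = roc_auc_score(grp["reoffended"], grp["pact_risk_score"])
    print(f"{sex}: AUC = {auc:.3f}")  # a gap here would echo the gender findings
```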
- Title
- Internet of Things Business Modeling and Analysis using Agent-Based Simulation.
- Creator
-
Basingab, Mohammed, Rabelo, Luis, Elshennawy, Ahmad, Lee, Gene, Rahal, Ahmad, University of Central Florida
- Abstract / Description
-
Internet of Things (IoT) is a new vision of an integrated network covering physical objects that are able to collect and exchange data. It enables previously unconnected devices and objects to become connected by equipping them with communication technology such as sensors and radio frequency identification (RFID) tags. As technology progresses towards new paradigms such as IoT, there is a need for an approach to identify the significance of these projects. Conventional simulation modeling and data analysis approaches are not able to capture the system complexity, or they suffer from a lack of the data needed to build a prediction. Agent-based modeling (ABM) offers an efficient simulation scheme to capture the structure of this dimension and a potential solution. Two case studies are proposed in this research. The first introduces a conceptual case study addressing the use of agent-based simulations to verify the effectiveness of an IoT business model. The objective of the study is to assess the feasibility of such an application in the market of the city of Orlando (Florida, United States). The second case study uses ABM to simulate the operational behavior of 7,420 refrigeration units in one of the largest retail organizations in Saudi Arabia and assess the economic feasibility of IoT implementation by estimating the return on investment (ROI). (A toy ROI simulation follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006855, ucf:51756
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006855
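A toy, much-simplified stand-in for the second case study: a plain Monte Carlo loop rather than a full ABM, in which each refrigeration unit may fail each year, IoT monitoring reduces the cost of a failure, and ROI is computed from the accumulated savings. Every number below is invented; only the unit count comes from the abstract.

```python
# Monte Carlo ROI sketch for IoT-monitored refrigeration units.
import random

N_UNITS, YEARS = 7420, 5
P_FAIL = 0.08                              # assumed annual failure probability
COST_FAIL, COST_FAIL_IOT = 4000.0, 1200.0  # assumed $/event without/with monitoring
IOT_COST_PER_UNIT = 150.0                  # assumed sensor + connectivity cost

random.seed(1)
savings = 0.0
for _ in range(YEARS):
    for _ in range(N_UNITS):
        if random.random() < P_FAIL:       # unit fails this year
            savings += COST_FAIL - COST_FAIL_IOT
investment = N_UNITS * IOT_COST_PER_UNIT
print(f"ROI = {(savings - investment) / investment:.1%}")
```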
- Title
- TWO ESSAYS ON INSTITUTIONAL INVESTORS.
- Creator
-
Nguyen, Hoang, Chen, Honghui, University of Central Florida
- Abstract / Description
-
This dissertation consists of two essays investigating trading by institutions and its impact on the stock market. In the first essay, I investigate why changes in institutional breadth predict returns. I first show that changes in breadth are positively associated with abnormal returns over the following four quarters. I then demonstrate that this return predictability can be attributed to information about the firms' future operating performance. When I examine different types of institutions independently, I find that the predictive power varies across the population of institutions. More specifically, institutions that follow an active management style are better able to predict future returns than passive institutions, and their predictive power appears to be associated with information about future earnings growth. These findings are consistent with the information hypothesis that changes in breadth of institutional ownership can predict returns because they contain information about the fundamental value of firms. In the second essay, I examine institutional herding behavior and its impact on stock prices. I document that herds by institutions usually last for more than one quarter and that herds occur more frequently for small and medium-size stocks. I find that after herds end, there are reversals in stock returns for up to four quarters. The magnitude of reversals is positively related to the duration of herding and negatively related to the price impact of current herding activity. This pattern in returns prevails for all sub-periods examined and is concentrated in small and medium-size stocks. My findings suggest that institutional herding may destabilize stock prices. (A regression sketch follows this record.)
- Date Issued
- 2007
- Identifier
- CFE0001731, ucf:47304
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001731
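A sketch, under an assumed firm-quarter panel layout, of the first essay's core predictive relation: next-quarter abnormal returns regressed on the change in institutional breadth. The file and column names are invented stand-ins.

```python
# Predictive panel regression sketch: abnormal return on breadth change.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("breadth_panel.csv")  # hypothetical firm-quarter rows
panel = panel.sort_values(["firm", "quarter"])
panel["d_breadth"] = panel.groupby("firm")["breadth"].diff()
panel["abret_next"] = panel.groupby("firm")["abnormal_return"].shift(-1)

fit = smf.ols("abret_next ~ d_breadth", data=panel.dropna()).fit()
print(fit.summary())  # a positive d_breadth coefficient would match the essay
```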
- Title
- THE EFFECTS OF DIFFERENTIAL ITEM FUNCTIONING ON PREDICTIVE BIAS.
- Creator
-
Bryant, Damon, Stone-Romero, Eugene, University of Central Florida
- Abstract / Description
-
The purpose of this research was to investigate the relation between measurement bias at the item level (differential item functioning, dif) and predictive bias at the test score level. Dif was defined as a difference in the probability of getting a test item correct for examinees with the same ability but from different subgroups. Predictive bias was defined as a difference in subgroup regression intercepts and/or slopes in predicting a criterion. Data were simulated by computer. Two hypothetical subgroups (a reference group and a focal group) were used. The predictor was a composite score on a dimensionally complex test with 60 items. Sample size (35, 70, and 105 per group), validity coefficient (.3 or .5), and the mean difference on the predictor (0, .33, .66, and 1 standard deviation, sd) and the criterion (0 and .35 sd) were manipulated. The percentage of items showing dif (0%, 15%, and 30%) and the effect size of dif (small = .3, medium = .6, and large = .9) were also manipulated. Each of the 432 conditions in the 3 x 2 x 4 x 2 x 3 x 3 design was replicated 500 times. For each replication, a predictive bias analysis was conducted, and the detection of predictive bias against each subgroup was the dependent variable. The percentage of dif and the effect size of dif were hypothesized to influence the detection of predictive bias; hypotheses were also advanced about the influence of sample size and mean subgroup differences on the predictor and criterion. Results indicated that dif was not related to the probability of detecting predictive bias against any subgroup. Results were inconsistent with the notion that measurement bias and predictive bias are mutually supportive, i.e., that the presence (or absence) of one type of bias is evidence in support of the presence (or absence) of the other type of bias. Sample size and mean differences on the predictor/criterion had direct and indirect effects on the probability of detecting predictive bias against both reference and focal groups. Implications for future research are discussed. (A simulation sketch follows this record.)
- Date Issued
- 2004
- Identifier
- CFE0000157, ucf:46160
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000157
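A minimal sketch of one simulated predictive-bias check in the standard regression form: regress the criterion on the test score, the group indicator, and their interaction, where nonzero group terms signal intercept or slope bias. The data here are synthetic, using two of the design's stated levels (n = 70 per group, validity about .3); the dissertation's full design manipulated many more factors.

```python
# One replication of a regression-based predictive-bias analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 70  # per-group sample size, one of the studied levels
df = pd.DataFrame({
    "score": rng.normal(size=2 * n),
    "group": [0] * n + [1] * n,  # 0 = reference, 1 = focal
})
df["criterion"] = 0.3 * df["score"] + rng.normal(size=2 * n)  # validity ~ .3, no true bias

fit = smf.ols("criterion ~ score * group", data=df).fit()
print(fit.pvalues[["group", "score:group"]])  # intercept-bias and slope-bias tests
```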
- Title
- EVALUATING RAMP METERING AND VARIABLE SPEED LIMITS TO REDUCE CRASH POTENTIAL ON CONGESTED FREEWAYS USING MICRO-SIMULATION.
- Creator
-
Dhindsa, Albinder, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Recent research at UCF into defining surrogate measures for identifying crash-prone conditions on freeways has led to the introduction of several statistical models that can flag such conditions with a good degree of accuracy. Outputs from these models have the potential to be used as real-time safety measures on freeways. They may also act as the basis for the evaluation of several intervention strategies that might help mitigate the risk of crashes. Ramp metering and variable speed limits are two approaches with the potential to become effective implementation strategies for improving safety conditions on congested freeways. This research evaluates both strategies in different configurations and attempts to quantify their effect on crash risk on a 9-mile section of Interstate-4 in the Orlando metropolitan region. The section consists of 17 loop detector stations, 11 on-ramps, and 10 off-ramps. PARAMICS micro-simulation is used as the tool for modeling the freeway section. The simulated network is calibrated and validated for 5-minute average flows and speeds using loop detector data. The feedback ramp metering algorithm ALINEA is used for controlling access from up to 7 on-ramps. Variable speed limits are implemented based on real-time speed conditions prevailing in the whole 9-mile section. Both strategies are tested separately as well as together to determine the individual effects of all the parameters involved. The results have been used to formulate and recommend the best possible strategy for minimizing the risk of crashes on the corridor. The study concluded that ramp metering improves conditions on the freeway in terms of safety by decreasing the variance in speeds and decreasing average occupancy. A safety benefit index was developed for quantifying the reduction in crash risk, and it indicated that an optimal implementation strategy might produce benefits of up to 55%. Conditions on the freeway section improved with an increase in the number of metered ramps. It was also observed that shorter signal cycles for metered ramps were more suitable for metering multiple ramps. Ramp metering at multiple locations also decreased segment-wide travel times by 5% and was even able to offset the delays incurred by drivers at the metered on-ramps. Variable speed limits (VSL) were individually not as effective as ramp metering, but when implemented along with ramp metering they were found to further improve safety on the freeway section under consideration. By means of a detailed experimental design, it was observed that the best strategy for introducing speed limit changes was to raise the speed limits downstream of the location of interest by 5 mph while leaving the speed limits upstream unchanged. A coordinated strategy involving simultaneous application of VSL and ramp metering provided safety benefits of up to 56% for the study section according to the safety benefit index. It also improved the average speeds on the network besides decreasing the overall network travel time by as much as 21%. (An ALINEA sketch follows this record.)
- Date Issued
- 2005
- Identifier
- CFE0000913, ucf:46741
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000913
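The abstract names the ALINEA feedback law, whose standard statement sets each metering rate from the previous rate and the gap between a target and the measured downstream occupancy. The sketch below is that textbook law, not code from the thesis; the gain, target occupancy, and rate bounds are typical illustrative values.

```python
# ALINEA local ramp-metering law: r(k) = r(k-1) + K_R * (O_target - O_measured).
K_R = 70.0        # regulator gain, veh/h per percent occupancy (typical value)
O_TARGET = 18.0   # desired downstream occupancy, percent (assumed)
R_MIN, R_MAX = 200.0, 1800.0  # assumed metering-rate bounds, veh/h

def alinea(prev_rate, occupancy_pct):
    rate = prev_rate + K_R * (O_TARGET - occupancy_pct)
    return max(R_MIN, min(R_MAX, rate))   # clamp to feasible rates

rate = 900.0
for occ in [15.0, 20.0, 25.0, 22.0]:      # measured occupancy each control interval
    rate = alinea(rate, occ)
    print(f"occupancy {occ:.0f}% -> metering rate {rate:.0f} veh/h")
```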
- Title
- Thermomechanical Fatigue Life Prediction of Notched 304 Stainless Steel.
- Creator
-
Karl, Justin, Gordon, Ali, Bai, Yuanli, Raghavan, Seetha, Nicholson, David, University of Central Florida
- Abstract / Description
-
The behavior of materials as they are subjected to combined thermal and mechanical fatigue loads is an area of research that carries great significance in a number of engineering applications. The power generation, petrochemical, and aerospace industries operate machinery with expensive components that undergo repeated applications of force while simultaneously being exposed to variable-temperature working fluids. A case of considerable importance is found in steam turbines, which subject blades to cyclic loads from rotation as well as from the passing of heated gases. The complex strain and temperature histories from this type of operation, combined with the geometric profile of the blades, make accurate prediction of service life for such components challenging. Development of a deterministic life prediction model backed by physical data would allow design and operation of turbines with higher efficiency and greater regard for reliability. The majority of thermomechanical fatigue (TMF) life prediction modeling research attempts to correlate basic material property data with simplistic strain and thermal histories. With the exception of very limited cases, these efforts have been insufficient and imprecise. Early researchers did not account for the multiple damage mechanisms that operate and interact within a material during TMF loads, and did not adequately address the extent of the relationship between smooth and notched parts. More recent research that adequately recognizes the multivariate nature of TMF develops models that handle life reduction through summation of constitutive damage terms. It is feasible that a modification to the damage-based approach can sufficiently include cases that involve complex geometry. The focus of this research is to construct an experimentally backed extension of the damage-based approach that improves the handling of geometric discontinuities. Smooth and notched specimens of Type 304 stainless steel were subjected to several types of idealized fatigue conditions to assemble a clear picture of the types of damage occurring in a steam turbine and similarly loaded mechanical systems. These results were compared with a number of idealized TMF experiments and supplemented by numerical simulation and microscopic observation. A non-uniform damage-summation constitutive model was developed, based primarily on physical observations. An additional, simpler model was developed based on phenomenological effects. Findings from this study will be applicable to life prediction efforts in other similar material and load cases. (A damage-summation sketch follows this record.)
- Date Issued
- 2013
- Identifier
- CFE0004870, ucf:49666
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004870
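A generic illustration in the spirit of the damage-summation approach the abstract describes, not the dissertation's actual non-uniform constitutive model: total per-cycle TMF damage taken as the sum of fatigue, creep, and oxidation contributions, with life as the reciprocal of cycle damage. All numbers are invented.

```python
# Generic linear damage-summation sketch for TMF life.
def tmf_life(d_fatigue, d_creep, d_oxidation):
    """Inputs are per-cycle damage fractions from separate (unspecified) submodels."""
    d_total = d_fatigue + d_creep + d_oxidation
    return 1.0 / d_total  # cycles to failure when accumulated damage reaches 1

# Example with made-up per-cycle contributions:
print(f"{tmf_life(2e-5, 1e-5, 5e-6):,.0f} cycles")
```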
- Title
- DELAY MODELING AND LONG-RANGE PREDICTIVE CONTROL OF CZOCHRALSKI GROWTH PROCESS.
- Creator
-
Shah, Dhaval, Klemenz, Christine, University of Central Florida
- Abstract / Description
-
This work presents the Czochralski growth dynamics as a time-varying, delay-based model, applied to the growth of La3Ga5.5Ta0.5O14 (LGT) piezoelectric crystals. The growth of high-quality, large-diameter oxides by the Czochralski technique requires theoretical understanding and optimization of all relevant process parameters, growth conditions, and melt chemistry. Presently, proportional-integral-derivative (PID) type controllers are widely accepted for constant-diameter crystal growth by Czochralski. Such control systems, however, do not account for aspects such as the transportation delay of heat from the crucible wall to the crystal solidification front, the heat radiated from the crucible wall above the melt surface, and the varying melt level. During crystal growth, these time delays play a dominant role and pose a significant challenge to the control design. In this study, a time-varying linear delay model was applied to the identification of nonlinearities in the growth dynamics. Initial results revealed the benefits of this model against actual growth results. These results were used to develop a long-range model predictive control system design. Two different control techniques using long-range prediction are studied comparatively. Development and testing of the new control system on a real-time growth system are discussed in detail. The results are promising and suggest future work in this direction. Problems encountered during crystal growth and the optimization of crystal growth parameters are also discussed, along with the control system design. (A delay-prediction sketch follows this record.)
- Date Issued
- 2009
- Identifier
- CFE0002581, ucf:48250
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002581
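A conceptual sketch only, not the thesis's model: long-range output prediction for a first-order discrete plant with input (transport) delay, the structure the abstract attributes to the heat path from crucible to solidification front. The pole, gain, and delay are illustrative, not LGT values.

```python
# Long-range prediction for a first-order plant with input delay.
import numpy as np

a, b, delay = 0.95, 0.05, 10  # discrete pole, gain, delay in samples (illustrative)

def predict_horizon(y0, u_future, u_past):
    """Predict outputs over the horizon; assumes len(u_past) >= delay,
    so the input applied at step k acts `delay` samples later."""
    u_all = list(u_past) + list(u_future)
    y, out = y0, []
    for k in range(len(u_future)):
        y = a * y + b * u_all[len(u_past) + k - delay]
        out.append(y)
    return np.array(out)

# Usage: predict 20 steps ahead given a constant future input and a quiet past.
y_pred = predict_horizon(y0=1.0, u_future=[1.0] * 20, u_past=[0.0] * 10)
```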
- Title
- Data Representation in Machine Learning Methods with its Application to Compilation Optimization and Epitope Prediction.
- Creator
-
Sher, Yevgeniy, Zhang, Shaojie, Dechev, Damian, Leavens, Gary, Gonzalez, Avelino, Zhi, Degui, University of Central Florida
- Abstract / Description
-
In this dissertation we explore the application of machine learning algorithms to compilation phase-order optimization and epitope prediction. The common thread running through these two disparate domains is the type of data being dealt with. In both problem domains we are dealing with categorical data, whose representation plays a significant role in the performance of classification algorithms. We first present a neuroevolutionary approach which orders optimization phases to generate compiled programs with performance superior to those compiled using LLVM's -O3 optimization level. Performance improvements, calculated as the speed of the compiled program's execution, ranged from 27% for the ccbench program to 40.8% for bzip2. This dissertation then explores the problem of data representation of 3D biological data, such as amino acids. A new approach for distributed representation of 3D biological data through the process of embedding is proposed and explored. Analogously to word embedding, we developed a system that uses atomic and residue coordinates to generate distributed representations for residues, which we call 3D Residue BioVectors. Preliminary results are presented which demonstrate that even low-dimensional 3D Residue BioVectors can be used to predict conformational epitopes and protein-protein interactions with promising proficiency. The generation of such 3D BioVectors, and the proposed methodology, opens the door for substantial future improvements and application domains. The dissertation then explores the problem domain of linear B-cell epitope prediction, which deals with predicting epitopes based strictly on the protein sequence. We present the DRREP system, which demonstrates how an ensemble of shallow neural networks can be combined with string kernels and an analytical learning algorithm to produce state-of-the-art epitope prediction results. DRREP was tested on the SARS subsequence, the HIV, Pellequer, and AntiJen datasets, and the standard SEQ194 test dataset. AUC improvements achieved over the state of the art ranged from 3% to 8%. Finally, we present the SEEP epitope classifier, a multi-resolution SVM ensemble-based classifier which uses conjoint triad feature representation and produces state-of-the-art classification results. SEEP leverages the domain-specific, knowledge-based protein sequence encoding developed within the protein-protein interaction research domain. Using an ensemble of multi-resolution SVMs and a sliding-window-based pre- and post-processing pipeline, SEEP achieves an AUC of 91.2 on the standard SEQ194 test dataset, a 24% improvement over the state of the art. (A sliding-window sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006793, ucf:51829
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006793
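A hypothetical mini-example of the sliding-window idea the abstract mentions for sequence-based epitope prediction: score every fixed-length window of a protein sequence with a classifier. The scorer below is a crude placeholder standing in for DRREP's ensemble, and the "epitope-prone" residue set and test sequence are invented.

```python
# Sliding-window scoring sketch for linear B-cell epitope candidates.
def windows(seq, size=9):
    return [(i, seq[i:i + size]) for i in range(len(seq) - size + 1)]

def toy_score(window):
    # Placeholder for a trained model: fraction of "epitope-prone" residues.
    prone = set("PGSDNEKR")  # illustrative residue set, not from the dissertation
    return sum(aa in prone for aa in window) / len(window)

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # made-up sequence
scores = [(i, toy_score(w)) for i, w in windows(seq)]
best = max(scores, key=lambda t: t[1])   # highest-scoring candidate window
print(best)
```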
- Title
- Learning Dynamic Network Models for Complex Social Systems.
- Creator
-
Hajibagheri, Alireza, Sukthankar, Gita, Turgut, Damla, Chatterjee, Mainak, Lakkaraju, Kiran, University of Central Florida
- Abstract / Description
-
Human societies are inherently complex and highly dynamic, resulting in rapidly changing social networks containing multiple types of dyadic interactions. Analyzing these time-varying multiplex networks with approaches developed for static, single-layer networks often produces poor results. To address this problem, our approach is to explicitly learn the dynamics of these complex networks. This dissertation focuses on five problems: 1) learning link formation rates; 2) predicting changes in community membership; 3) using time series to predict changes in network structure; 4) modeling coevolution patterns across network layers; and 5) extracting information from negative layers of a multiplex network. To study these problems, we created a rich dataset extracted from observing social interactions in the massively multiplayer online game Travian. Most online social media platforms are optimized to support a limited range of social interactions, primarily focusing on communication and information sharing. In contrast, relations in massively multiplayer online games (MMOGs) are often formed during the course of gameplay and evolve as the game progresses. To analyze the players' behavior, we constructed multiplex networks with link types for raids, communication, and trading. The contributions of this dissertation include 1) extensive experiments on the dynamics of networks formed from diverse social processes; 2) new game-theoretic models for community detection in dynamic networks; and 3) supervised and unsupervised methods for link prediction in multiplex coevolving networks for both positive and negative links. We demonstrate that our holistic approach to modeling network dynamics in coevolving multiplex networks outperforms factored methods that separately consider temporal and cross-layer patterns. (A link-prediction sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006598, ucf:51306
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006598
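A generic illustration of one multiplex link-prediction baseline, not the dissertation's supervised or unsupervised methods: score a candidate edge by pooling common neighbors across layers. The tiny trade and chat layers below are invented.

```python
# Cross-layer common-neighbors baseline for multiplex link prediction.
import networkx as nx

trade = nx.Graph([("a", "b"), ("b", "c")])  # toy "trading" layer
chat = nx.Graph([("a", "c"), ("c", "d")])   # toy "communication" layer

def cross_layer_cn(u, v):
    score = 0
    for g in (trade, chat):
        if u in g and v in g:
            score += len(set(g[u]) & set(g[v]))  # shared neighbors in this layer
    return score

print(cross_layer_cn("a", "d"))  # 1: common neighbor "c" in the chat layer
```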
- Title
- A PREDICTIVE MODEL FOR BENCHMARKING ACADEMIC PROGRAMS (PBAP) USING U.S. NEWS RANKING DATA FOR ENGINEERING COLLEGES OFFERING GRADUATE PROGRAMS.
- Creator
-
Chuck, Lisa, Tubbs, LeVester, University of Central Florida
- Abstract / Description
-
Improving national ranking is an increasingly important issue for university administrators. While research has been conducted on performance measures in higher education, research designs have lacked a predictive quality. Studies on the U.S. News college rankings have provided insight into the methodology; however, none of them have provided a model to predict what change in variable values would likely cause an institution to improve its standing in the rankings. The purpose of this study was to develop a predictive model for benchmarking academic programs (pBAP) for engineering colleges. The 2005 U.S. News ranking data for graduate engineering programs were used to create a four-tier predictive model (pBAP). The pBAP model correctly classified 81.9% of the cases in their respective tiers. To test the predictive accuracy of the pBAP model, the 2005 U.S. News data were entered into the pBAP variate developed using the 2004 U.S. News data. The model predicted that 88.9% of the institutions would remain in the same ranking tier in the 2005 U.S. News rankings (compared with 87.7% in the actual data) and that 11.1% of the institutions would demonstrate tier movement (compared with an actual movement of 12.3%). The likelihood of improving an institution's standing in the rankings was greater when increasing the values of 3 of the 11 variables in the U.S. News model: peer assessment score, recruiter assessment score, and research expenditures. (A classification sketch follows this record.)
- Date Issued
- 2005
- Identifier
- CFE0000431, ucf:46377
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000431
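A hedged sketch of the pBAP idea: fit a tier classifier on one year's U.S. News variables and apply it to the next year's data. The dissertation's exact statistical technique is not specified here, so linear discriminant analysis is used purely as a stand-in, and the file and column names are placeholders for the 11 U.S. News variables.

```python
# Year-over-year tier-classification sketch (LDA as a stand-in technique).
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

d2004 = pd.read_csv("usnews_2004.csv")  # hypothetical files and columns
d2005 = pd.read_csv("usnews_2005.csv")
features = ["peer_assessment", "recruiter_assessment", "research_expenditures"]

lda = LinearDiscriminantAnalysis().fit(d2004[features], d2004["tier"])
pred = lda.predict(d2005[features])
same_tier = (pred == d2005["tier"]).mean()  # cf. the reported 88.9% predicted stability
print(f"predicted to stay in tier: {same_tier:.1%}")
```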
- Title
- EXAMINING DYNAMIC VARIABLE SPEED LIMIT STRATEGIES FOR THE REDUCTION OF REAL-TIME CRASH RISK ON FREEWAYS.
- Creator
-
Cunningham, Ryan, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Recent research at the University of Central Florida involving crashes on Interstate-4 in Orlando, Florida has led to the creation of new statistical models capable of determining the crash risk on the freeway (Abdel-Aty et al., 2004; 2005, Pande and Abdel-Aty, 2006). These models are able to calculate the rear-end and lane-change crash risks along the freeway in real time through the use of static information at various locations along the freeway as well as real-time traffic data obtained by loop detectors. Since these models use real-time traffic data, they are capable of calculating rear-end and lane-change crash risk values as the traffic flow conditions change on the freeway. The objective of this study is to examine the potential benefits of variable speed limit implementation techniques for reducing crash risk along the freeway. Variable speed limits are an ITS strategy typically used upstream of a queue to reduce the effects of congestion. By lowering the speeds of the vehicles approaching a queue, more time is given for the queue to dissipate from the front before it continues to grow from the back. This study uses variable speed limit strategies in a corridor-wide attempt to reduce rear-end and lane-change crash risks where speed differences between upstream and downstream vehicles are high. The idea of homogeneous speed zones was also introduced in this study to determine the distance over which variable speed limits should be implemented from a station of interest. This is unique, since it is the first time a dynamic distance has been considered for variable speed limit implementation. Several VSL strategies were found to successfully reduce the rear-end and lane-change crash risks at low-volume traffic conditions (the 60% and 80% loading conditions). In every case, the most successful treatments involved lowering upstream speed limits by 5 mph and raising downstream speed limits by 5 mph. In the free-flow condition (60% loading), the best treatments involved the more liberal threshold for defining homogeneous speed zones (5 mph) and the more liberal implementation distance (the entire speed zone), as well as a minimum time period of 10 minutes. This treatment was actually shown to significantly reduce network travel time, by 0.8%. It was also shown that this particular implementation strategy (lowering upstream, raising downstream) is wholly resistant to the effects of crash migration in the 60% loading scenario. In the condition approaching congestion (80% loading), the best treatment again involved the more liberal threshold for homogeneous speed zones (5 mph), yet the more conservative implementation distance (half the speed zone), along with a minimum time period of 5 minutes. This particular treatment arose as the best due to its unique capability to resist the increasing effects of crash migration in the 80% loading scenario. It was shown that treatments implemented over half the speed zone were more robust against crash migration than other treatments. The best treatment exemplified the greatest benefit in reduced sections and the greatest resistance to crash migration in other sections. In the 80% loading scenario, the best treatment increased network travel time by less than 0.4%, which is deemed acceptable. No treatment was found to successfully reduce the rear-end and lane-change crash risks in the congested traffic condition (90% loading). This is attributed to the fact that, in the congested state, the speed of vehicles is subject to the surrounding traffic conditions and not to the posted speed limit; therefore, changing the posted speed limit does not affect vehicle speeds in a desirable manner. These conclusions agree with Dilmore (2005). (A speed-zone sketch follows this record.)
- Date Issued
- 2007
- Identifier
- CFE0001723, ucf:47309
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001723
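An illustrative sketch of the "homogeneous speed zone" notion from the abstract, under one plausible reading: grow a zone outward from a station of interest while adjacent stations' average speeds stay within a threshold (5 mph in the liberal setting). The thesis's exact zone definition may differ; the speeds below are invented.

```python
# Grow a homogeneous speed zone around a station of interest.
def homogeneous_zone(speeds, center, threshold=5.0):
    """speeds: per-station average speeds (mph), ordered along the freeway."""
    lo = hi = center
    while lo > 0 and abs(speeds[lo - 1] - speeds[center]) <= threshold:
        lo -= 1
    while hi < len(speeds) - 1 and abs(speeds[hi + 1] - speeds[center]) <= threshold:
        hi += 1
    return lo, hi  # inclusive station indices of the zone

speeds = [62, 60, 58, 44, 41, 43, 57, 61]
print(homogeneous_zone(speeds, center=4))  # -> (3, 5): the slow pocket around station 4
```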
- Title
- STOCHASTIC RESOURCE CONSTRAINED PROJECT SCHEDULING WITH STOCHASTIC TASK INSERTION PROBLEMS.
- Creator
-
Archer, Sandra, Armacost, Robert, University of Central Florida
- Abstract / Description
-
The area of focus for this research is the Stochastic Resource Constrained Project Scheduling Problem (SRCPSP) with Stochastic Task Insertion (STI). The STI problem is a specific form of the SRCPSP, which may be considered a cross between two general problem types: the Stochastic Project Scheduling Problem and the Resource Constrained Project Scheduling Problem. The stochastic nature of this problem lies in the occurrence/non-occurrence of tasks with deterministic duration. Researchers Selim (2002) and Grey (2007) laid the groundwork for research on this problem. Selim (2002) developed a set of robustness metrics and used these to evaluate two initial baseline (predictive) scheduling techniques, optimistic (0% buffer) and pessimistic (100% buffer), where none or all of the stochastic tasks were scheduled, respectively. Grey (2007) expanded the research by developing a new partial buffering strategy for the initial baseline predictive schedule for this problem and found the partial buffering strategy to be superior to Selim's "extreme" buffering approach. The current research continues this work by focusing on resource aspects of the problem, new buffering approaches, and a new rescheduling method. If resource usage is important to project managers, then a set of metrics describing changes to the resource flow between the initial baseline predictive schedule and the final "as-run" schedule is important to measure. Two new sets of resource metrics were constructed, regarding resource utilization and resource flow. Using these new metrics, as well as the Selim/Grey metrics, a new buffering approach was developed that used resource information to size the buffers. The resource-sized buffers did not show significant improvement over Grey's 50% buffer, used as a benchmark. The new resource metrics were used to validate that the 50% buffering strategy is superior to the 0% or 100% buffering of Selim. Recognizing that partial buffers appear to be the most promising initial baseline development approach for STI problems, and understanding that experienced project managers may be able to predict stochastic probabilities based on prior projects, the next phase of the research developed a new set of buffering strategies in which buffers are inserted in proportion to the probability of occurrence. The results of this proportional buffering strategy were very positive, with the majority of the metrics (both robustness and resource), except for stability metrics, improved by using the proportional buffer. Finally, it was recognized that all research thus far for the SRCPSP with STI focused solely on the development of predictive schedules. Therefore, the final phase of this research developed a new reactive strategy that tested three different rescheduling points during schedule eventuation at which a complete rescheduling of the latter portion of the schedule would occur. The results of this new reactive technique indicate that rescheduling improves schedule performance in only a few metrics under very specific network characteristics (those networks with the least restrictive parameters). This research was conducted with extensive use of Base SAS v9.2 combined with SAS/OR procedures to solve project networks, solve resource flow problems, and implement reactive scheduling heuristics. Additionally, Base SAS code was paired with Visual Basic for Applications in Excel 2003 to implement an automated Gantt chart generator that provided visual inspection for validation of the repair heuristics. The results of this research, combined with the results of Selim and Grey, provide strong guidance for project managers regarding how to develop baseline predictive schedules and how to reschedule the project as stochastic tasks (e.g., unplanned work) do or do not occur. Specifically, the results and recommendations are provided in a summary tabular format that describes the recommended initial baseline development approach when a project manager has a good idea of the level and location of the stochasticity for the network, highlights two cases where rescheduling during schedule eventuation may be beneficial, and shows when buffering proportional to the probability of occurrence is recommended or not recommended, as well as the cases where the evidence is inconclusive. (A buffering sketch follows this record.)
- Date Issued
- 2008
- Identifier
- CFE0002491, ucf:47673
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002491
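A toy rendering of the proportional-buffering idea the abstract reports: reserve schedule time for each stochastic task in proportion to its probability of occurring. The serial task list, durations, and probabilities are invented, and real SRCPSP schedules would also respect resource and precedence constraints.

```python
# Proportional buffering sketch: reserved time = duration * P(occurrence).
tasks = [
    # (name, duration, probability the task actually occurs)
    ("A",  4, 1.0),   # deterministic task
    ("S1", 6, 0.3),   # stochastic insertion candidate
    ("B",  5, 1.0),
    ("S2", 3, 0.7),
]

start = 0.0
for name, dur, p in tasks:
    reserved = dur * p            # deterministic tasks (p = 1) get full duration
    print(f"{name}: starts at {start:.1f}, reserves {reserved:.1f} time units")
    start += reserved
```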
- Title
- A PREDICTIVE MODEL FOR BENCHMARKING ACADEMIC PROGRAMS (PBAP) USING U.S. NEWS RANKING DATA FOR ENGINEERING COLLEGES OFFERING GRADUATE PROGRAMS.
- Creator
-
Chuck, Lisa, Tubbs, LeVester, University of Central Florida
- Abstract / Description
-
Improving national ranking is an increasingly important issue for university administrators. While research has been conducted on performance measures in higher education, research designs have lacked a predictive quality. Studies on the U.S. News college rankings have provided insight into the methodology; however, none of them have provided a model to predict what change in variable values would likely cause an institution to improve its standing in the rankings. The purpose of this study was to develop a predictive model for benchmarking academic programs (pBAP) for engineering colleges. The 2005 U.S. News ranking data for graduate engineering programs were used to create a four-tier predictive model (pBAP). The pBAP model correctly classified 81.9% of the cases in their respective tiers. To test the predictive accuracy of the pBAP model, the 2005 U.S. News data were entered into the pBAP variate developed using the 2004 U.S. News data. The model predicted that 88.9% of the institutions would remain in the same ranking tier in the 2005 U.S. News rankings (compared with 87.7% in the actual data) and that 11.1% of the institutions would demonstrate tier movement (compared with an actual movement of 12.3%). The likelihood of improving an institution's standing in the rankings was greater when increasing the values of 3 of the 11 variables in the U.S. News model: peer assessment score, recruiter assessment score, and research expenditures.
- Date Issued
- 2005
- Identifier
- CFE0000576, ucf:46422
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000576
- Title
- MACROSCOPIC TRAFFIC SAFETY ANALYSIS BASED ON TRIP GENERATION CHARACTERISTICS.
- Creator
-
Siddiqui, Chowdhury, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Recent research has shown that incorporating roadway safety in transportation planning has been considered one of the active approaches to improve safety. Aggregate level analysis for predicting crash frequencies had been contemplated to be an important step in this process. As seen from the previous studies various categories of predictors at macro level (census blocks, traffic analysis zones, census tracts, wards, counties and states) have been exhausted to find appropriate correlation with...
Show moreRecent research has shown that incorporating roadway safety in transportation planning has been considered one of the active approaches to improve safety. Aggregate level analysis for predicting crash frequencies had been contemplated to be an important step in this process. As seen from the previous studies various categories of predictors at macro level (census blocks, traffic analysis zones, census tracts, wards, counties and states) have been exhausted to find appropriate correlation with crashes. This study contributes to this ongoing macro level road safety research by investigating various trip productions and attractions along with roadway characteristics within traffic analysis zones (TAZs) of four counties in the state of Florida. Crashes occurring in one thousand three hundred and forty-nine TAZs in Hillsborough, Citrus, Pasco, and Hernando counties during the years 2005 and 2006 were examined in this study. Selected counties were representative from both urban and rural environments. To understand the prevalence of various trip attraction and production rates per TAZ the Euclidian distances between the centroid of a TAZ containing a particular crash and the centroid of the ZIP area containing the at fault driver's home address for that particular crash was calculated. It was found that almost all crashes in Hernando and Citrus County for the years 2005-2006 took place in about 27 miles radius centering at the at-fault drivers' home. Also about sixty-two percent of crashes occurred approximately at a distance of between 2 and 10 miles from the homes of drivers who were at fault in those crashes. These results gave an indication that home based trips may be more associated with crashes and later trip related model estimates which were found significant at 95% confidence level complied with this hypothesized idea. Previous aggregate level road safety studies widely addressed negative binomial distribution of crashes. Properties like non-negative integer counts, non-normal distribution, over-dispersion in the data have increased suitability of applying the negative binomial technique and has been selected to build crash prediction models in this research. Four response variables which were aggregated at TAZ-level were total number of crashes, severe (fatal and severe injury) crashes, total crashes during peak hours, and pedestrian and bicycle related crashes. For each response separate models were estimated using four different sets of predictors which are i) various trip variables, ii) total trip production and total trip attraction, iii) road characteristics, and iv) finally considering all predictors into the model. It was found that the total crash model and peak hour crash model were best estimated by the total trip productions and total trip attractions. On the basis of log-likelihoods, deviance value/degree of freedom, and Pearson Chi-square value/degree of freedom, the severe crash model was best fit by the trip related variables only and pedestrian and bicycle related crash model was best fit by the road related variables only. The significant trip related variables in the severe crash models were home-based work attractions, home-based shop attractions, light truck productions, heavy truck productions, and external-internal attractions. Only two variables- sum of roadway segment lengths with 35 mph speed limit and number of intersections per TAZ were found significant for pedestrian and bicycle related crash model developed using road characteristics only. 
The 1,349 TAZs were grouped into three clusters based on the quartile distribution of trip generation and termed less-tripped, moderately-tripped, and highly-tripped TAZs. It was hypothesized that separate models developed for these clusters would provide a better fit, since clustering increases within-cluster homogeneity. The cluster models were re-estimated using the significant predictors obtained from the joint models and compared with the previous sets of models; however, the differences in model fit (in terms of Akaike Information Criterion values) were not significant. This study points to different approaches for predicting crashes at the zonal level and adds to the macro-level crash modeling literature by taking various trip-related data into account, as previous zone-level safety studies have not explicitly considered trip data as explanatory covariates.
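The cluster comparison could be sketched as follows, again with hypothetical names; the study's exact quartile-to-cluster mapping is not specified, so a simple three-way quantile split stands in for it here:

    # Sketch: group TAZs by trip generation, refit per cluster, compare by AIC.
    # Column names are placeholders; the three-way split approximates the
    # study's quartile-based grouping.
    import pandas as pd
    import statsmodels.api as sm

    taz = pd.read_csv("taz_data.csv")  # same hypothetical dataset as above
    labels = ["less-tripped", "moderately-tripped", "highly-tripped"]
    taz["cluster"] = pd.qcut(taz["total_trip_generation"], q=3, labels=labels)

    sig_vars = ["total_trip_productions", "total_trip_attractions"]
    aic = {}
    for name, grp in taz.groupby("cluster", observed=True):
        Xc = sig_fit = sm.add_constant(grp[sig_vars])
        cfit = sm.GLM(grp["total_crashes"], Xc,
                      family=sm.families.NegativeBinomial()).fit()
        aic[name] = cfit.aic
    print(aic)  # compare per-cluster AICs against the joint model's AIC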
- Date Issued
- 2009
- Identifier
- CFE0002871, ucf:48029
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002871
- Title
- INVESTIGATING AND MODELING THE IMPACTS OF ILLEGAL U-TURN VIOLATIONS AT MEDIANS LOCATED ON FLORIDA'S LIMITED ACCESS HIGHWAYS.
- Creator
-
Al-Sahili, Omar, Al-Deek, Haitham, Hasan, Samiul, Mantzaris, Alexander, University of Central Florida
- Abstract / Description
-
Illegal U-turn violations are considered a form of Wrong-Way Driving (WWD) and can result in head-on crashes, which are often severe because of the high speed of the approaching traffic and the limited time available to avoid the collision. Therefore, reviewing this type of violation and understanding the contributing factors that lead drivers to commit such an illegal maneuver would help officials foresee, and consequently minimize, the potential risks that could lead to WWD crashes. The purpose of this thesis is to investigate illegal U-turn maneuvers on limited-access facilities and identify the significant factors that encourage or discourage drivers from committing this type of violation. The study area included the Central Florida (CF) and South Florida (SF) areas, where about 6 crossover crashes and 620 citations were recorded at median facilities from 2011 to 2016. The modeling methodology had three goals: predicting the annual number of illegal U-turn violations across traversable grass median sections using a Poisson regression model; selecting the most effective predictors of illegal U-turn violations using the least absolute shrinkage and selection operator (LASSO); and estimating the annual probability of an illegal U-turn violation at a paved, official-use-only median opening using a logistic regression model. To determine the variables that influence illegal U-turn violations, nine geometric-design and two traffic-condition exploratory variables were analyzed in these models. Several variables were significant in the Poisson model, such as the distance to the nearest interchange, the length of the median segment, the number of access points in the segment, the median design, and the speed limit. The LASSO method identified the median design and the distance to the nearest interchange as the most effective variables. The logistic regression model for the CF area indicated that the speed limit and the AADT were the significant contributing factors, whereas in the SF area the significant variables were the distance to the nearest access point and the spacing between median openings. This variation indicates a considerable difference between the two study areas that should be accounted for when planning the allocation of median countermeasures. The significant variables identified in this modeling approach represent a first attempt to understand illegal U-turn violations on limited-access highways and to interpret the variables that influence drivers' behavior in performing such maneuvers. Along with the required design guidelines, the models could serve as effective planning tools for selecting appropriate locations for new median openings and for reevaluating existing openings to identify the locations with the lowest potential risk. Other modeling techniques that include additional factors could be tested in future research so that appropriate countermeasures can be installed to reduce or eliminate these illegal U-turns. Furthermore, the methodology could be extended to arterials (or roads with partially controlled access).
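A rough sketch of the three modeling steps named above, under assumed file and column names (median_segments.csv, violations, any_violation, and the listed predictors), not the thesis's actual variables:

    # Sketch of the three modeling goals: Poisson counts, LASSO-style
    # selection, and logistic probability. All names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    seg = pd.read_csv("median_segments.csv")  # hypothetical segment data
    predictors = ["dist_interchange", "segment_length", "access_points",
                  "median_design", "speed_limit", "aadt"]
    X = sm.add_constant(seg[predictors])

    # 1) Poisson regression: annual illegal U-turn counts on grass medians.
    pois = sm.GLM(seg["violations"], X, family=sm.families.Poisson()).fit()

    # 2) LASSO-style selection: an L1-penalized refit; predictors whose
    #    coefficients shrink to zero drop out of the model.
    lasso = sm.GLM(seg["violations"], X,
                   family=sm.families.Poisson()).fit_regularized(
                       alpha=0.1, L1_wt=1.0)
    kept = [v for v, b in zip(X.columns, lasso.params) if abs(b) > 1e-8]

    # 3) Logistic regression: probability of at least one violation per year
    #    at a paved, official-use-only median opening.
    logit = sm.Logit(seg["any_violation"], X).fit()

    print(pois.summary())
    print("LASSO-selected:", kept)
    print("Predicted violation probabilities:", logit.predict(X)[:5])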
- Date Issued
- 2017
- Identifier
- CFE0006708, ucf:51905
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006708