

Title

SIZE AND POWER PROPERTY OF THE ASYMPTOTIC TESTS AND THE BOOTSTRAP TESTS IN THE PROBIT MODEL: SIMULATION RESULTS.

Creator

Shen, Xiaobin, Im, Kyung, University of Central Florida

Abstract / Description

This paper compares the size and power properties of asymptotic tests based on asymptotic standard errors with those of bootstrap tests based on the bootstrap confidence interval in the probit model. The asymptotic tests work surprisingly well, even when the sample size is quite small (e.g., n = 30), for the test of the exclusion hypothesis β = 0. The bootstrap tests work similarly well: they share essentially the same size and power properties as the asymptotic tests when the null hypothesis is β = 0. However, the small-sample probit estimators can be seriously biased when β/σ is large. Consequently, when we are interested in a non-exclusion hypothesis such as β/σ = 1, the conventional asymptotic tests can suffer size distortion and low power. According to our simulation results, though, the size of the bootstrap tests is quite robust to the presence of the bias, and their power is much better. Therefore, the bootstrap approach has some limited usefulness in practice when we are interested in non-exclusion tests such as β/σ = 1 in the probit model.
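The bootstrap test the abstract describes can be sketched roughly as follows. This is a minimal illustration with hypothetical data and settings (n = 30, a single regressor, 200 resamples), not the authors' actual simulation design: fit a probit model by maximum likelihood, resample observation pairs, and form a percentile bootstrap confidence interval for the slope; the exclusion test of β = 0 rejects when 0 falls outside the interval.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit(X, y):
    """Probit MLE: minimize the negative log-likelihood with BFGS."""
    def nll(b):
        p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(0)
n = 30                                   # small sample, as in the abstract
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = (X @ np.array([0.0, 1.0]) + rng.normal(size=n) > 0).astype(float)

beta_hat = fit_probit(X, y)

# Nonparametric bootstrap: resample (x, y) pairs, refit, collect slopes.
B = 200
slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    slopes[b] = fit_probit(X[idx], y[idx])[1]

# Percentile bootstrap CI for the slope; reject H0: beta = 0 if 0 lies outside.
lo, hi = np.percentile(slopes, [2.5, 97.5])
```

With small n, some bootstrap resamples can be nearly separated, inflating individual slope estimates; the percentile interval is comparatively robust to that, which is one reason it behaves well here.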

Date Issued

2006

Identifier

CFE0001222, ucf:46914

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0001222


Title

PARAMETER ESTIMATION IN LINEAR REGRESSION.

Creator

Ollikainen, Kati, Malone, Linda, University of Central Florida

Abstract / Description

Today, increasing amounts of data are available for analysis and, often, for resource allocation. One method of analysis is linear regression, which uses least-squares estimation to estimate a model's parameters. This research investigated, from a user's perspective, the ability of linear regression to estimate the parameters' confidence intervals at the usual 95% level for medium-sized data sets. A controlled simulation environment with known data characteristics (clean data, bias, and/or multicollinearity present) was used to show that underlying problems exist with confidence intervals not including the true parameter (even though the variable was selected). The Elder/Pregibon rule was used for variable selection. The bootstrap percentile and BCa confidence intervals were compared, and adjustments to the usual 95% confidence intervals based on the Bonferroni and Scheffé multiple-comparison principles were investigated. The results show that linear regression has problems capturing the true parameters in the confidence intervals for the sample sizes considered, that the bootstrap intervals perform no better than linear regression, and that the Scheffé intervals are too wide for any application considered. The Bonferroni adjustment is recommended for larger sample sizes and when the t-value for a selected variable is about 3.35 or higher. For smaller sample sizes, all methods show problems with type II errors resulting from confidence intervals being too wide.
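The two bootstrap intervals compared in the abstract can be sketched as follows. This is a simplified illustration on hypothetical simple-regression data (one slope, pair resampling), not the study's actual multi-variable design: the percentile interval takes plain quantiles of the bootstrap distribution, while BCa shifts those quantile levels using a bias-correction term z0 and a jackknife-based acceleration term a.

```python
import numpy as np
from scipy.stats import norm

def ols_slope(x, y):
    # Slope of the least-squares line y = a + b*x.
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)
theta_hat = ols_slope(x, y)

# Bootstrap the slope by resampling (x, y) pairs.
B = 999
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = ols_slope(x[idx], y[idx])

# Percentile interval: plain 2.5% and 97.5% quantiles of the bootstrap slopes.
pct = np.percentile(boot, [2.5, 97.5])

# BCa interval: adjust the quantile levels for bias (z0) and skewness (a).
z0 = norm.ppf(np.mean(boot < theta_hat))
jack = np.array([ols_slope(np.delete(x, i), np.delete(y, i)) for i in range(n)])
jm = jack.mean()
a = np.sum((jm - jack) ** 3) / (6 * np.sum((jm - jack) ** 2) ** 1.5)
levels = []
for z in (norm.ppf(0.025), norm.ppf(0.975)):
    adj = z0 + (z0 + z) / (1 - a * (z0 + z))
    levels.append(100 * norm.cdf(adj))
bca = np.percentile(boot, levels)
```

When the bootstrap distribution is symmetric and unbiased, z0 and a are near zero and the two intervals nearly coincide, which is consistent with the finding that the bootstrap intervals performed no better than the standard ones here.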

Date Issued

2006

Identifier

CFE0001482, ucf:47081

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0001482


Title

Evaluation of crash modification factors and functions including time trends at intersections.

Creator

Wang, Jung-Han, Abdel-Aty, Mohamed, Radwan, Essam, Eluru, Naveen, Lee, Jae-Young, Wang, Chung-Ching, University of Central Florida

Abstract / Description

Traffic demand has increased as the population has increased; the US population reached 313,914,040 in 2012 (US Census Bureau, 2015). Increased travel demand may affect roadway safety and the operational characteristics of roadways. Total crashes and injury crashes at intersections accounted for 40% and 44% of traffic crashes, respectively, on US roadways in 2007, according to the Intersection Safety Issue Brief (FHWA, 2009). Traffic researchers and engineers have developed a quantitative measure of the safety effectiveness of treatments in the form of crash modification factors (CMFs). Based on CMFs from multiple studies, the Highway Safety Manual (HSM) Part D (AASHTO, 2010) provides CMFs that can be used to determine the expected crash reduction or increase after treatments are installed. Even though CMFs have been introduced in the HSM, there are still limitations that need to be investigated. One important potential limitation is that the HSM provides various CMFs as fixed values rather than CMFs under different configurations. In this dissertation, CMFs were estimated using observational before-after studies to show that CMFs vary across traffic volume levels when signalizing intersections. Beyond the effect of traffic volume, previous studies have shown that CMFs can vary over time after a treatment is implemented. Thus, this dissertation evaluates the trends of CMFs for signalization and for adding red light running cameras (RLCs). CMFs for these treatments were measured in each month and in 90-day moving windows using a time-series ARMA model. The results for signalization show that the CMFs for rear-end crashes were lower in the early phase after signalization but gradually increased from the 9th month. It was also found that safety effectiveness is significantly worse 18 months after installing RLCs.

Although efforts have been made to seek reliable CMFs, the best estimate of CMFs is still widely debated. Since CMFs are non-zero estimates, the population of all CMFs does not follow a normal distribution, and even if it did, the true mean of CMFs at some intersections may differ from that at others. Therefore, a bootstrap method that makes no distributional assumptions was proposed to estimate CMFs. By examining the distribution of CMFs estimated from bootstrap resamples, a CMF precision rating method is suggested to evaluate the reliability of the estimated CMFs. The results show that the estimated CMF for angle plus left-turn crashes after signalization has the highest precision, while estimates of the CMF for rear-end crashes are extremely unreliable. The CMFs for KABCO, KABC, and KAB crashes proved reliable for the majority of intersections, but the estimated effect of signalization may not be accurate at some sites.

The bootstrap method provides a quantitative measure of the reliability of CMFs; however, CMF transferability is questionable. Since the development of CMFs requires safety performance functions (SPFs), could CMFs be developed using SPFs from other states in the United States? This research applies the empirical Bayes method to develop CMFs using several SPFs from different jurisdictions, adjusted by calibration factors. After examination, it is found that applying SPFs from other jurisdictions is not desirable when developing CMFs.

The process of estimating CMFs using before-after studies requires an understanding of multiple statistical principles. To simplify CMF estimation and make CMF research reproducible, this dissertation includes an open-source statistics package built in R (R, 2013) that makes the estimation accessible and reproducible. With this package, authorities are able to estimate reliable CMFs following the procedure suggested by FHWA. In addition, the package includes a graphical interface that integrates the CMF calculation algorithm, so users can perform CMF calculations with minimal programming prerequisites. The expected contributions of this study are to 1) propose methodologies to assess the variation of CMFs with different characteristics among treated sites, 2) suggest new objective criteria for judging the reliability of safety estimates, 3) examine the transferability of SPFs when developing CMFs using before-after studies, and 4) develop a statistical software package to calculate CMFs. Finally, potential applications beyond the scope of this research, but worth investigating in the future, are discussed.
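The bootstrap idea behind the CMF precision rating can be sketched as follows. This is a deliberately simplified illustration with hypothetical crash counts: it uses the naive before-after CMF (total after crashes over total before crashes, assuming equal exposure and no empirical Bayes correction), whereas the dissertation uses EB-adjusted before-after estimation. Resampling sites yields a distribution of CMF estimates whose spread can be rated for precision without any normality assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical crash counts at 40 treated intersections, one count per site
# for the before period and one for the after period (equal exposure assumed).
before = rng.poisson(lam=6.0, size=40)
after = rng.poisson(lam=4.5, size=40)

def naive_cmf(b, a):
    # Naive before-after CMF: ratio of total after to total before crashes.
    # No empirical Bayes correction for regression to the mean.
    return a.sum() / b.sum()

cmf_hat = naive_cmf(before, after)

# Bootstrap over sites: resample intersections with replacement and
# recompute the CMF, building its sampling distribution.
B = 2000
n = len(before)
cmfs = np.empty(B)
for i in range(B):
    idx = rng.integers(0, n, size=n)
    cmfs[i] = naive_cmf(before[idx], after[idx])

# Percentile interval; a narrow interval excluding 1.0 would rate as a
# "precise" CMF, a wide interval straddling 1.0 as unreliable.
ci = np.percentile(cmfs, [2.5, 97.5])
```

Because the whole site is resampled, site-to-site heterogeneity in treatment effect flows directly into the width of the interval, which is what the precision rating exploits.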

Date Issued

2016

Identifier

CFE0006413, ucf:51454

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0006413


Title

APPLICATION OF THE EMPIRICAL LIKELIHOOD METHOD IN PROPORTIONAL HAZARDS MODEL.

Creator

He, Bin, Ren, Jian-Jian, University of Central Florida

Abstract / Description

In survival analysis, the proportional hazards model is the most commonly used, and the Cox model is the most popular. These models were developed to facilitate statistical analyses frequently encountered in medical research or reliability studies. In analyzing real data sets, checking the validity of the model assumptions is a key component. However, the presence of complicated types of censoring, such as double censoring and partly interval-censoring, makes model assessment difficult, and existing goodness-of-fit tests do not extend directly to these complicated types of censored data. In this work, we use the empirical likelihood approach (Owen, 1988) to construct goodness-of-fit tests and provide estimates for the Cox model with various types of censored data. Specifically, the problems under consideration are the two-sample Cox model and the stratified Cox model with right-censored data, doubly censored data, and partly interval-censored data. Related computational issues are discussed, and some simulation results are presented. The procedures developed in this work are applied to several real data sets, with some discussion.
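The empirical likelihood machinery cited above (Owen, 1988) can be illustrated in its simplest uncensored form: testing a mean. This sketch is not the censored-data Cox extension the dissertation develops, only the basic construction it builds on: weights w_i = 1/(n(1 + λ(x_i − μ))) maximize the empirical likelihood subject to the mean constraint, with λ found as the root of the estimating equation, and −2 log R(μ) is asymptotically chi-square with 1 degree of freedom.

```python
import numpy as np
from scipy.optimize import brentq

def el_loglik_ratio(x, mu):
    """Owen's empirical likelihood statistic -2*log R(mu) for a mean.
    Requires min(x) < mu < max(x) so the constraint is attainable."""
    z = x - mu
    # Solve sum(z_i / (1 + lam*z_i)) = 0 for the Lagrange multiplier lam,
    # searching the interval on which all implied weights stay positive
    # (between the poles at -1/max(z) and -1/min(z)).
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log(1 + lam * z))

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=100)   # true mean 2.0

# Under H0 the statistic is ~ chi-square(1): compare to 3.84 at the 5% level.
stat_true = el_loglik_ratio(x, 2.0)   # hypothesis near the truth: small
stat_far = el_loglik_ratio(x, 4.0)    # hypothesis far from it: large
```

The appeal, and the reason it generalizes to goodness-of-fit testing with censored data, is that the chi-square calibration comes for free without estimating a variance or assuming a parametric distribution.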

Date Issued

2006

Identifier

CFE0001099, ucf:46780

Format

Document (PDF)

PURL

http://purl.flvc.org/ucf/fd/CFE0001099