A Priori Analysis of Error and Bias in Value-Added Models
| Field | Value |
|---|---|
| Title | A Priori Analysis of Error and Bias in Value-Added Models |
| Name(s) | Lavery, Matthew (Author); Hahs-Vaughn, Debbie (Committee Chair); Sivo, Stephen (Committee Member); Bai, Haiyan (Committee Member); Amrein-Beardsley, Audrey (Committee Member); University of Central Florida (Degree Grantor) |
| Type of Resource | text |
| Date Issued | 2016 |
| Publisher | University of Central Florida |
| Language(s) | English |
| Abstract/Description | Over the past 20 years, value-added models (VAMs) have become increasingly popular in educational assessment and accountability policies because of the sophisticated statistical controls these models use to purportedly isolate the effect of a single teacher on the learning gains of his or her students. The present research uses a Monte Carlo simulation study to investigate whether VAMs provide accurate estimates of teacher effectiveness when all assumptions are met, and how robust the models are to endogenous peer effects and nonrandom assignment of students to classrooms. The researcher generates three years of simulated achievement data for 18,750 students taught by 125 teachers and analyzes these data with a linear mixed model similar to the SAS® EVAAS® Multivariate Response Model (MRM; M1), a basic covariate adjustment model (M2), and variations on these models designed to estimate random classroom effects. Findings indicate that the modified EVAAS may be too computationally onerous to be of practical use, and that modified covariate adjustment models do not perform significantly differently from the basic covariate adjustment model. When all assumptions are met, M1 is more accurate than M2, but both models perform reasonably well, misclassifying fewer than 5% of teachers on average. M1 is more robust to endogenous peer effects than M2; however, both models misclassify more teachers than when all assumptions are met. M2 is more robust to nonrandom assignment of students than M1. Assigning teachers a balanced schedule of nonrandom classes with low, medium, and high prior achievement seems to mitigate the problems that nonrandom assignment causes for M1, but makes M2 less accurate. Implications for practice and future research are discussed. (A minimal simulation sketch follows the record below.) |
| Identifier | CFE0006344 (IID), ucf:51568 (fedora) |
| Note(s) | 2016-08-01; Ph.D.; Education and Human Performance, Dean's Office EDUC; Doctoral; This record was generated from author-submitted information. |
| Subject(s) | Value-Added Models -- Teacher Evaluation -- Standardized Tests -- High Stakes Tests -- Teacher Effectiveness |
| Persistent Link to This Record | http://purl.flvc.org/ucf/fd/CFE0006344 |
| Restrictions on Access | Campus-only until 2021-08-15 |
| Host Institution | UCF |
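
The abstract describes a Monte Carlo design: simulated achievement for 18,750 students nested under 125 teachers, analyzed with several value-added models including a basic covariate adjustment model (M2). The sketch below is a minimal illustration of that idea, not the dissertation's method: the class size, the effect and error variances, the regression-plus-residual-averaging estimator, and the tercile misclassification measure are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 125
n_per_teacher = 150                 # 125 x 150 = 18,750 students, as in the abstract
true_effect = rng.normal(0.0, 0.25, n_teachers)   # assumed teacher-effect distribution

# Randomly assign students to teachers (the "all assumptions met" condition).
teacher = np.repeat(np.arange(n_teachers), n_per_teacher)
prior = rng.normal(0.0, 1.0, teacher.size)        # prior-year achievement, assumed scale
score = 0.7 * prior + true_effect[teacher] + rng.normal(0.0, 0.5, teacher.size)

# Covariate adjustment model (M2, roughly): regress current score on prior
# score, then average the residuals within each teacher as that teacher's
# estimated effect.
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta
est_effect = np.array([resid[teacher == j].mean() for j in range(n_teachers)])

# One illustrative misclassification measure: the share of teachers whose
# estimated tercile (low/medium/high) disagrees with their true tercile.
def tercile(x):
    return np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))

print(f"misclassified: {np.mean(tercile(est_effect) != tercile(true_effect)):.1%}")
```

Because students here are randomly assigned, residual averaging recovers teacher terciles well. The dissertation's nonrandom-assignment conditions would instead make `prior` depend on `teacher`, which is precisely where the abstract reports M1 and M2 diverging.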