An Empirical Evaluation of an Instrument to Determine the Relationship Between Second-Year Medical Students' Perceptions of NERVE VP Design Effectiveness and Students' Ability to Learn and Transfer Skills from NERVE
- Date Issued: 2016
- Abstract/Description:
- Meta-analyses and systematic reviews of literature comparing virtual patients (VPs) to traditional educational methods support the efficacy of VPs (Cook, Erwin, & Triola, 2010; Cook & Triola, 2009; McGaghie, Issenberg, Cohen, Barsuk, & Wayne, 2011). However, VP design research has produced a variety of design features (Bateman, Allen, Samani, Kidd, & Davies, 2013; Botezatu, Hult, & Fors, 2010a; Huwendiek & De Leng, 2010), frameworks (Huwendiek et al., 2009b), and principles (Huwendiek et al., 2009a) that are similar in nature but appear to lack consensus. Consequently, researchers remain unsure which VP design principles to apply, and few validated guidelines are available. To address this situation, Huwendiek et al. (2014) validated an instrument for evaluating the design of VP simulations that focus on fostering clinical reasoning. This dissertation examines the predictive validity of the instrument proposed by Huwendiek et al. (2014) for examining VP design features. Empirical research provides evidence for the reliability and validity of the VP design effectiveness measure; however, the relationship between the design features evaluated by the instrument and criterion-referenced measures of student learning and performance remains to be examined. This study examines the predictive validity of Huwendiek et al.'s (2014) VP design effectiveness instrument by determining whether the design factors it evaluates are correlated with medical students' performance on: (a) quizzes and VP cases embedded in the Neurological Examination Rehearsal Virtual Environment (NERVE), and (b) NERVE-assisted virtual patient/standardized patient (VP/SP) differential diagnoses and SP checklists.
It was hypothesized that students' perceptions of the effectiveness of NERVE VP design are significantly correlated with higher student learning and transfer outcomes in NERVE. The confirmatory factor analyses revealed that the effectiveness of NERVE VP design was significantly correlated with student learning and transfer. Significant correlations were found between key design features evaluated by the instrument and students' performance on quizzes and VP cases embedded in NERVE. In addition, significant correlations were found between the NERVE VP design factors evaluated by Huwendiek et al.'s (2014) instrument and students' performance on SP checklists. The findings provide empirical evidence supporting the reliability and predictive validity of Huwendiek et al.'s (2014) instrument. Future research should examine additional sources of validity for the instrument using larger samples drawn from other socio-cultural backgrounds, and should continue to examine its predictive validity at Level 2 (Learning) and Level 3 (Application) of Kirkpatrick's (1975) four-level model of training evaluation.
- Name(s): Reyes, Ramsamooj (Author); Hirumi, Atsusi (Committee Chair); Sivo, Stephen (Committee Co-Chair); Campbell, Laurie (Committee Member); Cendan, Juan (Committee Member); University of Central Florida (Degree Grantor)
- Type of Resource: text
- Publisher: University of Central Florida
- Language(s): English
- Identifier: CFE0006166 (IID), ucf:51150 (fedora)
- Note(s): 2016-05-01; Ph.D.; Education and Human Performance, Dean's Office; EDUC; Doctoral. This record was generated from author-submitted information.
- Subject(s): Virtual Patients; Medical Simulations; Medical Education; Clinical Reasoning; Learning; Skills Transfer; Instructional Design; Design Effectiveness
- Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0006166
- Restrictions on Access: public (2016-05-15)
- Host Institution: UCF