Current Search: Billings, Deborah
- Title
- ADAPTIVE FEEDBACK IN SIMULATION-BASED TRAINING.
- Creator
- Billings, Deborah, Gilson, Richard, University of Central Florida
- Abstract / Description
- Feedback is essential to guide performance in simulation-based training (SBT) and to refine learning. Generally, outcomes improve when feedback is delivered through personalized tutoring that tailors specific guidance and adapts feedback to the learner in a one-to-one environment. Therefore, emulating these adaptive aspects of human tutors through automation in SBT systems should be an effective way to train individuals. This study investigates the efficacy of automating different types of feedback in an SBT system. These include adaptive bottom-up feedback (i.e., detailed feedback, changing to general as proficiency develops) and adaptive top-down feedback (i.e., general feedback, changing to detailed if performance fails to improve). Other types of non-adaptive feedback were included for performance comparisons as well as to examine overall cognitive load. To test hypotheses, 130 participants were randomly assigned to five conditions. Two feedback conditions employed adaptive approaches (bottom-up and top-down), two used non-adaptive approaches (constant detailed and constant general), and one functioned as a control group (i.e., only a performance score was given). After preliminary training on the simulator system, participants completed four simulated search and rescue missions (three training missions and one transfer mission). After each training mission, all participants received feedback corresponding to their assigned condition. Overall performance on missions, knowledge post-test scores, and subjective cognitive load were measured and analyzed to determine the effectiveness of each type of feedback. Results indicate that: (1) feedback generally improves performance, confirming prior research; (2) performance for the two adaptive approaches (bottom-up vs. top-down) did not differ significantly at the end of training, but the bottom-up group achieved higher performance levels significantly sooner; and (3) performance for the bottom-up and constant detailed groups did not differ significantly, although the trend suggests that adaptive bottom-up feedback may yield significant results in further studies. Overall, these results have implications for the implementation of feedback in SBT and, beyond that, for other computer-based training systems.
- Date Issued
- 2010
- Identifier
- CFE0003225, ucf:48555
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003225
- Title
- The Perception and Measurement of Human-Robot Trust.
- Creator
- Schaefer, Kristin, Hancock, Peter, Jentsch, Florian, Kincaid, John, Reinerman, Lauren, Billings, Deborah, Lee, John, University of Central Florida
- Abstract / Description
- As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale that could measure changes in an individual's trust in a robot. Assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments encompassed the development of the 40-item trust scale. Scale development included the creation of a 172-item pool. Two experiments identified the robot features and perceived functional characteristics that were related to the classification of a machine as a robot for this item pool. Item pool reduction techniques and subject matter expert (SME) content validation were used to reduce the scale to 40 items. The two final experiments were then conducted to validate the scale. The finalized 40-item pre-post interaction trust scale was designed to measure trust perceptions specific to human-robot interaction (HRI). The scale measures trust on a 0-100% rating scale and provides a percentage trust score. A 14-item sub-scale of this final version of the test, recommended by SMEs, may be sufficient for some HRI tasks, and the implications of this proposition are discussed.
- Date Issued
- 2013
- Identifier
- CFE0004931, ucf:49634
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004931