- Title
- Getting the Upper Hand: Natural Gesture Interfaces Improve Instructional Efficiency on a Conceptual Computer Lesson.
- Creator
-
Bailey, Shannon, Sims, Valerie, Jentsch, Florian, Bowers, Clint, Johnson, Cheryl, University of Central Florida
- Abstract / Description
-
As gesture-based interactions with computer interfaces become more technologically feasible for educational and training systems, it is important to consider what interactions are best for the learner. Computer interactions should not interfere with learning nor increase the mental effort of completing the lesson. The purpose of the current set of studies was to determine whether natural gesture-based interactions, or instruction of those gestures, help the learner in a computer lesson by increasing learning and reducing mental effort. First, two studies were conducted to determine which gestures participants considered natural. Then, those gestures were implemented in an experiment to compare type of gesture and type of gesture instruction on learning conceptual information from a computer lesson. The goal of these studies was to determine the instructional efficiency (that is, the extent of learning taking into account the amount of mental effort) of implementing gesture-based interactions in a conceptual computer lesson. To test whether the type of gesture interaction affects conceptual learning in a computer lesson, the gesture-based interactions were either naturally or arbitrarily mapped to the learning material on the fundamentals of optics. The optics lesson presented conceptual information about reflection and refraction, and participants used the gesture-based interactions during the lesson to manipulate on-screen lenses and mirrors in a beam of light. The beam of light refracted or reflected at the angle corresponding to the type of lens or mirror. The natural gesture-based interactions mimicked the physical movement used to manipulate the lenses and mirrors in the optics lesson, while the arbitrary gestures did not match the movement of the lens or mirror being manipulated.
The natural gestures implemented in the computer lesson were determined from Study 1, in which participants performed gestures they considered natural for a set of actions; these gestures were rated in Study 2 as most closely resembling the physical interactions they represent. The arbitrary gestures were those rated by participants as most arbitrary for each computer action in Study 2. To test whether the effect of novel gesture-based interactions depends on how they are taught, the way the gestures were instructed was varied in the main experiment by using either video- or text-based tutorials. Results of the experiment indicated that natural gesture-based interactions were better for learning than arbitrary gestures, and that instruction of the gestures largely did not affect learning or the amount of mental effort experienced during the task. To further investigate the factors affecting instructional efficiency in using gesture-based interactions for a computer lesson, individual differences among learners were taken into account. Results indicated that the instructional efficiency of the gestures and their instruction depended on an individual's spatial ability, such that arbitrary gesture interactions taught with a text-based tutorial were particularly inefficient for those with lower spatial ability. These findings are explained within the theoretical frameworks of Embodied Cognition and Cognitive Load Theory, which account for why gesture-based interactions and their instruction impacted instructional efficiency in the computer lesson, and guidelines are provided for the instructional design of computer lessons using natural user interfaces.
Gesture-based interactions that are natural (i.e., that mimic the physical interaction by corresponding to the learning material) were more instructionally efficient than arbitrary gestures, possibly because natural gestures help schema development of conceptual information through physical enactment of the learning material. Furthermore, natural gestures resulted in lower cognitive load than arbitrary gestures, because arbitrary gestures that do not match the learning material may increase working memory processing not associated with the learning material during the lesson. Additionally, the way in which the gesture-based interactions were taught was varied by instructing the gestures with either video- or text-based tutorials. It was hypothesized that video-based tutorials would be a better way to instruct gesture-based interactions because videos may help the learner visualize the interactions and create a more easily recalled sensorimotor representation of the gestures. However, this hypothesis was not supported: there was no strong evidence that video-based tutorials were more instructionally efficient than text-based instructions. The results of the current set of studies can be applied to educational and training systems that incorporate a gesture-based interface. The finding that more natural gestures are better for learning efficiency, cognitive load, and a variety of usability factors should encourage instructional designers and researchers to keep the user in mind when developing gesture-based interactions.
- Date Issued
- 2017
- Identifier
- CFE0007278, ucf:52192
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007278
- Title
- Examining the Effects of Interactive Dynamic Multimedia and Direct Touch Input on Performance of a Procedural Motor Task.
- Creator
-
Marraffino, Matthew, Sims, Valerie, Chin, Matthew, Mouloua, Mustapha, Johnson, Cheryl, University of Central Florida
- Abstract / Description
-
Ownership of mobile devices, such as tablets and smartphones, has risen quickly in the last decade. Unsurprisingly, they are now being integrated into training and classroom settings. Specifically, the U.S. Army has mapped out a plan in the Army Learning Model of 2015 to utilize mobile devices for training purposes. However, before these tools can be used effectively, it is important to identify how tablets' unique properties can be leveraged. For this dissertation, the touch interface and the interactivity that tablets afford were investigated using a procedural-motor task: the disassembly procedures of an M4 carbine. This research was motivated by cognitive psychology theories, including Cognitive Load Theory and Embodied Cognition. In two experiments, novices learned rifle disassembly procedures in a narrated multimedia presentation on a tablet involving a virtual rifle, and then were tested on what they learned by performing a rifle disassembly on a physical rifle, reassembling the rifle, and taking a written recall test about the disassembly procedures. Spatial ability was also considered as a subject variable. Experiment 1 examined two research questions. The primary research question was whether including multiple forms of interactivity in a multimedia presentation resulted in higher learning outcomes. The secondary research question was whether dynamic multimedia fostered better learning outcomes than equivalent static multimedia. To examine the effects of dynamism and interactivity on learning, four multimedia conditions of varying levels of interactivity and dynamism were used. One condition was a 2D phase diagram depicting the before and after of each step, with no animation or interactivity. Another condition utilized a non-interactive animation in which participants passively watched an animated presentation of the disassembly procedures.
A third condition was an interactive animation in which participants could control the pace of the presentation by tapping a button. The last condition was a rifle disassembly simulation in which participants interacted with a virtual rifle to learn the disassembly procedures. A comparison of the conditions by spatial ability yielded the following results. Interactivity, overall, improved outcomes on the performance measures. However, high spatials outperformed low spatials in the simulation condition and the 2D phase diagram condition. High spatials seemed to be able to compensate for low interactivity and dynamism in the 2D phase diagram condition while enhancing their performance in the rifle disassembly simulation condition. In Experiment 2, the touchscreen interface was examined by investigating how gestures and input modality affected learning the disassembly procedures. Experiment 2 had two primary research questions. The first was whether gestures facilitate learning a procedural-motor task through embodied learning. The second was whether direct touch input resulted in higher learning outcomes than indirect mouse input. To examine these questions, three variations of the rifle disassembly simulation were used. One was identical to that of Experiment 1. Another incorporated gestures to initiate the animation, whereby participants traced a gesture arrow representing the motion of the component to learn the procedures. The third condition utilized the same interface as the initial rifle disassembly simulation but included "dummy" gesture arrows that displayed only visual information and did not respond to gesture. This condition was included to examine the effects (if any) of the gesture arrows in isolation from the gesture component. Furthermore, direct touch input was compared to indirect mouse input. Once again, spatial ability was also considered. Results from Experiment 2 were inconclusive, as no significant effects were found.
This may have been due to a ceiling effect in performance. However, spatial ability was a significant predictor of performance across all conditions. Overall, the results of the two experiments support the use of multimedia on a tablet to train a procedural-motor task. In line with the vision of ALM 2015, the research supports incorporating tablets into the U.S. Army training curriculum.
- Date Issued
- 2014
- Identifier
- CFE0005376, ucf:50467
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005376