Current Search: Stanney, Kay
- Title
- ENHANCING SITUATIONAL AWARENESS THROUGH HAPTICS INTERACTION IN VIRTUAL ENVIRONMENT TRAINING SYSTEMS.
- Creator
-
Hale, Kelly, Stanney, Kay, University of Central Florida
- Abstract / Description
-
Virtual environment (VE) technology offers a viable training option for developing knowledge, skills, and attitudes (KSAs) within domains that have limited live training opportunities due to personnel safety and cost (e.g., live fire exercises). However, to ensure these VE training systems provide effective training and transfer, designers must clearly define training goals and objectives and design VEs to support development of the required KSAs. Perhaps the greatest benefit of VE training is its ability to provide a multimodal training experience, where trainees can see, hear, and feel their surrounding environment, thus engaging them in training scenarios to further their expertise. This work focused on enhancing situation awareness (SA) within a training VE through appropriate use of multimodal cues. The Multimodal Optimization of Situation Awareness (MOSA) model was developed to identify theoretical benefits of various environmental and individual multimodal cues on SA components. Specific focus was on benefits associated with adding cues that activated the haptic system (i.e., kinesthetic/cutaneous sensory systems) or vestibular system in a VE. An empirical study evaluated the effectiveness of adding two independent spatialized tactile cues to a Military Operations on Urbanized Terrain (MOUT) VE training system, and how head tracking (i.e., the addition of rotational vestibular cues) impacted spatial awareness and performance when tactile cues were added during training. Results showed tactile cues enhanced spatial awareness and performance during both repeated training and within a transfer environment, yet there were costs associated with including two cues together during training, as each cue focused attention on a different aspect of the global task.
In addition, the results suggest that spatial awareness benefits from a single point indicator (i.e., spatialized tactile cues) may be impacted by interaction mode, as performance benefits were seen when tactile cues were paired with head tracking. Future research should further examine the theoretical benefits outlined in the MOSA model and validate that benefits can be realized through appropriate activation of multimodal cues for targeted training objectives during training, near transfer, and far transfer (i.e., real-world performance).
- Date Issued
- 2006
- Identifier
- CFE0001414, ucf:47034
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001414
- Title
- THE EFFECTS OF VIDEO FRAME DELAY AND SPATIAL ABILITY ON THE OPERATION OF MULTIPLE SEMIAUTONOMOUS AND TELE-OPERATED ROBOTS.
- Creator
-
Sloan, Jared, Stanney, Kay, University of Central Florida
- Abstract / Description
-
The United States Army has moved into the 21st century with the intent of redesigning not only the force structure but also the methods by which we will fight and win our nation's wars. Fundamental to this restructuring is the development of the Future Combat Systems (FCS). In an effort to minimize exposure of front-line soldiers, the future Army will utilize unmanned assets for both information gathering and, when necessary, engagements. Yet this must be done judiciously, as the bandwidth for net-centric warfare is limited. The implication is that the FCS must be designed to leverage bandwidth in a manner that does not overtax computational resources. In this study, alternatives for improving human performance during operation of teleoperated and semi-autonomous robots were examined. It was predicted that when operating both types of robots, frame delay of the semi-autonomous robot would improve performance because it would allow operators to concentrate on the constant workload imposed by the teleoperated robot while allocating resources to the semi-autonomous robot only during critical tasks. An additional prediction was that operators with high spatial ability would perform better than those with low spatial ability, especially when operating an aerial vehicle. The results could not confirm that frame delay has a positive effect on operator performance (though statistical power may have been an issue), but they clearly show that spatial ability is a strong predictor of performance in robotic asset control, particularly with aerial vehicles. In operating the UAV, the high-spatial group was, on average, 30% faster, lazed 12% more targets, and made 43% more location reports than the low-spatial group. The implications of this study indicate that system design should judiciously manage workload and capitalize on individual ability to improve performance; these findings are relevant to system designers, especially in the military community.
- Date Issued
- 2005
- Identifier
- CFE0000430, ucf:46379
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000430
- Title
- OPTIMIZING THE DESIGN OF MULTIMODAL USER INTERFACES.
- Creator
-
Reeves, Leah, Stanney, Kay, University of Central Florida
- Abstract / Description
-
Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons control system multitasking environment. The results of this study demonstrated significant improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing across multiple sensory and WM resources.
These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive, computer-based multitasking environments.
- Date Issued
- 2007
- Identifier
- CFE0001636, ucf:47237
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001636
- Title
- DESIGN FOR AUDITORY DISPLAYS: IDENTIFYING TEMPORAL AND SPATIAL INFORMATION CONVEYANCE PRINCIPLES.
- Creator
-
Ahmad, Ali, Stanney, Kay, University of Central Florida
- Abstract / Description
-
Designing auditory interfaces is a challenge for current human-systems developers, largely due to a lack of theoretical guidance for directing how best to use sounds in today's visually rich graphical user interfaces. This dissertation provided a framework for guiding the design of audio interfaces to enhance human-systems performance. This doctoral research involved reviewing the literature on conveying temporal and spatial information using audio, using this knowledge to build three theoretical models to aid the design of auditory interfaces, and empirically validating select components of the models. The three models included an audio integration model that outlines an end-to-end process for adding sounds to interactive interfaces, a temporal audio model that provides a framework for guiding the timing of these sounds to meet human performance objectives, and a spatial audio model that provides a framework for adding spatialization cues to interface sounds. Each model is coupled with a set of design guidelines theorized from the literature; combined, the developed models put forward a structured process for integrating sounds in interactive interfaces. The developed models were subjected to a three-phase validation process that included review by Subject Matter Experts (SMEs) to assess the face validity of the developed models, followed by two empirical studies. For the SME review, which assessed the utility of the developed models and identified opportunities for improvement, a panel of three audio experts was selected to respond to a Strengths, Weaknesses, Opportunities, and Threats (SWOT) validation questionnaire. Based on the SWOT analysis, the main strengths of the models included that they provide a systematic approach to auditory display design and that they integrate a wide variety of knowledge sources in a concise manner.
The main weaknesses of the models included the lack of a structured process for amending the models with new principles, branches that were not truly parallel or completely distinct, and a lack of guidance on selecting interface sounds. The main opportunity identified by the experts was the ability of the models to provide a seminal body of knowledge that can be used for building and validating auditory display designs. The main threats identified by the experts were that users may not know where to start and end with each model, that the models may not provide comprehensive coverage of all uses of auditory displays, and that the models may act as a restrictive influence on designers or be used inappropriately. Based on the SWOT analysis results, several changes were made to the models prior to the empirical studies. Two empirical evaluation studies were conducted to test the theorized design principles derived from the revised models. The first study focused on assessing the utility of audio cues to train a temporal pacing task, and the second study combined both temporal (i.e., pace) and spatial audio information, with a focus on examining integration issues. In the pace study, there were four auditory conditions used for training pace: 1) a metronome, 2) non-spatial auditory earcons, 3) a spatialized auditory earcon, and 4) no audio cues for pace training. Sixty-eight people participated in the study. A pre-/post-test between-subjects experimental design was used, with eight training trials. The measure used for assessing pace performance was the average deviation from a predetermined desired pace. The results demonstrated that a metronome was not effective in training participants to maintain a desired pace, while spatial and non-spatial earcons were effective strategies for pace training. Moreover, an examination of post-training performance as compared to pre-training suggested some transfer of learning.
Design guidelines were extracted for integrating auditory cues for pace training tasks in virtual environments. In the second empirical study, combined temporal (pacing) and spatial (location of entities within the environment) information was presented. There were three spatialization conditions: 1) high fidelity, using subjective selection of a "best-fit" head-related transfer function; 2) low fidelity, using a generalized head-related transfer function; and 3) no spatialization. A pre-/post-test between-subjects experimental design was used, with eight training trials. The performance measures were average deviation from desired pace and the time and accuracy to complete the task. The results of the second study demonstrated that temporal, non-spatial auditory cues were effective in influencing pace while other cues were present. On the other hand, spatialized auditory cues did not result in significantly faster task completion. Based on these results, a set of design guidelines was proposed that can be used to direct the integration of spatial and temporal auditory cues for supporting training tasks in virtual environments. Taken together, the developed models and the associated guidelines provide a theoretical foundation from which to direct user-centered design of auditory interfaces.
- Date Issued
- 2007
- Identifier
- CFE0001719, ucf:47317
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001719
- Title
- THE INTEGRATION OF AUDIO INTO MULTIMODAL INTERFACES: GUIDELINES AND APPLICATIONS OF INTEGRATING SPEECH, EARCONS, AUDITORY ICONS, AND SPATIAL AUDIO (SEAS).
- Creator
-
Jones, David, Stanney, Kay, University of Central Florida
- Abstract / Description
-
The current research is directed at providing validated guidelines for the integration of audio into human-system interfaces. This work first discusses the utility of integrating audio to support multimodal human information processing. Next, an auditory interactive computing paradigm utilizing Speech, Earcons, Auditory icons, and Spatial audio (SEAS) cues is proposed, and guidelines for the integration of SEAS cues into multimodal systems are presented. Finally, the results of two studies are presented that evaluate the utility of SEAS cues, developed following the proposed guidelines, in relieving perceptual and attentional processing bottlenecks when conducting Unmanned Air Vehicle (UAV) control tasks. The results demonstrate that SEAS cues significantly enhance human performance on UAV control tasks, particularly response accuracy and reaction time on a secondary monitoring task. The results suggest that SEAS cues may be effective in overcoming perceptual and attentional bottlenecks, with the advantages being most pronounced during high-workload conditions. The theories and principles provided in this work should be of interest to audio system designers and anyone involved in the design of multimodal human-computer systems.
- Date Issued
- 2005
- Identifier
- CFE0000810, ucf:46689
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000810