Current Search: head-mounted
- Title
- USER-CENTERED VIRTUAL ENVIRONMENT ASSESSMENT AND DESIGN FOR COGNITIVE REHABILITATION APPLICATIONS.
- Creator
- Fidopiastis, Cali, Rolland, Jannick, University of Central Florida
- Abstract / Description
- Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology utilizing a user-centered approach should precede the VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims, we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. When applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error in the evaluation of VE-based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher-level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory-impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE-based cognitive rehabilitation.
- Date Issued
- 2006
- Identifier
- CFE0001203, ucf:46946
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001203
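The resolution visual acuity benchmark in the record above rests on a simple relationship between microdisplay resolution and field of view: the fewer pixels spread across each degree of visual angle, the coarser the finest detail the HMD can render, regardless of the user's own acuity. Below is a minimal sketch of that calculation; the display parameters are illustrative assumptions, not values taken from the thesis.

```python
def hmd_limiting_acuity(h_pixels: int, h_fov_deg: float):
    """Estimate the best visual acuity an HMD can support from its
    microdisplay resolution and horizontal field of view.

    Assumes pixels are spread uniformly across the FOV; a simplification,
    since real optics vary pixel density across the field.
    """
    pixels_per_degree = h_pixels / h_fov_deg
    arcmin_per_pixel = 60.0 / pixels_per_degree   # angle one pixel subtends
    snellen_denom = 20.0 * arcmin_per_pixel       # 20/20 resolves 1 arcmin
    return arcmin_per_pixel, snellen_denom

# Hypothetical display, not from the thesis: a 640x480 microdisplay
# imaged across a 40-degree horizontal FOV.
arcmin, snellen = hmd_limiting_acuity(640, 40.0)
print(f"One pixel subtends {arcmin:.2f} arcmin -> at best ~20/{snellen:.0f}")
```

On these assumed numbers a wearer could at best resolve about 20/75 detail, so a low-resolution microdisplay caps any acuity measured through the device. This is the kind of system limitation the abstract's user-centered assessment battery is designed to expose before the VE is built.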
- Title
- A WEARABLE HEAD-MOUNTED PROJECTION DISPLAY.
- Creator
- Martins, Ricardo, Clarke, Thomas, University of Central Florida
- Abstract / Description
- Conventional head-mounted projection displays (HMPDs) consist of a pair of miniature projection lenses, beamsplitters, and miniature displays mounted on the helmet, as well as a retro-reflective screen placed strategically in the environment. We have extended the HMPD technology by integrating the screen into a fully mobile embodiment. Initial efforts to demonstrate this technology are presented, followed by an investigation of diffraction effects versus image degradation caused by integrating the retro-reflective screen within the HMPD. The key contribution of this research is the conception and development of a mobile HMPD (M-HMPD). We include an extensive analysis of the macroscopic and microscopic properties of the retro-reflective screen. Furthermore, the overall performance of the optics is assessed both in object space, for the optical designer, and in visual space, for prospective users of this technology. This research effort also focuses on conceiving an M-HMPD aimed at dual indoor/outdoor applications. The M-HMPD shares known HMPD advantages such as ultra-lightweight optics (8 g per eye), imperceptible distortion (≤ 2.5%), and a lightweight headset (≤ 2.5 lbs) compared with eyepiece-type head-mounted displays (HMDs) of equal eye relief and field of view. In addition, the M-HMPD presents an advantage over the preexisting HMPD in that it does not require a retro-reflective screen placed strategically in the environment. This newly developed M-HMPD can project clear images at three different locations within near- or far-field observation depths without loss of image quality. This particular M-HMPD embodiment was targeted at mixed reality, augmented reality, and wearable display applications.
- Date Issued
- 2010
- Identifier
- CFE0003431, ucf:48390
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003431
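The distortion figure quoted in the record above (≤ 2.5%) is conventionally a worst case over the field: at each field point, percent distortion compares the real chief-ray image height against the paraxial (ideal) height. Here is a minimal sketch of that check, using hypothetical ray-trace data rather than measurements from the M-HMPD design.

```python
def percent_distortion(real_height: float, paraxial_height: float) -> float:
    """Classic optical-design definition of percent distortion at one field
    point: relative deviation of the real chief-ray image height from the
    paraxial (ideal, distortion-free) image height.
    """
    return 100.0 * (real_height - paraxial_height) / paraxial_height

# Hypothetical image heights (mm) at increasing field angles; these are
# illustrative numbers, not data from the thesis.
paraxial_mm = [1.0, 2.0, 3.0, 4.0]
traced_mm = [1.001, 2.008, 3.030, 4.080]

worst = max(abs(percent_distortion(t, p))
            for t, p in zip(traced_mm, paraxial_mm))
print(f"Maximum distortion across the field: {worst:.2f}%")  # 2.00% here
```

A design meeting a ≤ 2.5% budget keeps this maximum under the threshold at every field point; positive values indicate pincushion distortion and negative values indicate barrel distortion.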
- Title
- DIRECT MANIPULATION OF VIRTUAL OBJECTS.
- Creator
- Nguyen, Long, Malone, Linda, University of Central Florida
- Abstract / Description
- Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally provide only a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision, as they pertain to the task of object manipulation, were thoroughly reviewed. Other sensory modalities (proprioception, haptics, and audition) and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum: the Immersive Virtual Environment (IVE) and the Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing rich data and key insights into the effect of each type of environment and each modality on the accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance already near optimal when accurate visual cues were presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
- Date Issued
- 2009
- Identifier
- CFE0002822, ucf:48060
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002822
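The sub-four-millimeter depth errors reported in the record above are plausible at arm's length given correct stereoscopic cues: the standard geometric approximation Δd ≈ d²·η / IPD relates stereoacuity η to the smallest discriminable depth difference at viewing distance d. Below is a minimal sketch with illustrative values; none of these numbers are taken from the dissertation.

```python
import math

def stereo_depth_threshold_mm(distance_m: float, stereoacuity_arcsec: float,
                              ipd_m: float = 0.065) -> float:
    """Geometric depth-discrimination threshold from binocular disparity:
    delta_d ~= d**2 * eta / IPD, valid when delta_d is much smaller than d.
    eta is the smallest resolvable disparity (stereoacuity) in radians.
    """
    eta_rad = math.radians(stereoacuity_arcsec / 3600.0)
    return 1000.0 * distance_m ** 2 * eta_rad / ipd_m

# Illustrative: viewing at 0.5 m (personal space) with 20-arcsec
# stereoacuity and a 65 mm interpupillary distance.
print(f"{stereo_depth_threshold_mm(0.5, 20.0):.2f} mm")  # ~0.37 mm
```

Under these assumptions the geometric threshold is well under a millimeter, so a measured mean error of a few millimeters is consistent with stereoscopy and convergence carrying most of the task, in line with the finding that adding proprioceptive, audio, and haptic cues brought no significant further gain.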