- Title
- AUGMENTATION IN VISUAL REALITY (AVR).
- Creator
-
Zhang, Yunjun, Hughes, Charles, University of Central Florida
- Abstract / Description
-
Human eyes, as the organs for sensing light and processing visual information, enable us to see the real world. Though invaluable, they give us no way to "edit" the received visual stream or to "switch" to a different channel. The invention of motion pictures and computer technologies in the last century enables us to add an extra layer of modifications between the real world and our eyes. We consider two major approaches to such modification: offline augmentation and online augmentation. The movie industry has pushed offline augmentation to an extreme level; audiences can experience visual surprises they have never seen in real life, even though producing the special visual effects may take months or years. Online augmentation, on the other hand, requires that modifications be performed in real time. This dissertation addresses problems in both offline and online augmentation.

The first offline problem addressed here is the generation of plausible video sequences after removing relatively large objects from the original videos. To maintain temporal coherence among the frames, a motion layer segmentation method is applied. From this, a set of synthesized layers is generated by applying motion compensation and a region completion algorithm. Finally, a plausibly realistic new video, in which the selected object is removed, is rendered from the synthesized layers and the motion parameters.

The second problem we address is the construction of a blue screen key for video synthesis or blending in Mixed Reality (MR) applications. A well-researched area, blue screen keying extracts a range of colors, typically in the blue spectrum, from a captured video sequence to enable the compositing of multiple image sources. Under ideal conditions, with uniform lighting and background color, commercial products can generate a high-quality key, even in real time. However, an MR application typically involves a head-mounted display (HMD) with poor camera quality, which requires the keying algorithm to be robust in the presence of noise. We present a three-stage keying algorithm that reduces the noise in the key output: first, a standard blue screen keying algorithm is applied to the input to obtain a noisy key; second, image gradient information and the corresponding regions are compared with the first-stage result to remove noise in the blue screen area; and finally, a matting approach is applied along the key boundary to improve key quality.

Another offline problem we address in this dissertation is the acquisition of the correct transformations between the different coordinate frames in an MR application. An MR system typically includes at least one tracking system, so the 3D coordinate frames to consider include those of the cameras, the tracker, the tracking system, and the world. Accurately deriving the transformation between the head-mounted display camera and the affixed 6-DOF tracker is critical for mixed reality applications. This transformation brings the HMD cameras into the tracking coordinate frame, which in turn overlaps with a virtual coordinate frame to create a plausible mixed visual experience. We apply a non-linear optimization method that recovers the camera-tracker transformation by minimizing the image reprojection error.

For online applications, we address the problem of extending the luminance range in mixed reality environments. We achieve this by introducing Enhanced Dynamic Range Video, a technique based on differing brightness settings for each eye of a video see-through HMD. We first construct a Video-Driven Time-Stamped Ball Cloud (VDTSBC), which serves as a guideline and a means to store temporal color information for stereo image registration. With the assistance of the VDTSBC, we register each pair of stereo images, taking into account confounding issues of occlusion occurring within one eye but not the other. Finally, we apply luminance enhancement on the registered image pairs to generate an Enhanced Dynamic Range Video.
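The three-stage keying pipeline described above can be sketched in a few lines. This is an illustrative toy version with assumed thresholds, a luminance-gradient noise test, and a simple box-blur matte; it is not the dissertation's actual algorithm:

```python
import numpy as np

def three_stage_key(frame, blue_thresh=1.3, grad_thresh=0.08, feather=1):
    """Toy three-stage blue screen key.

    Stage 1: threshold "blueness" to get a noisy binary key.
    Stage 2: reclassify blue pixels the key missed when they sit in a flat
             (low-gradient) region, since a uniform backdrop has no edges.
    Stage 3: soften the key boundary with a box-blur matte.

    frame: float RGB image in [0, 1], shape (H, W, 3).
    Returns an alpha matte in [0, 1] (1 = foreground).
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # Stage 1: a pixel is backdrop when blue clearly dominates red and green.
    key = (b > blue_thresh * np.maximum(r, g)).astype(float)  # 1 = backdrop

    # Stage 2: gradient magnitude of the luminance channel.
    lum = frame.mean(axis=2)
    gy, gx = np.gradient(lum)
    flat = np.hypot(gx, gy) < grad_thresh
    # Blue-dominated pixels in flat regions that stage 1 missed are noise.
    key[(key == 0) & flat & (b > np.maximum(r, g))] = 1.0

    # Stage 3: crude boundary matting via a box blur of the binary key.
    k = 2 * feather + 1
    pad = np.pad(key, feather, mode="edge")
    matte = sum(
        pad[i:i + key.shape[0], j:j + key.shape[1]]
        for i in range(k) for j in range(k)
    ) / (k * k)
    return 1.0 - matte  # alpha: 1 = keep pixel (foreground)
```

On a synthetic frame with a red square over a pure blue backdrop, the returned alpha is near 1 inside the square and near 0 over the backdrop, with a soft transition at the boundary.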
- Date Issued
- 2007
- Identifier
- CFE0001757, ucf:47285
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001757
- Title
- ORIENTING OF VISUAL-SPATIAL ATTENTION WITH AUGMENTED REALITY: EFFECTS OF SPATIAL AND NON-SPATIAL MULTI-MODAL CUES.
- Creator
-
Jerome, Christian, Mouloua, Mustapha, University of Central Florida
- Abstract / Description
-
Advances in simulation technology have brought about many improvements to the way we train tasks, as well as how we perform tasks in the operational field. Augmented reality (AR) is an example of how to enhance the user's experience of the real world with computer-generated information and graphics. Visual search tasks are known to be capacity demanding and therefore may be improved by training in an AR environment. During the experimental task, participants searched for enemies, cued by visual, auditory, tactile, combinations of two, or all three cue modalities, and tried to shoot them while avoiding shooting civilians (fratricide), over two 2-minute low-workload scenarios and two 2-minute high-workload scenarios. The results showed significant benefits of attentional cuing on visual search performance, with improved reaction time and accuracy when haptic cues or auditory cues were displayed alone and when visual and haptic cues were combined. Fratricide occurrence was amplified by the presence of the audio cues. The two levels of workload produced differences in individuals' task performance for accuracy and reaction time. Accuracy and reaction time were significantly better with the medium cues than with all the others and the control condition during low workload, and marginally better during high workload. Cue specificity resulted in a non-linear performance function in the low-workload condition. These results support Posner's (1978) theory that, in general, cuing can benefit locating targets in the environment by aligning the attentional system with the visual input pathways; the cue modality does not have to match the target modality. This research is relevant to potential applications of AR technology.

Furthermore, the results identify and describe perceptual and/or cognitive issues with displaying computer-generated augmented objects and information overlaid upon the real world. They also serve as a basis for a variety of training and design recommendations to direct attention during military operations, such as cueing the Soldier to the location of hazards and mitigating the effects of stress and workload.
- Date Issued
- 2006
- Identifier
- CFE0001481, ucf:47092
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001481
- Title
- AR Physics: Transforming physics diagrammatic representations on paper into interactive simulations.
- Creator
-
Zhou, Yao, Underberg-Goode, Natalie, Lindgren, Robb, Moshell, Jack, Peters, Philip, University of Central Florida
- Abstract / Description
-
A problem representation is a cognitive structure created by the solver in correspondence to the problem. Sketching representative diagrams in the domain of physics encourages a problem-solving strategy that starts from 'envisionment', by which one internally simulates the physical events and predicts outcomes. Research studies also show that sketching representative diagrams improves learners' performance in solving physics problems. The pedagogic benefits of sketching representations on paper make this traditional learning strategy worth preserving and integrating into the current digital learning landscape.

In this paper, I describe AR Physics, an Augmented Reality based application intended to facilitate learning of physics concepts about objects' linear motion. It affords the verified physics learning strategy of sketching representative diagrams on paper, and explores the capability of Augmented Reality to enhance visual conceptions. The application converts diagrams drawn on paper into virtual representations displayed on a tablet screen. As such, learners can create physics simulations based on the diagrams and test their "envisionment" of the diagrams. Users' interaction with AR Physics consists of three steps: 1) sketching a diagram on paper; 2) capturing the sketch with a tablet camera to generate a virtual duplicate of the diagram on the tablet screen; and 3) placing a physics object and configuring relevant parameters through the application interface to construct a physics simulation.

A user study of the efficiency and usability of AR Physics was performed with 12 college students. The students interacted with the application and completed three tasks relevant to the learning material. They were given eight questions afterwards to examine their post-learning outcome; the same questions were also given prior to the use of the application in order to compare with the post results. The System Usability Scale (SUS) was adopted to assess the application's usability, and interviews were conducted to collect subjects' opinions about Augmented Reality in general. The results of the study demonstrate that the application can effectively facilitate subjects' understanding of the target physics concepts. Overall satisfaction with the application's usability was reflected in the SUS score. Finally, subjects expressed that they gained a clearer idea of Augmented Reality through the use of the application.
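The linear-motion simulation the application constructs from a sketched diagram reduces to constant-acceleration kinematics. A minimal sketch of that underlying model (the function name and sampling scheme are mine, not the application's API):

```python
def simulate_linear_motion(x0, v0, a, dt, steps):
    """Sample an object's position and velocity under constant acceleration:
    x(t) = x0 + v0*t + 0.5*a*t^2,  v(t) = v0 + a*t.
    Returns a list of (t, x, v) tuples, one per rendered frame."""
    samples = []
    for i in range(steps + 1):
        t = i * dt
        samples.append((t, x0 + v0 * t + 0.5 * a * t * t, v0 + a * t))
    return samples
```

For example, an object dropped from rest with a = 9.8 m/s² reaches x = 4.9 m and v = 9.8 m/s after one second of simulated time.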
- Date Issued
- 2014
- Identifier
- CFE0005566, ucf:50292
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005566
- Title
- TRAINING WAYFINDING: NATURAL MOVEMENT IN MIXED REALITY.
- Creator
-
Savage, Ruthann, Gilson, Richard, University of Central Florida
- Abstract / Description
-
The Army needs a distributed training environment that can be accessed whenever and wherever required for training and mission rehearsal. This paper describes an exploratory experiment designed to investigate the effectiveness of a prototype of such a system in training a navigation task. A wearable computer, an acoustic tracking system, and a see-through head mounted display (HMD) were used to wirelessly track users' head position and orientation while presenting a graphic representation of their virtual surroundings, through which the user walked using natural movement. As previous studies have shown that virtual environments can be used to train navigation, adding natural movement to a virtual environment may enhance that training through the proprioceptive feedback gained by walking through the environment. Sixty participants were randomly assigned to one of three conditions: route drawing on a printed floor plan, rehearsal in the actual facility, and rehearsal in a mixed reality (MR) environment. Participants, divided equally between male and female in each group, studied verbal directions of the route, then performed three rehearsals of it: those in the map condition drew it onto three separate printed floor plans, those in the practice condition walked through the actual facility, and those in the MR condition walked through a three-dimensional virtual environment with landmarks, waypoints, and virtual footprints. A scaling factor was used, with each step in the MR environment equal to three steps in the real environment, and the MR environment was broken into "tiles", like pages in an atlas, through which participants progressed, entering each tile in succession until they completed the entire route.

Transfer-of-training testing, which consisted of a timed traversal of the route through the actual facility, showed a significant difference in route knowledge based on the total time to complete the route and the number of errors committed while doing so, with "walkers" performing better than participants in the paper map or MR conditions, although the effect was weak. Survey knowledge showed little difference among the three rehearsal conditions. Three standardized tests of spatial abilities did not correlate with route traversal time or errors, or with three of the four orientation localization tasks. Within the MR rehearsal condition there was a clear performance improvement over the three rehearsal trials, as measured by the time required to complete the route in the MR environment, which was accepted as an indication that learning occurred. As measured using the Simulator Sickness Questionnaire, there were no incidents of simulator sickness in the MR environment. Rehearsal in the actual facility was the most effective training condition; however, it is often not an acceptable form of rehearsal given an inaccessible or hostile environment. Performance of participants in the other two conditions was indistinguishable, pointing toward continued experimentation that should include the combined effect of paper map rehearsal with mixed reality, especially as this is likely to be the more realistic case for mission rehearsal, since there is no indication that maps should be eliminated. Walking through the environment beforehand can enhance Soldiers' understanding of their surroundings, as was evident from participants' comments as they moved from MR to the actual space: "This looks like I was just here", and "There's that pole I kept having trouble with". Such comments suggest that this is a tool worth continuing to explore and apply.

Additional research on the scaling and tiling factors is likely warranted, to determine whether the effect can be applied to other environments or tasks; that said, scaling is not a new task for most adults who have interacted with maps, where a scaling factor of 1 to 15,000 is common in orienteering maps and 1 to 25,000 in military maps. Rehearsal time spent in the MR condition varied widely, some of which could be attributed to an issue referred to as "avatar excursions", a system anomaly that should be addressed in future research. The proprioceptive feedback in MR was expected to positively impact performance scores, and it is very likely that proprioceptive feedback is what led to the lack of simulator sickness among these participants. The design of the HMD may also have contributed to the minimal reported symptoms, as it allowed participants some peripheral vision that provided orientation cues as to their body position and movement. Future research might include a direct comparison between this MR system and a virtual environment through which users move by manipulating an input device such as a mouse or joystick while physically remaining stationary. The exploration and confirmation of the training capabilities of MR is an important step in the development and application of the system to the U.S. Army training mission. This experiment was designed to examine one potential training area in a small controlled environment, which can serve as the foundation for experimentation with more complex tasks such as wayfinding through an urban environment, and/or direct comparison with more established virtual environments to determine strengths as well as areas for improvement, to make MR an effective addition to the Army training mission.
- Date Issued
- 2006
- Identifier
- CFE0001288, ucf:46917
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001288
- Title
- MODELING, SIMULATION, AND VISUALIZATION OF 3D LUNG DYNAMICS.
- Creator
-
Santhanam, Anand, Rolland, Jannick, University of Central Florida
- Abstract / Description
-
Medical simulation has facilitated the understanding of complex biological phenomena through its inherent explanatory power. It is a critical component for planning clinical interventions and analyzing their effects on a human subject. The success of medical simulation is evidenced by the fact that over one third of all medical schools in the United States augment their teaching curricula using patient simulators. Medical simulators present combat medics and emergency providers with video-based descriptions of patient symptoms along with step-by-step instructions on clinical procedures that alleviate the patient's condition. Recent advances in clinical imaging technology have led to effective medical visualization by coupling medical simulations with patient-specific anatomical models and their physically and physiologically realistic organ deformation. 3D physically-based deformable lung models obtained from a human subject are tools for regional lung structure and function analysis. Static imaging techniques such as Magnetic Resonance Imaging (MRI), chest X-rays, and Computed Tomography (CT) are conventionally used to estimate the extent of pulmonary disease and to establish available courses for clinical intervention. The predictive accuracy and evaluative strength of static imaging techniques may be augmented by improved computer technologies and graphical rendering techniques that transform these static images into dynamic representations of subject-specific organ deformations. By creating physically based 3D simulation and visualization, 3D deformable models obtained from subject-specific lung images will better represent lung structure and function. Variations in overall lung deformation may indicate tissue pathologies; thus 3D visualization of functioning lungs may also provide a visual complement to current diagnostic methods.

The feasibility of medical visualization using static 3D lungs as an effective tool for endotracheal intubation was previously shown using Augmented Reality (AR) based techniques in one of several research efforts at the Optical Diagnostics and Applications Laboratory (ODALAB). That effort also shed light on the potential of coupling such medical visualization with dynamic 3D lungs. The purpose of this dissertation is to develop 3D deformable lung models that are built from subject-specific high-resolution CT data and can be visualized in the AR-based environment. A review of the literature illustrates that techniques for modeling real-time 3D lung dynamics can be roughly grouped into two categories: geometrically based and physically based. Additional classifications include considering a 3D lung model as either a volumetric or a surface model, modeling the lungs as either a single compartment or multiple compartments, modeling either the air-blood interaction or the air-blood-tissue interaction, and considering either normal or pathophysical lung behavior. Validating the simulated lung dynamics is a complex problem and has previously been approached by tracking a set of landmarks on the CT images. An area that needs to be explored is the relationship between the choice of deformation method for the 3D lung dynamics and its visualization framework. Constraints on the choice of deformation method and the 3D model resolution arise from the visualization framework; the constraints of interest here are the real-time requirement and the level of interaction required with the 3D lung models. The work presented here discusses a framework that facilitates physics-based and physiology-based deformation of a single-compartment surface lung model while maintaining the frame-rate requirements of the visualization system.

The framework presented here is part of several research efforts at ODALAB toward developing an AR-based medical visualization framework. It consists of three components: (i) modeling the Pressure-Volume (PV) relation, (ii) modeling the lung deformation using a Green's function based deformation operator, and (iii) optimizing the deformation using state-of-the-art Graphics Processing Units (GPUs). The validation of the results obtained in the first two modeling steps is also discussed for normal human subjects. Disease states such as pneumothorax and lung tumors are modeled using the proposed deformation method. Additionally, a method to synchronize instantiations of the deformation across a network is discussed.
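The Green's function based deformation operator in component (ii) superposes each surface node's response to forces applied at other nodes. A heavily simplified sketch with an assumed 1/r kernel; the dissertation's physiologically derived operator and PV coupling are not reproduced here:

```python
import numpy as np

def greens_deform(nodes, applied_forces, stiffness=1.0, eps=1e-3):
    """Toy Green's function style deformation: each node's displacement is a
    distance-weighted superposition of the responses to forces at all nodes.

    nodes: (N, 3) rest positions; applied_forces: (N, 3) force vectors.
    The 1/(r + eps) scalar kernel is an illustrative stand-in for the
    actual physics-based operator.  Returns deformed (N, 3) positions."""
    diff = nodes[:, None, :] - nodes[None, :, :]   # (N, N, 3) pairwise offsets
    r = np.linalg.norm(diff, axis=2)               # (N, N) pairwise distances
    g = 1.0 / (stiffness * (r + eps))              # scalar Green's kernel
    disp = g @ applied_forces                      # superpose all responses
    return nodes + disp
```

With zero applied force the mesh is unchanged; a force applied at one node displaces every other node, with nearer nodes displaced more, which is the qualitative behavior a Green's function operator provides.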
- Date Issued
- 2006
- Identifier
- CFE0001301, ucf:47033
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001301
- Title
- DYNAMIC SHARED STATE MAINTENANCE IN DISTRIBUTED VIRTUAL ENVIRONMENTS.
- Creator
-
Hamza-Lup, Felix George, Hughes, Charles, University of Central Florida
- Abstract / Description
-
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE, the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual components. One of the challenges in such environments is maintaining a consistent view of the dynamic shared state in the presence of inevitable network latency and jitter. A consistent view of the shared scene significantly increases the sense of presence among participants and facilitates their interactive collaboration. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model.

A review of the literature illustrates that techniques for consistency maintenance in distributed Virtual Reality (VR) environments can be roughly grouped into three categories: centralized information management, prediction through dead reckoning algorithms, and frequent state regeneration. Additional resource management methods can be applied across these techniques to improve shared state consistency. Some of these techniques are related to the system infrastructure; others are related to the human nature of the participants (e.g., human perceptual limitations, area-of-interest management, and visual and temporal perception). An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene.

Mixed Reality (MR) and VR environments must bring human participant interaction into the loop through a wide range of electronic motion sensors and haptic devices. Part of the work presented here defines a novel criterion for the categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory, including 3D visualization applications using custom-built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems, and in light of current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system's real-time behavior and scalability.
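Of the three surveyed consistency-maintenance categories, prediction through dead reckoning is the easiest to illustrate: each site extrapolates a remote entity's state from its last authoritative update rather than waiting for the next packet. A minimal first-order sketch (the field names are mine, not from any particular system):

```python
def dead_reckon(last_update, now):
    """First-order dead reckoning: extrapolate a remote entity's position
    from its last authoritative (position, velocity, timestamp) update
    instead of waiting for the next network packet.

    last_update: dict with 'pos' and 'vel' 3-tuples and timestamp 't'
    (seconds); now: current local time in the same clock."""
    dt = now - last_update["t"]  # time elapsed since the update was issued
    return tuple(p + v * dt
                 for p, v in zip(last_update["pos"], last_update["vel"]))
```

When the next true update arrives, the local estimate is replaced (or smoothly converged) with the authoritative state; higher-order variants extrapolate with acceleration as well, trading prediction error against update frequency.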
- Date Issued
- 2004
- Identifier
- CFE0000096, ucf:46152
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000096
- Title
- Context-Aware Mobile Augmented Reality Visualization in Construction Engineering Education.
- Creator
-
Shirazi, Arezoo, Behzadan, Amir, Oloufa, Amr, Tatari, Mehmet, University of Central Florida
- Abstract / Description
-
Recent studies suggest that the number of students pursuing science, technology, engineering, and mathematics (STEM) degrees has been generally decreasing. An extensive body of research cites the lack of motivation and engagement in the learning process as a major underlying reason for this decline. It has been argued that, if properly implemented, instructional technology can enhance student engagement and the quality of learning. Therefore, the main goal of this research is to implement and assess the effectiveness of augmented reality (AR) based pedagogical tools on student learning. For this purpose, two sets of experiments were designed and implemented in two different construction and civil engineering undergraduate courses at the University of Central Florida (UCF). The first experiment was designed to systematically assess the effectiveness of a context-aware mobile AR tool (CAM-ART) in a real classroom-scale environment. This tool was used to enhance traditional lecture-based instruction and information delivery by augmenting the contents of an ordinary textbook with computer-generated three-dimensional (3D) objects and other virtual multimedia (e.g., sound, video, graphs). The experiment was conducted on separate control and test groups, and pre- and post-performance data, as well as students' perceptions of using CAM-ART, were collected through several feedback questionnaires. In the second experiment, a building design and assembly task competition was designed and conducted using a mobile AR platform, and the pedagogical value of mobile AR-based instruction and information delivery to student learning in a large-scale classroom setting was assessed. As in the first experiment, students were divided into control and test groups, and their performance data, as well as their feedback, suggestions, and workload, were systematically collected and analyzed.

Data analysis showed that the mobile AR framework had a measurable and positive impact on students' learning. In particular, students in the test group (who used the AR tool) performed slightly better on certain measures and spent more time on collaboration, communication, and exchanging ideas in both experiments. Overall, students ranked the effectiveness of the AR tool very high and stated that it has good potential to reform traditional teaching methods.
- Date Issued
- 2014
- Identifier
- CFE0005257, ucf:50609
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005257
- Title
- Personalized Digital Body: Enhancing Body Ownership and Spatial Presence in Virtual Reality.
- Creator
-
Jung, Sungchul, Hughes, Charles, Foroosh, Hassan, Wisniewski, Pamela, Bruder, Gerd, Sandor, Christian, University of Central Florida
- Abstract / Description
-
A person's sense of acceptance of a virtual body as his or her own is generally called virtual body ownership (VBOI). Having such a mental model of one's own body transferred to a virtual human surrogate is known to play a critical role in one's sense of presence in a virtual environment. Our focus in this dissertation is on top-down processing based on visual perception in both the visuomotor and the visuotactile domains, using visually personalized body cues. The visual cues we study range from ones we refer to as direct to others we classify as indirect. Direct cues are associated with body parts that play a central role in the task being performed; such parts typically dominate a person's foveal view and include one or both hands. Indirect body cues come from body parts that are normally seen in our peripheral view, e.g., legs and torso, that are often observed through some mediation, and that are not directly associated with the current task.

This dissertation studies how, and to what degree, direct and indirect cues affect a person's sense of VBOI when they are receiving direct and sometimes inaccurate cues, and investigates the relationship between enhanced virtual body ownership and task performance. Our experiments support the importance of a personalized representation, even for indirect cues. Additionally, we studied gradual versus instantaneous transition between one's own body and a virtual surrogate body, and between one's real-world environment and a virtual environment. We demonstrate that gradual transition has a significant influence on virtual body ownership and presence. In a follow-on study, we increase fidelity by using a personalized hand. Here, we demonstrate that a personalized hand significantly improves dominant visual illusions, resulting in more accurate perception of virtual object sizes.
- Date Issued
- 2018
- Identifier
- CFE0007024, ucf:52033
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007024
- Title
- FIELD OF VIEW EFFECTS ON REFLEXIVE MOTOR RESPONSE IN FLIGHT SIMULATION.
- Creator
-
Covelli, Javier, Rolland, Jannick, University of Central Florida
- Abstract / Description
-
Virtual Reality (VR) and Augmented Reality (AR) Head Mounted Display (HMD) or Head Worn Display (HWD) technology represents low-cost, wide Field of Regard (FOR), deployable systems when compared to traditional simulation facilities. However, given current technological limitations, HWD flight simulator implementations provide a limited effective Field of View (eFOV) far narrower than the normal human 200° horizontal and 135° vertical FOV. Developing an HWD with such a wide FOV is expensive but can increase the aviator's visual stimulus, perception, sense of presence, and overall training effectiveness. This research and experimentation test this proposition by manipulating the eFOV of experienced pilots in a flight simulator while measuring their reflexive motor response and task performance. Reflexive motor responses are categorized as information, importance, and effort behaviors. Performance metrics taken include runway alignment error (RAE) and vertical track error (VTE). Results indicated a significant and systematic change in visual scan pattern, head movement, and flight control performance as the eFOV was sequentially decreased. As the FOV decreased, the average visual scan pattern changed to focus less on the out-the-window (OTW) view and more on the instruments inside the cockpit. The head range of movement increased significantly below an eFOV of 80° horizontal × 54° vertical, and runway alignment and vertical track performance decreased significantly below an eFOV of 120° horizontal × 81° vertical.
- Date Issued
- 2008
- Identifier
- CFE0002002, ucf:47617
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002002
- Title
- Mediated Physicality: Inducing Illusory Physicality of Virtual Humans via Their Interactions with Physical Objects.
- Creator
-
Lee, Myungho, Welch, Gregory, Wisniewski, Pamela, Hughes, Charles, Bruder, Gerd, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
The term virtual human (VH) generally refers to a human-like entity composed of computer graphics and/or a physical body. In the associated research literature, a VH can be further classified as an avatar (a human-controlled VH) or an agent (a computer-controlled VH). Because of their resemblance to humans, people naturally distinguish them from non-human objects and often treat them in ways similar to real humans. Sometimes people develop a sense of co-presence or social presence with the VH, a phenomenon that is often exploited for training simulations where the VH assumes the role of a human. Prior research associated with VHs has primarily focused on the realism of various visual traits, e.g., appearance, shape, and gestures. However, our sense of the presence of other humans is also affected by other physical sensations conveyed through nearby space or physical objects. For example, we humans can perceive the presence of other individuals via the sound or tactile sensation of approaching footsteps, or by the presence of complementary or opposing forces when carrying a physical box with another person. In my research, I exploit the fact that these sensations, when correlated with events in the shared space, affect one's feeling of social/co-presence with another person. In this dissertation, I introduce novel methods for utilizing direct and indirect physical-virtual interactions with VHs to increase the sense of social/co-presence with the VHs, an approach I refer to as mediated physicality. I present results from controlled user studies, in various virtual environment settings, that support the idea that mediated physicality can increase a user's sense of social/co-presence with the VH and/or induce realistic social behavior. I discuss relationships to prior research, possible explanations for my findings, and areas for future research.
- Date Issued
- 2019
- Identifier
- CFE0007485, ucf:52687
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007485
- Title
- CONFORMAL TRACKING FOR VIRTUAL ENVIRONMENTS.
- Creator
-
Davis, Jr., Larry Dennis, Rolland, Jannick P., University of Central Florida
- Abstract / Description
-
A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments supporting the full range from VR to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment. This is referred to as the tracking problem, which points to the need for accurate and precise tracking in virtual environments. Marker-based tracking is a technique that employs fiduciary marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. The realization that tracking performance is linked to probe performance necessitates investigation into the design of tracking probes for proponents of marker-based tracking. The challenges involved with probe design include prediction of the accuracy and precision of a tracking probe, the creation of arbitrarily-shaped tracking probes, and the assessment of the newly created probes. To address these issues, we present a pioneering framework for designing conformal tracking probes. Conformal in this work means adapting to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, the elements used for completing the steps of the framework are discussed. We start with the application of optimization methods for determining the probe geometry.
Two overall methods for mapping markers on tracking probes are presented: the Intermediary Algorithm and the Viewpoints Algorithm. Next, we examine the method used for pose estimation and present a mathematical model of error propagation used for predicting probe performance in pose estimation. The model uses first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we discuss testing of the performance of each probe design. Then, post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe. The predicted accuracy of the probe was 0.06 deg and the measured accuracy was 0.083 ± 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 ± 0.03 deg. The larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 ± 0.02 deg. The third tracking probe was a semi-spherical head tracking probe. The predicted accuracy in orientation and position was 0.54 ± 0.24 deg and 0.24 ± 0.1 mm, respectively. The experimental accuracy in orientation and position was 0.60 ± 0.03 deg and 0.225 ± 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe, created using the conformal design process.
The predicted accuracy of this probe was 0.032 ± 0.02 deg in orientation and 0.14 ± 0.08 mm in position. The measured accuracy of the probe was 0.028 ± 0.01 deg in orientation and 0.11 ± 0.01 mm in position.
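The noise-propagation analysis described in the abstract can be approximated numerically. The sketch below is an illustration, not the dissertation's actual model: it uses Monte Carlo sampling through a standard SVD-based (Kabsch) pose estimator and a hypothetical six-marker planar geometry, perturbing simulated marker locations with Gaussian noise and tracing them through pose estimation to predict a probe's orientation accuracy.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Least-squares rigid pose (Kabsch algorithm) from matched 3D points."""
    cm, co = model_pts.mean(0), observed_pts.mean(0)
    H = (model_pts - cm).T @ (observed_pts - co)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against reflections in degenerate cases.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = co - R @ cm
    return R, t

def orientation_error_deg(R):
    """Rotation angle of R relative to the identity, in degrees."""
    return np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

def predict_probe_accuracy(markers, sigma_mm=0.1, trials=2000, seed=0):
    """Monte Carlo estimate of mean orientation error for a probe geometry."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(trials):
        noisy = markers + rng.normal(0.0, sigma_mm, markers.shape)
        R, _ = estimate_pose(markers, noisy)   # true pose is the identity
        errors.append(orientation_error_deg(R))
    return float(np.mean(errors))

# Hypothetical six-marker planar probe, roughly 100 mm across.
planar = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                   [100, 100, 0], [50, 0, 0], [0, 50, 0]], dtype=float)
print(f"mean orientation error: {predict_probe_accuracy(planar):.3f} deg")
```

Growing the probe's spatial extent or adding markers shrinks the predicted error, which mirrors the probe-size and marker-count effects the abstract mentions.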
- Date Issued
- 2004
- Identifier
- CFE0000058, ucf:52856
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000058
- Title
- Towards Real-time Mixed Reality Matting in Natural Scenes.
- Creator
-
Beato, Nicholas, Hughes, Charles, Foroosh, Hassan, Tappen, Marshall, Moshell, Jack, University of Central Florida
- Abstract / Description
-
In Mixed Reality scenarios, background replacement is a common way to immerse a user in a synthetic environment. Properly identifying the background pixels in an image or video is a difficult problem known as matting. In constant color matting, research identifies and replaces a background that is a single color, known as the chroma key color. Unfortunately, these algorithms force a controlled physical environment and favor constant, uniform lighting. More generic approaches, such as natural image matting, have made progress finding alpha matte solutions in environments with naturally occurring backgrounds. However, even for the quicker algorithms, the generation of trimaps, indicating regions of known foreground and background pixels, normally requires human interaction or offline computation. This research addresses ways to automatically solve an alpha matte for an image in real time, and by extension video, using a consumer-level GPU. It does so even in the context of noisy environments that result in less reliable constraints than those found in controlled settings. To attack these challenges, we are particularly interested in automatically generating trimaps from depth buffers for dynamic scenes so that algorithms requiring denser constraints may be used. We then explore a sub-image-based approach to parallelize an existing hierarchical approach on high-resolution imagery by taking advantage of local information. We show that locality can be exploited to significantly reduce the memory and compute requirements previously necessary when computing alpha mattes of high-resolution images. We achieve this using a parallelizable scheme that is independent of both the matting algorithm and image features. Combined, these research topics provide a basis for Mixed Reality scenarios using real-time natural image matting on high-definition video sources.
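The depth-based trimap generation described above can be sketched as follows. This is a minimal CPU illustration, not the dissertation's GPU implementation, assuming a simple depth threshold separates foreground from background and a fixed-width "unknown" band is opened along the boundary, where the alpha matte must actually be solved.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def trimap_from_depth(depth, fg_max_depth, band=5):
    """Build a matting trimap from a depth buffer.

    Pixels closer than fg_max_depth are treated as foreground. Eroding the
    foreground mask gives 'sure foreground'; dilating it and inverting gives
    'sure background'; everything else is the unknown band.
    Encoding: 255 = foreground, 0 = background, 128 = unknown.
    """
    fg = depth < fg_max_depth
    sure_fg = binary_erosion(fg, iterations=band)     # shrink: definitely fg
    sure_bg = ~binary_dilation(fg, iterations=band)   # grow then invert: definitely bg
    trimap = np.full(depth.shape, 128, dtype=np.uint8)
    trimap[sure_fg] = 255
    trimap[sure_bg] = 0
    return trimap

# Synthetic depth buffer: a near object (depth 1.0) in front of a far wall (depth 5.0).
depth = np.full((64, 64), 5.0)
depth[16:48, 16:48] = 1.0
tm = trimap_from_depth(depth, fg_max_depth=2.0, band=3)
```

A natural image matting algorithm would then solve for alpha only in the 128-valued band, which is what makes denser-constraint algorithms feasible per frame.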
- Date Issued
- 2012
- Identifier
- CFE0004515, ucf:49284
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004515
- Title
- USING AUGMENTED REALITY FOR STUDYING LEFT TURN MANEUVER AT UN-SIGNALIZED INTERSECTION AND HORIZONTAL VISIBILITY BLOCKAGE.
- Creator
-
Moussa, Ghada, Radwan, Essam, University of Central Florida
- Abstract / Description
-
Augmented reality (AR) is a promising paradigm that can provide users with real-time, high-quality visualization of a wide variety of information. In AR, virtual objects are added to the real-world view in real time. AR technology can offer a very realistic environment for driving enhancement as well as for driving performance testing under different scenarios. This can be achieved by adding virtual objects (people, vehicles, hazards, and other objects) to the normal view while driving in a safe, controlled environment. In this dissertation, the feasibility of adapting AR technology to traffic engineering was investigated. Two AR systems, the AR Vehicle (ARV) system and the Offline AR Simulator (OARSim) system, were built. The systems' outcomes, as well as on-the-road driving under AR, were evaluated. In evaluating the systems' outcomes, the systems successfully duplicated real scenes and generated new scenes without any visual inconsistency. In evaluating on-the-road driving under AR, drivers' distance judgment, speed judgment, and level of comfort while driving were evaluated. In addition, our systems were used to conduct two traffic engineering studies: left-turn maneuver at an un-signalized intersection, and horizontal visibility blockage when following a light truck vehicle. The results from this work supported the validity of our AR systems as a surrogate for field testing in transportation research.
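The core AR operation the abstract describes, adding virtual objects to the real-world view in real time, reduces per pixel to "over" compositing of a rendered virtual layer onto the camera frame. The ARV/OARSim internals are not given in the abstract, so the following is a generic sketch of that compositing step only.

```python
import numpy as np

def composite_over(camera_frame, virtual_rgb, virtual_alpha):
    """Per-pixel 'over' compositing of a virtual layer onto a camera frame.

    camera_frame:  HxWx3 floats in [0, 1], the real-world view
    virtual_rgb:   HxWx3 floats in [0, 1], the rendered virtual objects
    virtual_alpha: HxW floats in [0, 1], coverage of the virtual layer
    """
    a = virtual_alpha[..., None]            # broadcast alpha over the RGB channels
    return a * virtual_rgb + (1.0 - a) * camera_frame

# Toy example: one fully opaque white virtual pixel over a black camera frame.
frame = np.zeros((4, 4, 3))
virt = np.ones((4, 4, 3))
alpha = np.zeros((4, 4))
alpha[1, 1] = 1.0
out = composite_over(frame, virt, alpha)
```

In a live system this runs once per video frame, with `virtual_rgb` and `virtual_alpha` produced by the renderer that draws the virtual vehicles, people, and hazards.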
- Date Issued
- 2006
- Identifier
- CFE0001430, ucf:47044
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001430
- Title
- REAL-TIME MONOCULAR VISION-BASED TRACKING FOR INTERACTIVE AUGMENTED REALITY.
- Creator
-
Spencer, Lisa, Guha, Ratan, University of Central Florida
- Abstract / Description
-
The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element in many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimum set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but an essential component of an augmented reality system: tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera. We used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms.
Since the lighting in most environments where video monitoring is done is close to white (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outside), we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces that have been shown to be invariant to various lighting conditions, including view angle, light angle, light intensity, or light color, using models of the physical properties of reflection. Our experiments show how well these derived quantities actually remained constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that were more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid-colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard: easy because of the single color and the shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid-colored objects that uses color and edge information and is fast enough for real-time operation. We also demonstrate a fast deinterlacing method to avoid "tearing" of fast-moving edges when recorded by an interlaced camera, and optimization techniques that usually achieved a speedup of about 10 over an implementation that already used optimized image processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations.
Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time. The results are displayed as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection. We present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data and leveraging the temporal coherence in video data to mitigate false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment, along with the two augmented reality applications, provides improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance.
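The illumination-intensity-invariant color spaces discussed in this abstract can be illustrated with normalized rgb chromaticity, one of the classic derived quantities of this kind. This is a sketch of the general idea, not necessarily the specific spaces the dissertation evaluates: scaling the illumination intensity multiplies R, G, and B by the same factor, which cancels in each ratio.

```python
import numpy as np

def normalized_rgb(rgb):
    """Convert RGB to intensity-normalized chromaticity (r, g, b).

    Each channel is divided by R + G + B, so a uniform intensity scale
    factor cancels and the chromaticity is (ideally) unchanged.
    Black pixels (sum 0) map to (0, 0, 0) to avoid division by zero.
    """
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    return np.divide(rgb, s, out=np.zeros_like(rgb), where=s > 0)

pixel = np.array([120.0, 60.0, 20.0])
dimmed = 0.5 * pixel                 # same surface under half the light intensity
print(normalized_rgb(pixel))
print(normalized_rgb(dimmed))        # same chromaticity as the undimmed pixel
```

With real cameras the invariance is only approximate (sensor noise, gamma, and clipping all break the ideal model), which is precisely what the color experiment above measures.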
- Date Issued
- 2006
- Identifier
- CFE0001075, ucf:46786
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001075
- Title
- Evaluating the utility of a virtual environment for childhood social anxiety disorder.
- Creator
-
Wong, Nina, Beidel, Deborah, Rapport, Mark, Sims, Valerie, University of Central Florida
- Abstract / Description
-
Objective: Two significant challenges for the dissemination of social skills training programs are (a) the need to provide sufficient practice opportunities to assure skill consolidation and (b) the need to assure skill generalization (i.e., use of the skills outside the clinic setting). In the case of social anxiety disorder, virtual environments may provide one strategy to address these issues. This investigation describes the development of an interactive, skills-oriented virtual school environment and evaluates its utility for the treatment of social anxiety disorder in preadolescent children (Study 1). This environment included both in-clinic and at-home solutions. In addition, a pilot replication/extension study further examined preliminary treatment efficacy between children who received a standard multi-component treatment and children who received the modified treatment with social skills practice in a virtual environment (Study 2). Method: Eleven children with a primary diagnosis of social anxiety disorder, between 7 and 12 years old, participated in the initial feasibility trial (Study 1). Five additional children participated in the replication/extension study (Study 2). To investigate preliminary treatment efficacy, clinical outcome measures for the Study 2 sample were compared to those of a comparison sample who received the standard treatment. Results: Overall, the virtual environment program was viewed as an acceptable, feasible, and credible treatment component by children, parents, and clinicians alike, but modifications would likely improve the current version. Additionally, although preliminary, children who received the modified treatment with virtual environment practice demonstrated significant improvement at post-treatment on clinician ratings but not on parent- or self-reported measures. Conclusion: Virtual environments are feasible, acceptable, and credible treatment components for clinical use.
Future investigations will determine if the addition of this dose-controlled and intensive social skills practice results in a treatment outcome equivalent to that of traditional cognitive-behavioral programs.
- Date Issued
- 2013
- Identifier
- CFE0004962, ucf:49583
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004962
- Title
- Mental rotation: Can familiarity alleviate the effects of complex backgrounds?.
- Creator
-
Selkowitz, Anthony, Sims, Valerie, Jentsch, Florian, Chin, Matthew, Cash, Mason, University of Central Florida
- Abstract / Description
-
This dissertation investigated the effects of complex backgrounds on mental rotation. Stimulus familiarity and background familiarity were manipulated. It systematically explored how familiarizing participants with objects and complex backgrounds affects their performance on a mental rotation task involving complex backgrounds. This study had 113 participants recruited through the UCF Psychology SONA system. Participants were familiarized with a stimulus in a task where they were told to distinguish the stimulus from 3 other stimuli. A similar procedure was used to familiarize the backgrounds. The research design was a 2 stimulus familiarity (familiarized with the Target Stimulus, not familiarized with the Target Stimulus) by 2 background familiarity (familiarized with the Target Background, not familiarized with the Target Background) by 2 stimulus response condition (Target Stimulus, Non-Target Stimulus) by 3 background response condition (Target Background, Non-Target Background, Blank Background) by 12 degree of rotation (0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330) mixed design. The study utilized target stimulus and target background familiarity conditions as the between-subjects variables. Background, stimulus, and degree of rotation were within-subjects variables. The participants' performance was measured using reaction time and percent of errors. Reaction time was computed using only the correct responses. After the familiarization task, participants engaged in a mental rotation task featuring stimuli and backgrounds that were present or not present in the familiarization task. A 2 (stimulus familiarization condition) by 2 (background familiarization condition) by 2 (stimulus response condition) by 3 (background response condition) by 12 (degree of rotation) mixed ANOVA was computed utilizing reaction time and percent of errors.
Results suggest that familiarity with the Target Background had the largest effect on improving performance across response conditions. The results also suggest that familiarity with both the Target Stimulus and the Target Background promoted inefficient mental rotation strategies, resulting in no significant difference from participants familiarized with neither the Target Stimulus nor the Target Background. Theoretical conclusions are drawn about stimulus familiarity and background familiarity. Future studies should investigate the effects of long-term familiarity practice on mental rotation and complex backgrounds.
- Date Issued
- 2015
- Identifier
- CFE0005998, ucf:50789
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005998
- Title
- Environmental Physical-Virtual Interaction to Improve Social Presence with a Virtual Human in Mixed Reality.
- Creator
-
Kim, Kangsoo, Welch, Gregory, Gonzalez, Avelino, Sukthankar, Gita, Bruder, Gerd, Fiore, Stephen, University of Central Florida
- Abstract / Description
-
Interactive Virtual Humans (VHs) are increasingly used to replace or assist real humans in various applications, e.g., military and medical training, education, or entertainment. In most VH research, the perceived social presence with a VH, which denotes the user's sense of being socially connected or co-located with the VH, is the decisive factor in evaluating the social influence of the VH (a phenomenon where human users' emotions, opinions, or behaviors are affected by the VH). The purpose of this dissertation is to develop new knowledge about how characteristics and behaviors of a VH in a Mixed Reality (MR) environment can affect the perception of and resulting behavior with the VH, and to find effective and efficient ways to improve the quality and performance of social interactions with VHs. Important issues and challenges in real-virtual human interactions in MR, e.g., the lack of physical-virtual interaction, are identified and discussed through several user studies incorporating interactions with VH systems. In the studies, different features of VHs are prototyped and evaluated, such as a VH's ability to be aware of and influence the surrounding physical environment, while measuring objective behavioral data as well as collecting subjective responses from the participants. The results from the studies support the idea that the VH's awareness and influence of the physical environment can improve not only the perceived social presence with the VH, but also the trustworthiness of the VH within a social context. The findings will contribute towards designing more influential VHs that can benefit a wide range of simulation and training applications for which a high level of social realism is important, and that can be more easily incorporated into our daily lives as social companions, providing reliable relationships and convenience in assisting with daily tasks.
- Date Issued
- 2018
- Identifier
- CFE0007340, ucf:52115
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007340
- Title
- High performance liquid crystal devices for augmented reality and virtual reality.
- Creator
-
Talukder, Md Javed Rouf, Wu, Shintson, Moharam, Jim, Amezcua Correa, Rodrigo, Dong, Yajie, University of Central Florida
- Abstract / Description
-
See-through augmented reality and virtual reality displays are emerging due to their widespread applications in education, engineering design, medical, retail, transportation, automotive, aerospace, gaming, and entertainment. For augmented reality and virtual reality displays, high-resolution density, high luminance, fast response time, and high ambient contrast ratio are critically needed. High-resolution density helps eliminate the screen-door effect, while high luminance and fast response time enable low-duty-ratio operation, which plays a key role in suppressing image blur. A dimmer placed in front of an AR display helps control the incident background light, which in turn improves the image contrast. In this dissertation, we focus on three crucial display metrics: high luminance, fast motion picture response time (MPRT), and high ambient contrast ratio. We report a fringe-field switching liquid crystal display, abbreviated as d-FFS LCD, using a low-viscosity material and a new diamond-shape electrode configuration. Our proposed device shows high transmittance, fast motion picture response time, low operation voltage, wide viewing angle, and indistinguishable color shift and gamma shift. We also investigate the effects of rubbing angle on transmittance and response time. When the rubbing angle is 0 degrees, the virtual wall effect is strong, resulting in fast response time but compromised transmittance. When the rubbing angle is greater than 1.2 degrees, the virtual walls disappear; as a result, the transmittance increases dramatically, but the tradeoff is slower response time. We also demonstrate a photo-responsive guest-host liquid crystal (LC) dimmer to enhance the ambient contrast ratio in augmented reality displays. The LC composition consists of a photo-stable chiral agent, photosensitive azobenzene, and a dichroic dye in a nematic host with negative dielectric anisotropy.
In this device, the transmittance changes from a bright state to a dark state upon exposure to low-intensity UV or blue light. The reversal process can be carried out by red light or a thermal effect. Such a polarizer-free, photo-activated dimmer can also be used for a wide range of applications, such as diffractive photonic devices, portable information systems, vehicular head-up displays, and smart windows for energy saving. A dual-stimuli, polarizer-free, dye-doped liquid crystal (LC) device is demonstrated as a dimmer. Upon UV/blue light exposure, the LC directors and dye molecules turn from an initially vertical alignment (high-transmittance state) to a twisted fingerprint structure (low-transmittance state). The reversal process is accelerated by combining a longitudinal electric field to unwind the LC directors from the twisted fingerprint to the homeotropic state, and a red light to transform the cis azobenzene back to trans. Such an electric-field-assisted reversal time can be reduced from ~10 s to a few milliseconds, depending on the applied voltage. Considering its low power consumption, low manufacturing cost, and large fabrication tolerance, this device can be used as a smart dimmer to enhance the ambient contrast ratio for augmented reality displays.
- Date Issued
- 2019
- Identifier
- CFE0007731, ucf:52425
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007731