- Title
- Intelligent Selection Techniques For Virtual Environments.
- Creator
-
Cashion, Jeffrey, Laviola II, Joseph, Bassiouni, Mostafa, Hughes, Charles, Bowman, Doug, University of Central Florida
- Abstract / Description
-
Selection in 3D games and simulations is a well-studied problem. Many techniques have been created to address the typical scenarios a user could experience. For any single scenario with consistent conditions, there is likely a technique that is well suited; if there is not, there is an opportunity to create one that best suits the expected conditions of that new scenario. It is critical that the user be given an appropriate technique to interact with their environment. Without it, the entire experience is at risk of becoming burdensome and unenjoyable.

With all of the different possible scenarios, problems arise when two or more are part of the same program. If they are placed closely together, or even intertwined, the developer is often forced to pick a single technique that performs passably in both but is likely not optimal for either, or optimal in just one of them. In that case, the user is left to perform selections with a technique that is lacking in one way or another, which can increase errors and frustration.

In our research, we outlined different selection scenarios, classified by their level of object density (number of objects in the scene) and object velocity. We then performed an initial study on how these conditions impact the performance of various selection techniques, including a new selection technique that we developed for this test, called Expand. Our results showed, among other things, that a standard Raycast technique works well in slow-moving and sparse environments, while our new Expand technique works well in denser environments.

With the results from our first study, we sought to develop something that would bridge the gap in performance between the selection techniques tested. Our idea was a framework that could host several different selection techniques and determine which was the most suitable at any time.
Each selection technique would report how effective it was, given the provided scenario conditions. The framework was responsible for activating the appropriate selection technique when the user made a selection attempt. With this framework in hand, we performed two additional user studies to determine how effective it could be in actual use and to identify its strengths and weaknesses. Each study compared several selection techniques individually against the framework, which utilized them collectively, picking the most suitable. The same scenarios from our first study were reused. From these studies, we gained a deeper understanding of the many challenges associated with automatic selection technique determination. The results showed that transitioning between techniques was potentially viable, but rife with design challenges that made its optimization quite difficult.

In an effort to sidestep some of the issues surrounding the switching of discrete techniques, we attacked the problem from the other direction and made a single technique act similarly to two techniques, adjusting dynamically to conditions. We performed a user study to analyze the performance of such a technique, with promising results. While the quantitative differences were small, user feedback indicated that users preferred this technique over the others, which were static in nature.

Finally, we sought a deeper understanding of existing selection techniques that were dynamic in nature: how they were designed and how they could be improved. We scrutinized the attributes of each technique that were already adjusted dynamically, or that could be, and devised new ways in which each technique could be improved. Within this analysis, we also considered how each technique could best be integrated into the Auto-Select framework we proposed earlier.
This overall analysis of the latest selection techniques left us with an array of new variants that warrant being created and tested against their existing versions.

Our overall research goal was to perform an analysis of selection techniques that intelligently adapt to their environment. We believe that we achieved this by performing several iterative development cycles, including user studies, ultimately leading to innovation in the field of selection. We conclude our research with yet more questions left to be answered, and we intend to pursue further research regarding some of them as time permits.
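The Auto-Select idea described in this abstract, where each technique scores its own suitability for the current scene conditions and the framework activates the highest-scoring one, can be sketched as below. This is a hypothetical illustration, not the dissertation's implementation: the class names, normalized condition values, and scoring rules (ray-casting favoring sparse, slow scenes; Expand tolerating density) are assumptions drawn loosely from the stated results.

```python
# Hypothetical sketch of an Auto-Select framework: each selection technique
# reports a suitability score for the current scene conditions, and the
# framework activates the technique with the highest score.
from dataclasses import dataclass

@dataclass
class SceneConditions:
    object_density: float   # normalized 0..1 (assumed representation)
    object_velocity: float  # normalized 0..1 (assumed representation)

class Raycast:
    name = "Raycast"
    def suitability(self, c: SceneConditions) -> float:
        # Assumed rule: ray-casting works best in sparse, slow scenes.
        return (1.0 - c.object_density) * (1.0 - c.object_velocity)

class Expand:
    name = "Expand"
    def suitability(self, c: SceneConditions) -> float:
        # Assumed rule: progressive-refinement selection tolerates density.
        return c.object_density

class AutoSelectFramework:
    def __init__(self, techniques):
        self.techniques = techniques
    def choose(self, conditions: SceneConditions):
        # Activate whichever technique reports the highest suitability.
        return max(self.techniques, key=lambda t: t.suitability(conditions))

framework = AutoSelectFramework([Raycast(), Expand()])
sparse_slow = SceneConditions(object_density=0.1, object_velocity=0.1)
dense = SceneConditions(object_density=0.9, object_velocity=0.5)
print(framework.choose(sparse_slow).name)  # Raycast
print(framework.choose(dense).name)        # Expand
```

The design challenge the abstract mentions, optimizing the transitions between techniques, lives precisely in these scoring functions: a poor score model causes the framework to switch at the wrong moments.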
- Date Issued
- 2014
- Identifier
- CFE0005469, ucf:50381
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005469
- Title
- Getting the Upper Hand: Natural Gesture Interfaces Improve Instructional Efficiency on a Conceptual Computer Lesson.
- Creator
-
Bailey, Shannon, Sims, Valerie, Jentsch, Florian, Bowers, Clint, Johnson, Cheryl, University of Central Florida
- Abstract / Description
-
As gesture-based interactions with computer interfaces become more technologically feasible for educational and training systems, it is important to consider which interactions are best for the learner. Computer interactions should not interfere with learning nor increase the mental effort of completing the lesson. The purpose of the current set of studies was to determine whether natural gesture-based interactions, or instruction of those gestures, help the learner in a computer lesson by increasing learning and reducing mental effort. First, two studies were conducted to determine which gestures were considered natural by participants. Then, those gestures were implemented in an experiment comparing type of gesture and type of gesture instruction on learning conceptual information from a computer lesson. The goal of these studies was to determine the instructional efficiency, that is, the extent of learning taking into account the amount of mental effort, of implementing gesture-based interactions in a conceptual computer lesson.

To test whether the type of gesture interaction affects conceptual learning in a computer lesson, the gesture-based interactions were either naturally or arbitrarily mapped to the learning material on the fundamentals of optics. The optics lesson presented conceptual information about reflection and refraction, and participants used the gesture-based interactions during the lesson to manipulate on-screen lenses and mirrors in a beam of light. The beam of light refracted or reflected at the angle corresponding to the type of lens or mirror. The natural gesture-based interactions were those that mimicked the physical movement used to manipulate the lenses and mirrors in the optics lesson, while the arbitrary gestures were those that did not match the movement of the lens or mirror being manipulated.
The natural gestures implemented in the computer lesson were those that participants performed in Study 1, in which they produced gestures they considered natural for a set of actions, and that were rated in Study 2 as most closely resembling the physical interactions they represent. The arbitrary gestures were those rated by participants as most arbitrary for each computer action in Study 2. To test whether the effect of novel gesture-based interactions depends on how they are taught, the way the gestures were instructed was varied in the main experiment by using either video- or text-based tutorials.

Results of the experiment support that natural gesture-based interactions were better for learning than arbitrary gestures, while instruction of the gestures largely did not affect learning or the amount of mental effort felt during the task. To further investigate the factors affecting instructional efficiency in using gesture-based interactions for a computer lesson, individual differences of the learner were taken into account. Results indicated that the instructional efficiency of the gestures and their instruction depended on an individual's spatial ability, such that arbitrary gesture interactions taught with a text-based tutorial were particularly inefficient for those with lower spatial ability. These findings are explained in the context of Embodied Cognition and Cognitive Load Theory, which account for why gesture-based interactions and their instruction impacted instructional efficiency, and guidelines are provided for the instructional design of computer lessons using natural user interfaces.
Gesture-based interactions that are natural (i.e., that mimic the physical interaction corresponding to the learning material) were more instructionally efficient than arbitrary gestures, possibly because natural gestures help schema development of conceptual information through physical enactment of the learning material. Furthermore, natural gestures resulted in lower cognitive load than arbitrary gestures, because arbitrary gestures that do not match the learning material may add working-memory processing unrelated to the lesson content. Additionally, the gestures were taught with either video- or text-based tutorials. It was hypothesized that video-based tutorials would be the better way to instruct gesture-based interactions, because videos may help the learner visualize the interactions and create a more easily recalled sensorimotor representation of the gestures; however, this hypothesis was not supported, and there was no strong evidence that video-based tutorials were more instructionally efficient than text-based instructions.

The results of the current set of studies can be applied to educational and training systems that incorporate a gesture-based interface. The finding that more natural gestures are better for learning efficiency, cognitive load, and a variety of usability factors should encourage instructional designers and researchers to keep the user in mind when developing gesture-based interactions.
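The "extent of learning taking into account mental effort" notion used throughout this abstract matches a standard measure from the cognitive-load literature, Paas and van Merrienboer's relative condition efficiency, E = (z_performance - z_effort) / sqrt(2), computed from standardized scores. Whether the dissertation used exactly this formula is an assumption; the sketch below shows the common form with made-up data.

```python
# Sketch of relative condition efficiency: standardize performance and
# mental-effort scores, then E = (z_perf - z_effort) / sqrt(2) per person.
import math
from statistics import mean, stdev

def z_scores(values):
    # Standardize against the sample mean and sample standard deviation.
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def instructional_efficiency(performance, effort):
    """Per-participant efficiency from raw scores and effort ratings."""
    zp, ze = z_scores(performance), z_scores(effort)
    return [(p - e) / math.sqrt(2) for p, e in zip(zp, ze)]

# Toy data: higher scores with lower reported effort -> positive efficiency.
perf = [90, 80, 60, 50]
effort = [3, 4, 6, 7]
eff = instructional_efficiency(perf, effort)
print([round(e, 2) for e in eff])  # [1.55, 0.77, -0.77, -1.55]
```

Under this measure, the "particularly inefficient" condition reported above (arbitrary gestures taught by text, for low-spatial-ability learners) would be one with below-average performance and above-average effort, yielding a strongly negative E.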
- Date Issued
- 2017
- Identifier
- CFE0007278, ucf:52192
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007278
- Title
- ADAPTIVE INTELLIGENT USER INTERFACES WITH EMOTION RECOGNITION.
- Creator
-
NASOZ, FATMA, Lisetti, Christine L., University of Central Florida
- Abstract / Description
-
The focus of this dissertation is on creating Adaptive Intelligent User Interfaces that facilitate more natural communication during human-computer interaction by recognizing users' affective states (i.e., the emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a virtual reality environment to collect physiological signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected signals and to find unique physiological patterns of emotions.

An emotion elicitation experiment using movie clips was conducted to elicit sadness, anger, surprise, fear, frustration, and amusement from participants. Overall, the KNN, DFA, and MBP algorithms could recognize emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. A driving simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness), and the KNN, MBP, and RBP algorithms were used to classify the physiological signals by corresponding emotion. Overall, KNN could classify these three emotions with 66.3% accuracy, MBP with 76.7%, and RBP with 91.9%.

Adaptation of the interface was designed to provide multi-modal feedback to users about their current affective state and to respond to users' negative emotional states in order to decrease the possible negative impacts of those emotions.
A Bayesian Belief Network formalization was employed to develop the user model, enabling the intelligent system to adapt appropriately to the current context and situation by considering user-dependent factors such as personality traits and preferences.
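Of the classifiers listed in this abstract, k-Nearest Neighbor is the simplest to illustrate. The sketch below is a hypothetical toy, not the dissertation's pipeline: the two-dimensional features, the emotion labels, and k=3 are all invented for illustration; the actual work used richer physiological signals.

```python
# Toy k-Nearest-Neighbor emotion classifier: label a physiological feature
# vector by majority vote over its k closest labeled training samples.
import math
from collections import Counter

def knn_classify(samples, labels, query, k=3):
    """samples: list of feature tuples; labels: emotion label per sample."""
    # Rank training samples by Euclidean distance to the query vector.
    order = sorted(range(len(samples)),
                   key=lambda i: math.dist(samples[i], query))
    # Majority vote among the k nearest neighbors.
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Invented features, e.g. (normalized heart rate, skin conductance).
samples = [(0.9, 0.8), (0.85, 0.9), (0.8, 0.85),
           (0.2, 0.1), (0.1, 0.2), (0.15, 0.15)]
labels = ["fear", "fear", "fear", "boredom", "boredom", "boredom"]
print(knn_classify(samples, labels, (0.88, 0.82)))  # fear
```

In practice the reported accuracies (66.3% to 91.9%) suggest the neural-network classifiers (MBP, RBP) found patterns that simple distance-based voting missed.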
- Date Issued
- 2004
- Identifier
- CFE0000126, ucf:46201
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000126
- Title
- SketChart: A Pen-Based Tool for Chart Generation and Interaction.
- Creator
-
Vargas Gonzalez, Andres, Laviola II, Joseph, Foroosh, Hassan, Hua, Kien, University of Central Florida
- Abstract / Description
-
It has been shown that representing data with the right visualization increases the understanding of qualitative and quantitative information encoded in documents. However, current tools for generating such visualizations rely on traditional WIMP techniques, which can make free interaction and direct manipulation of the content harder. In this thesis, we present a pen-based prototype for data visualization using 10 different types of bar-based charts. The prototype lets users sketch a chart and interact with the information once the drawing is identified. The prototype's user interface consists of an area to sketch and touch-based elements that are displayed depending on the context and nature of the outline. Brainstorming and live presentations can benefit from the prototype due to its ability to visualize and manipulate data in real time. We also perform a short, informal user study to measure the effectiveness of the tool in recognizing sketches and user acceptance while interacting with the system. Results show SketChart's strengths, weaknesses, and areas for improvement.
- Date Issued
- 2014
- Identifier
- CFE0005434, ucf:50405
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005434
- Title
- The Impact of User-Generated Interfaces on the Participation of Users with a Disability in Virtual Environments: Blizzard Entertainment's World of Warcraft Model.
- Creator
-
Merritt, Donald, McDaniel, Rudy, Zemliansky, Pavel, Mauer, Barry, Kim, Si Jung, University of Central Florida
- Abstract / Description
-
When discussing games and the experience of gamers, those with disabilities are often overlooked. This has left a gap in our understanding of the experience of players with disabilities in virtual game worlds. However, there are examples of players with disabilities being very successful in the virtual-world video game World of Warcraft, suggesting an opportunity to study the game for usability insights applicable to other virtual-world environments. This study surveyed World of Warcraft players with disabilities online for insight into how they used interface addons to manage their experience and identity performance in the game. A rubric was also created to evaluate a selection of addons for evidence of the principles of Universal Design for Learning (UDL). The study found that World of Warcraft players with disabilities do not use addons more than able-bodied players, but some of the most popular addons do exhibit many or most of the principles of UDL; UDL principles appear to have emerged organically from addon iterations over time. The study concludes by suggesting that the approach to user-generated content for the game interface taken by the creators of World of Warcraft, combined with high user investment in the environment, can lead to more accessible virtual-world learning environments in the future.
- Date Issued
- 2015
- Identifier
- CFE0005667, ucf:50175
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005667
- Title
- REALNAV: EXPLORING NATURAL USER INTERFACES FOR LOCOMOTION IN VIDEO GAMES.
- Creator
-
Williamson, Brian, LaViola, Joseph, University of Central Florida
- Abstract / Description
-
We present an exploration of realistic locomotion interfaces in video games using spatially convenient input hardware. In particular, we use Nintendo Wii Remotes to create natural mappings between user actions and their representation in a video game. Targeting American football video games, we used the role of the quarterback as an exemplar, since the player needs to maneuver effectively in a small area, run down the field, and perform evasive gestures such as spinning, jumping, or the "juke."

In our study, we developed three locomotion techniques. The first used a single Wii Remote, placed anywhere on the user's body, using only its acceleration data. The second used only the Wii Remote's infrared sensor and had to be placed on the user's head. The third combined a Wii Remote's acceleration and infrared data using a Kalman filter; the Wii MotionPlus was also integrated to bring the user's orientation into the video game. To evaluate the different techniques, we compared each of them against a cost-effective six-degree-of-freedom (6DOF) optical tracker paired with two Wii Remotes placed on the user's feet. Finally, a user study was performed to determine whether a preference existed among these techniques.

The results showed that the second and third techniques had the same location accuracy as the cost-effective 6DOF tracker, but the first was too inaccurate for video game players. Furthermore, the range of the Wii Remote's infrared sensor and MotionPlus exceeded that of the comparison technique's optical tracker. Finally, the user study showed that video game players preferred the third method over the second, but were split on the use of the MotionPlus when tasks did not require it.
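The third technique's fusion of accelerometer and infrared data via a Kalman filter can be sketched in one dimension. This is an illustrative simplification, not the dissertation's filter: the 1-D state, 60 Hz step, and noise values q and r are assumptions; the real system tracked multiple degrees of freedom.

```python
# Minimal 1-D Kalman filter: predict position from accelerometer input,
# then correct the prediction with an infrared position measurement.
def kalman_step(x, v, p, accel, z, dt=1/60, q=0.01, r=0.05):
    """One predict/update cycle for position x, velocity v, variance p."""
    # Predict: integrate acceleration into the state estimate.
    v = v + accel * dt
    x = x + v * dt
    p = p + q                      # process noise grows the uncertainty
    # Update: blend in the infrared position measurement z.
    k = p / (p + r)                # Kalman gain
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, v, p

x, v, p = 0.0, 0.0, 1.0            # start uncertain (large p)
for z in [0.10, 0.11, 0.12, 0.13]:  # simulated infrared position fixes
    x, v, p = kalman_step(x, v, p, accel=0.0, z=z)
print(round(x, 3), round(p, 3))
```

The gain k weights the measurement against the prediction: when the infrared fix is lost (as when the sensor bar leaves the Wii Remote's view), r effectively grows and the filter coasts on the accelerometer prediction, which is the practical payoff of combining the two sensors.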
- Date Issued
- 2009
- Identifier
- CFE0002938, ucf:47956
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002938
- Title
- Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling (&) Recovery.
- Creator
-
Koh, Senglee, Laviola II, Joseph, Foroosh, Hassan, Zhang, Shaojie, Kim, Si Jung, University of Central Florida
- Abstract / Description
-
State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in the control and collaboration of manipulation task behaviors. However, this remains a significant challenge, given that many WIMP-style tools require proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents exploiting that knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning toward recovery conditions that resume the task and avoid similar errors.

In this work, we explore interactive techniques allowing non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities with which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with pick-and-place tasks in an ideal Blocks World environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic Object and Location grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guided task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction.
Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
- Date Issued
- 2018
- Identifier
- CFE0007209, ucf:52292
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007209
- Title
- Multi-Modal Interfaces for Sensemaking of Graph-Connected Datasets.
- Creator
-
Wehrer, Anthony, Hughes, Charles, Wisniewski, Pamela, Pattanaik, Sumanta, Specht, Chelsea, Lisle, Curtis, University of Central Florida
- Abstract / Description
-
The visualization of hypothesized evolutionary processes is often shown through phylogenetic trees. Given evolutionary data presented in one of several widely accepted formats, software exists to render these data into a tree diagram. However, software packages commonly used by biologists today often do not provide means to dynamically adjust and customize these diagrams for studying new hypothetical relationships or for illustration and publication purposes, and even where these options are available, they can lack intuitiveness and ease of use. The goal of our research is, thus, to investigate more natural and effective means of sensemaking of the data with different user input modalities.

To this end, we experimented with different input modalities, designing and running a series of prototype studies and ultimately focusing our attention on pen-and-touch. Through several iterations of feedback and revision, provided with the help of biology experts and students, we developed a pen-and-touch phylogenetic tree browsing and editing application called PhyloPen. This application expands on the capabilities of existing software with visualization techniques such as overview+detail and linked data views, and with new interaction and manipulation techniques using pen-and-touch. To determine its impact on phylogenetic tree sensemaking, we conducted a within-subject comparative summative study against the most comparable and commonly used state-of-the-art mouse-based software system, Mesquite. Conducted with biology majors at the University of Central Florida, the study had each participant use both software systems on a set number of exercise tasks of the same type. Judging effectiveness by several dependent measures, the results show PhyloPen was significantly better in terms of usefulness, satisfaction, ease of learning, ease of use, and cognitive load, and about the same in completion time.
These results support an interaction paradigm that is superior to classic mouse-based interaction, one that could potentially be applied to other communities that employ graph-based representations of their problem domains.
- Date Issued
- 2019
- Identifier
- CFE0007872, ucf:52788
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007872
- Title
- MARKERLESS TRACKING USING POLAR CORRELATION OF CAMERA OPTICAL FLOW.
- Creator
-
Gupta, Prince, da Vitoria Lobo, Niels, University of Central Florida
- Abstract / Description
-
We present a novel, real-time, markerless vision-based tracking system employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and that it is able to track large-range motions even in outdoor settings.

We also show how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms for recovering motion parameters at different levels of completeness. We show how optical flow in opposing cameras can be used to recover motion parameters of the multi-camera rig. Experimental results show gesture recognition accuracy of 88.0%, 90.7%, and 86.7% for the three techniques, respectively, across a set of 15 gestures.
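The cancellation idea can be illustrated with a toy decomposition. This is not the dissertation's algorithm: it assumes an idealized setup in which rig rotation contributes mean flow of the same sign in both opposing image planes while lateral translation contributes opposite signs (the sign convention depends on the camera geometry), so summing and differencing the mean flows separates the two components.

```python
# Toy decomposition of mean optical flow from two opposing cameras into a
# "common" component (shared sign in both images) and an "opposing"
# component (opposite signs), illustrating how one cancels the other.
def mean_flow(vectors):
    n = len(vectors)
    return (sum(v[0] for v in vectors) / n, sum(v[1] for v in vectors) / n)

def decompose(flow_a, flow_b):
    """Split mean flows from opposing cameras into common/opposing parts."""
    common = tuple((a + b) / 2 for a, b in zip(flow_a, flow_b))
    opposing = tuple((a - b) / 2 for a, b in zip(flow_a, flow_b))
    return common, opposing

# Invented flows: a shared component (0.5, 0) appears in both cameras,
# while an opposing component adds (0.2, 0) to A and (-0.2, 0) to B.
fa = mean_flow([(0.7, 0.0), (0.7, 0.0)])
fb = mean_flow([(0.3, 0.0), (0.3, 0.0)])
common, opposing = decompose(fa, fb)
print(tuple(round(c, 3) for c in common),
      tuple(round(o, 3) for o in opposing))  # (0.5, 0.0) (0.2, 0.0)
```

The appeal of such a decomposition is that it needs no markers or scene model: it operates purely on the flow fields the rig observes.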
- Date Issued
- 2010
- Identifier
- CFE0003163, ucf:48611
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003163
- Title
- Bridging the Gap Between Fun and Fitness: Instructional Techniques and Real-World Applications for Full-Body Dance Games.
- Creator
-
Charbonneau, Emiko, Laviola II, Joseph, Hughes, Charles, Tappen, Marshall, Angelopoulos, Theodore, Mueller, Florian, University of Central Florida
- Abstract / Description
-
Full-body controlled games offer the opportunity for not only entertainment, but education and exercise as well. Refined gameplay mechanics and content can boost intrinsic motivation and keep people playing over a long period of time, which is desirable for individuals who struggle with maintaining a regular exercise program. Within this genre, dance rhythm games have proven popular with game console owners. Yet, while other types of games utilize story mechanics that keep players engaged for dozens of hours, motion-controlled dance games are just beginning to incorporate these elements. In addition, this control scheme is still young, having become commercially available only in the last few years, and instructional displays and clear real-time feedback remain difficult challenges.

This thesis investigates the potential for full-body dance games to be used as tools for entertainment, education, and fitness. We built several game prototypes to investigate visual, aural, and tactile methods for instruction and feedback. We also evaluated the fitness potential of the game Dance Central 2, both by itself and with extra game content that unlocked based on performance.

Significant contributions include a framework for running a longitudinal video game study, results indicating high engagement with some fitness potential, and an informed discussion of how dance games could make exertion a more enjoyable experience.
- Date Issued
- 2013
- Identifier
- CFE0004829, ucf:49690
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004829
- Title
- DEVELOPMENT OF A GRAPHICAL USER INTERFACE FOR CAL3QHC CALLED CALQCAD.
- Creator
-
Gawalpanchi, Sheetal, Cooper, Charles, University of Central Florida
- Abstract / Description
-
One of the major sources of air pollution in United States metropolitan areas is automobiles. With the huge growth in the number of motor vehicles, and greater dependence on them, air pollution problems have been aggravated. According to the EPA, nearly 95% of the carbon monoxide (CO) in urban areas comes from mobile sources (EPA 1999), of which 51% is contributed by on-road vehicles. Carbon monoxide is one of the major mobile-source pollutants and has detrimental effects on human health; it results mainly from incomplete combustion of gasoline in motor vehicles (FDOT 1996). The National Environmental Policy Act (NEPA) gives important consideration to the actions to be taken, including transportation conformity. The Clean Air Act Amendments (CAAA, 1970) were an important step toward meeting the National Ambient Air Quality Standards (NAAQS). In order to evaluate CO and particulate matter (PM) impacts against the NAAQS criteria, it is necessary to conduct dispersion modeling of mobile source emissions. The design of transportation engineering systems (roadway design) should address both the flow of traffic and the air pollution aspects involved. Roadway projects need to conform to the State Implementation Plan (SIP) and meet the NAAQS. EPA guidelines for air quality modeling at such roadway intersections recommend the use of CAL3QHC. The model has embedded in it CALINE 3.0 (Benson 1979), a line source dispersion model based on the Gaussian equation, and requires parameters describing the roadway geometry, fleet volume, averaging time, surface roughness, emission factors, etc. CAL3QHC is a DOS-based model that requires the modeling parameters to be fed into an input file, and the creation of that input file is a tedious job.
Previous work at UCF resulted in the development of CALQVIEW, which expedites the process of creating input files, but the task of extracting the coordinates still has to be done manually. The main aim of this thesis is to reduce the analysis time for modeling emissions from roadway intersections by expediting the process of extracting the coordinates required by the CAL3QHC model. Normally, transportation engineers design and model intersections for traffic flow using tools such as AutoCAD and Microstation. This thesis developed software, named CALQCAD, that allows graphical editing and coordinate capture from an AutoCAD file. CALQCAD enables the air quality analyst to capture the coordinates from an AutoCAD 2004 file, which should expedite the process of modeling intersections and decrease analyst time from a few days to a few hours. It also helps the air quality analyst retain accuracy during the modeling process. The standalone interface was created to give the user full AutoCAD functionality in case the main drawing requires editing, while providing the modeler with a separate graphical user interface (GUI).
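The abstract does not describe how CALQCAD actually reads coordinates out of a drawing; as a purely hypothetical illustration of the manual step being automated, the sketch below pulls line-segment endpoints out of a DXF export (the text-based exchange format AutoCAD can produce), assuming a well-formed file with the standard alternating group-code/value lines. The function name and approach are illustrative assumptions, not CALQCAD's actual implementation.

```python
def extract_line_coordinates(dxf_text):
    """Collect (x1, y1, x2, y2) endpoints of LINE entities from DXF text.

    DXF stores data as alternating group-code / value lines; for a
    LINE entity, codes 10/20 hold the start x/y and 11/21 the end x/y.
    """
    lines = [s.strip() for s in dxf_text.splitlines()]
    coords, current = [], None
    for code, value in zip(lines[::2], lines[1::2]):
        if code == "0":
            # A code-0 record starts a new entity; flush the previous LINE.
            if current is not None and len(current) == 4:
                coords.append((current["10"], current["20"],
                               current["11"], current["21"]))
            current = {} if value == "LINE" else None
        elif current is not None and code in ("10", "20", "11", "21"):
            current[code] = float(value)
    if current is not None and len(current) == 4:
        coords.append((current["10"], current["20"],
                       current["11"], current["21"]))
    return coords
```

A batch of such endpoints could then be written into a CAL3QHC input file as link geometry, which is the manual transcription step the thesis set out to eliminate.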
- Date Issued
- 2005
- Identifier
- CFE0000483, ucf:46364
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000483
- Title
- OPTIMIZING THE DESIGN OF MULTIMODAL USER INTERFACES.
- Creator
-
Reeves, Leah, Stanney, Kay, University of Central Florida
- Abstract / Description
-
Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to achieving successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources.
These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.
- Date Issued
- 2007
- Identifier
- CFE0001636, ucf:47237
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001636
- Title
- Exploring 3D User Interface Technologies for Improving the Gaming Experience.
- Creator
-
Kulshreshth, Arun, Laviola II, Joseph, Hughes, Charles, Da Vitoria Lobo, Niels, Masuch, Maic, University of Central Florida
- Abstract / Description
-
3D user interface technologies have the potential to make games more immersive and engaging, and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is still unclear how their usage affects game play and whether there are any user performance benefits. A systematic study of these technologies in game environments is required to understand how game play is affected and how their usage can be optimized to achieve a better game play experience. This dissertation seeks to improve the gaming experience by exploring several 3DUI technologies. In this work, we focused on stereoscopic 3D viewing (to improve the viewing experience) coupled with motion-based control, head tracking (to make games more engaging), and faster gesture-based menu selection (to reduce the cognitive burden associated with menu interaction while playing). We first studied each of these technologies in isolation to understand their benefits for games. We present the results of our experiments evaluating the benefits of stereoscopic 3D (when coupled with motion-based control) and head tracking in games. We discuss the reasons behind these findings and provide recommendations for game designers who want to use these technologies to enhance gaming experiences. We also present the results of our experiments with finger-based menu selection techniques, with the aim of finding the fastest technique. Based on these findings, we custom designed an air-combat game prototype that simultaneously uses stereoscopic 3D, head tracking, and finger-count shortcuts to show that these technologies can be useful for games if the game is designed with them in mind. Additionally, to enhance depth discrimination and minimize visual discomfort, the game dynamically optimizes stereoscopic 3D parameters (convergence and separation) based on the user's look direction.
We conducted a within-subjects experiment where we examined performance data and self-reported data on users' perception of the game. Our results indicate that participants performed significantly better when all the 3DUI technologies (stereoscopic 3D, head tracking, and finger-count gestures) were available simultaneously, with head tracking as a dominant factor. We explore the individual contribution of each of these technologies to the overall gaming experience and discuss the reasons behind our findings. Our experiments indicate that 3D user interface technologies can make the gaming experience better if used effectively. Games must be designed to make use of the available 3D user interface technologies in order to provide a better gaming experience to the user. We explored a few technologies as part of this work and derived design guidelines for future game designers. We hope that our work will serve as a framework for future explorations of making games better using 3D user interface technologies.
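The abstract does not give the formulas the prototype uses to optimize convergence and separation from look direction; the sketch below is one plausible scheme, assuming a gaze ray is cast into the scene each frame and the hit distance is available. The parameter names and the linear separation falloff are assumptions for illustration, not the dissertation's actual method.

```python
def adjust_stereo_parameters(gaze_depth, near=0.5, far=100.0,
                             max_separation=0.06):
    """Place the convergence plane at the gazed-at depth and scale
    eye separation down for near objects to limit screen disparity.

    gaze_depth: distance (scene units) to the object hit by a ray
    cast along the user's look direction.
    """
    # Clamp so the convergence plane stays inside the view frustum.
    convergence = min(max(gaze_depth, near), far)
    # Shrink separation when the user focuses up close: large
    # separation at small convergence distances causes discomfort.
    separation = max_separation * (convergence - near) / (far - near)
    return convergence, separation
```

A real implementation would also smooth these values over several frames, since snapping the convergence plane to each new gaze target causes visible depth jumps.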
- Date Issued
- 2015
- Identifier
- CFE0005643, ucf:50190
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005643
- Title
- Facilitating Information Retrieval in Social Media User Interfaces.
- Creator
-
Costello, Anthony, Tang, Yubo, Fiore, Stephen, Goldiez, Brian, University of Central Florida
- Abstract / Description
-
As the amount of computer-mediated information (e.g., emails, documents, multi-media) we need to process grows, our need to rapidly sort, organize, and store electronic information likewise increases. In order to store information effectively, we must find ways to sort through it and organize it in a manner that facilitates efficient retrieval. The instantaneous and emergent nature of communications across networks like Twitter makes them suitable for discussing events (e.g., natural disasters) that are amorphous and prone to rapid changes. It can be difficult for an individual human to filter through and organize the large amounts of information that can pass through these types of social networks when events are unfolding rapidly. A common feature of social networks is the images (e.g., human faces, inanimate objects) that are often used by those who send messages across these networks. Humans have a particularly strong ability to recognize and differentiate between human Faces. This effect may also extend to recalling information associated with each human Face. This study investigated the difference between human Face images, non-human Face images, and alphanumeric labels as retrieval cues under different levels of Task Load. Participants were required to recall key pieces of event information as they emerged from a Twitter-style message feed during a simulated natural disaster. A counter-balanced within-subjects design was used for this experiment. Participants were exposed to low, medium, and high Task Load while responding to five different types of recall cues: (1) Nickname, (2) Non-Face, (3) Non-Face & Nickname, (4) Face, and (5) Face & Nickname. The task required participants to organize information regarding emergencies (e.g., car accidents) from a Twitter-style message feed. The messages reported various events such as fires occurring around a fictional city.
Each message was associated with a different recall cue type, depending on the experimental condition. Following the task, participants were asked to recall the information associated with one of the cues they worked with during the task. Results indicate that under medium and high Task Load, both Non-Face and Face retrieval cues increased recall performance over Nickname alone, with Non-Faces resulting in the highest mean recall scores. When comparing medium to high Task Load, Face & Nickname and Non-Face significantly outperformed the Face condition, and performance in Non-Face & Nickname was significantly better than in Face & Nickname. No significant difference was found between Non-Face and Non-Face & Nickname. Subjective Task Load scores indicate that participants experienced lower mental workload when using Non-Face cues than when using Nickname or Face cues. Generally, these results indicate that under medium and high Task Load levels, images outperformed alphanumeric nicknames, Non-Face images outperformed Face images, and combining alphanumeric nicknames with images may have offered a significant performance advantage only when the image was that of a Face. Both theoretical and practical design implications are drawn from these findings.
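The abstract mentions a counter-balanced within-subjects design without specifying the scheme. A common way to counterbalance condition order, shown here as an assumed illustration (not necessarily what this study used), is a cyclic Latin square: each condition appears once per ordering and once per serial position, spreading practice and fatigue effects evenly across conditions.

```python
def latin_square(conditions):
    """Build a cyclic Latin square over the given conditions: row i
    is the presentation order assigned to participant group i, and
    every condition occupies each position exactly once overall."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

# Each participant group receives one row as its Task Load order.
orders = latin_square(["low", "medium", "high"])
```

A simple cyclic square does not balance first-order carryover effects (for that, a Williams design is the usual choice), but it is often sufficient when the main concern is position effects.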
- Date Issued
- 2014
- Identifier
- CFE0005318, ucf:50524
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005318