Current Search: human computer interaction
- Title
- NONINVASIVE PHYSIOLOGICAL MEASURES AND WORKLOAD TRANSITIONS: AN INVESTIGATION OF THRESHOLDS USING MULTIPLE SYNCHRONIZED SENSORS.
- Creator
- Sciarini, Lee, Nicholson, Denise, University of Central Florida
- Abstract / Description
- The purpose of this study is to determine under what conditions multiple minimally intrusive physiological sensors can be used together and validly applied in areas that rely on adaptive systems, including adaptive automation and augmented cognition. Specifically, this dissertation investigated the physiological transitions of operator state caused by changes in the level of taskload. Three questions were evaluated: (1) Do differences exist between physiological indicators when examined between levels of difficulty? (2) Are any differences in physiological indicators between difficulty levels affected by spatial ability? (3) Which physiological indicators, if any, account for variation in performance on a spatial task with varying difficulty levels? The Modular Cognitive State Gauge model was presented and used to determine which basic physiological sensors (EEG, ECG, EDR, and eye-tracking) could validly assess changes in the utilization of two-dimensional spatial resources required to perform a spatial-ability-dependent task. Thirty-six volunteers (20 female, 16 male) wore minimally invasive physiological sensing devices while executing a challenging computer-based puzzle task. Specifically, participants completed two measures of spatial ability, a training session, a practice session, and an experimental trial, and then completed a subjective workload survey. The results of this experiment confirmed that participants with low spatial ability reported higher subjective workload and performed more poorly than those with high spatial ability. Additionally, there were significant changes in a majority of the physiological indicators between the two difficulty levels and, most importantly, three measures (EEG, ECG, and eye-tracking) were shown to account for variability in performance on the spatial task.
- Date Issued
- 2009
- Identifier
- CFE0002781, ucf:48108
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002781
- Title
- THE EFFECTS OF MULTIMODAL FEEDBACK AND AGE ON A MOUSE POINTING TASK.
- Creator
- Oakley, Brian, Smither, Janan, University of Central Florida
- Abstract / Description
- As the beneficial aspects of computers become more apparent to the elderly population and the baby boom generation moves into later adulthood, there is an opportunity to increase performance for older computer users. Performance decrements that occur naturally in the motor skills of older adults have been shown to have a negative effect on interactions with indirect-manipulation devices such as computer mice (Murata & Iwase, 2005). Although a mouse will always have the traits of an indirect-manipulation interaction, the inclusion of additional sensory feedback likely increases the salience of the task relative to the real world, resulting in increased performance (Biocca et al., 2002). There is strong evidence for a bimodal advantage present in people of all ages; there is also very strong evidence that older adults are a group that uses extra sensory information to improve their everyday interactions with the environment (Cienkowski & Carney, 2002; Thompson & Malloy, 2004). This study examined the effects of multimodal feedback (i.e., visual cues, auditory cues, and tactile cues) during a target acquisition mouse task for young, middle-aged, and older experienced computer users. This research examined performance and subjective attitudes when performing a mouse-based pointing task with different combinations of the modalities present. The inclusion of audio or tactile cues during the task had the largest positive effect on performance, resulting in significantly quicker task completion for all of the computer users. The presence of audio or tactile cues increased performance for all of the age groups; however, the performance of the older adults tended to be positively influenced more than that of the other age groups by the inclusion of these modalities. Additionally, the presence of visual cues did not have as strong an effect on overall performance in comparison to the other modalities.
Although the presence of audio and tactile feedback both increased performance, there was evidence of a speed-accuracy trade-off. Both the audio and tactile conditions resulted in a significantly higher number of misses in comparison to having no additional cues or visual cues present. So, while the presence of audio and tactile feedback improved the speed at which the task could be completed, this came at a sacrifice in accuracy. Additionally, this study shows strong evidence that audio and tactile cues are undesirable to computer users. The findings of this research are important to consider before adding extra sensory modalities to any type of user interface. The idea that additional feedback is always better may not hold true if the feedback is found to be distracting, annoying, or harmful to accuracy, as was found in this study with audio and tactile cues.
- Date Issued
- 2009
- Identifier
- CFE0002692, ucf:48188
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002692
- Title
- EFFECT OF A HUMAN-TEACHER VS. A ROBOT-TEACHER ON HUMAN LEARNING: A PILOT STUDY.
- Creator
- Smith, Melissa, Sims, Valerie, University of Central Florida
- Abstract / Description
- Studies about the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has focused on methods that would allow robots to learn from humans, and very little has been done on how and what, if anything, humans could learn from programmed robots. A between-subjects experiment was conducted in which two groups were compared: a group in which the participants learned a simple pick-and-place block task via video of a human-teacher, and a group in which the participants learned the same pick-and-place block task via video of a robotic-teacher. After being taught the task, the participants performed a 15-minute distractor task and then were timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed that there was no significant difference in the rebuild scores of the two groups, but there was a marginally significant difference in the rebuild times of the two groups. Exit survey results, research implications, and future work are discussed.
- Date Issued
- 2011
- Identifier
- CFH0004068, ucf:44809
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004068
- Title
- ATTRIBUTIONS OF BLAME IN A HUMAN-ROBOT INTERACTION SCENARIO.
- Creator
- Scholcover, Federico, Sims, Valerie, University of Central Florida
- Abstract / Description
- This thesis worked toward answering the following question: where, if at all, do the beliefs and behaviors associated with interacting with a nonhuman agent deviate from how we treat a human? This was done by exploring the interrelated fields of Human-Computer and Human-Robot Interaction in the literature review, viewing them through the theoretical lens of anthropomorphism. A study was performed that examined how 104 participants attributed blame in a robotic surgery scenario, as detailed in a vignette. A majority of the results were statistically non-significant; however, some results emerged that may imply a diffusion of responsibility in human-robot collaboration scenarios.
- Date Issued
- 2014
- Identifier
- CFH0004587, ucf:45224
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004587
- Title
- AFFECTIVE DESIGN IN TECHNICAL COMMUNICATION.
- Creator
- Rosen, Michael, Kitalong, Karla, University of Central Florida
- Abstract / Description
- Traditional human-computer interaction (HCI) is based on 'cold' models of user cognition; that is, models of users as purely rational beings based on the information-processing metaphor. However, an emerging perspective suggests that for the field of HCI to mature, its practitioners must adopt models of users that consider broader human needs and capabilities. Affective design is an umbrella term for research and practice conducted in diverse domains, all with the common thread of integrating emotional aspects of use into the creation of information products. This thesis provides a review of the current state of the art in affective design research and practice for technical communicators and others involved in traditional HCI and usability enterprises. It is motivated by developing technologies and the growing complexity of interaction, which demand a more robust notion of HCI that incorporates affect into an augmented and holistic representation of the user and situated use.
- Date Issued
- 2005
- Identifier
- CFE0000590, ucf:46474
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000590
- Title
- MULTI-TOUCH FOR GENERAL-PURPOSE COMPUTING: AN EXAMINATION OF TEXT ENTRY.
- Creator
- Varcholik, Paul, Hughes, Charles, University of Central Florida
- Abstract / Description
- In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as "everyday" computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: "Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing?" We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform, across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich-text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms, and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design and rationale.
- Date Issued
- 2011
- Identifier
- CFE0003711, ucf:48798
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003711
- Title
- THE INTEGRATED USER EXPERIENCE EVALUATION MODEL: A SYSTEMATIC APPROACH TO INTEGRATING USER EXPERIENCE DATA SOURCES.
- Creator
- Champney, Roberto, Malone, Linda, University of Central Florida
- Abstract / Description
- Evaluating the user experience (UX) associated with product interaction is a challenge for current human-systems developers. This is largely due to a lack of theoretical guidance for directing how best to assess the UX and a paucity of tools to support such evaluation. This dissertation provided a framework and tools for guiding and supporting evaluation of the user experience. This doctoral research involved reviewing the literature on UX, using this knowledge first to build a theoretical model of the UX construct and then to develop a model for the evaluation of UX to aid evaluators (the integrated User eXperience EValuation model, or iUXEV), and empirically validating select components of the model through three case studies. The developed evaluation model was subjected to a three-phase validation process that included the development and application of different components of the model separately. The first case study focused on developing a tool and method for assessing the affective component of UX, which resulted in lessons learned for the integration of the tool and method into the iUXEV model. The second case study focused on integrating several tools that target different components of UX, which resulted in a better understanding of how the data could be utilized and identified the need for an integration method to bring the data together. The third case study focused on the application of the results of a usability evaluation in an organizational setting, which resulted in the identification of challenges and needs faced by practitioners. Taken together, this body of research, from the theoretically driven iUXEV model to the newly developed emotional assessment tool, extends the user experience / usability body of knowledge and state of practice for interaction design practitioners who are challenged with holistic user experience evaluations, thereby advancing the state of the art in UX design and evaluation.
- Date Issued
- 2009
- Identifier
- CFE0002761, ucf:48098
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002761
- Title
- Mediated Physicality: Inducing Illusory Physicality of Virtual Humans via Their Interactions with Physical Objects.
- Creator
- Lee, Myungho, Welch, Gregory, Wisniewski, Pamela, Hughes, Charles, Bruder, Gerd, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
- The term virtual human (VH) generally refers to a human-like entity composed of computer graphics and/or a physical body. In the associated research literature, a VH can be further classified as an avatar (a human-controlled VH) or an agent (a computer-controlled VH). Because of the resemblance to humans, people naturally distinguish VHs from non-human objects and often treat them in ways similar to real humans. Sometimes people develop a sense of co-presence or social presence with the VH, a phenomenon that is often exploited in training simulations where the VH assumes the role of a human. Prior research associated with VHs has primarily focused on the realism of various visual traits, e.g., appearance, shape, and gestures. However, our sense of the presence of other humans is also affected by other physical sensations conveyed through nearby space or physical objects. For example, we humans can perceive the presence of other individuals via the sound or tactile sensation of approaching footsteps, or by the presence of complementary or opposing forces when carrying a physical box with another person. In my research, I exploit the fact that these sensations, when correlated with events in the shared space, affect one's feeling of social/co-presence with another person. In this dissertation, I introduce novel methods for utilizing direct and indirect physical-virtual interactions with VHs to increase the sense of social/co-presence with the VHs, an approach I refer to as mediated physicality. I present results from controlled user studies, in various virtual environment settings, that support the idea that mediated physicality can increase a user's sense of social/co-presence with the VH and/or induce realistic social behavior. I discuss relationships to prior research, possible explanations for my findings, and areas for future research.
- Date Issued
- 2019
- Identifier
- CFE0007485, ucf:52687
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007485
- Title
- EFFECT OF OPERATOR CONTROL CONFIGURATION ON UNMANNED AERIAL SYSTEM TRAINABILITY.
- Creator
- Neumann, John, Kincaid, Peter, University of Central Florida
- Abstract / Description
- Unmanned aerial systems (UAS) carry no pilot on board, yet they still require live operators to handle critical functions such as mission planning and execution. Humans also interpret the sensor information provided by these platforms. This applies to all classes of unmanned aerial vehicles (UAVs), including the smaller portable systems used for gathering real-time reconnaissance during military operations in urban terrain. The need to quickly and reliably train soldiers to control small UAS operations demands that the human-system interface be intuitive and easy to master. In this study, participants completed a series of tests of spatial ability and were then trained (in simulation) to teleoperate a micro unmanned aerial vehicle equipped with forward and downward fixed cameras. Three aspects of the human-system interface were manipulated to assess the effects on manual control mastery and target detection. The first factor was the input device: participants used either a mouse or a specially programmed game controller (similar to that used with the Sony PlayStation 2 video game console). The second factor was the nature of the flight control displays, either continuous or discrete (analog vs. digital). The third factor involved the presentation of sensor imagery: the display could either provide streaming video from one camera at a time or present the imagery from both cameras simultaneously in separate windows. The primary dependent variables included (1) time to complete assigned missions, (2) number of collisions, (3) number of targets detected, and (4) operator workload. In general, operator performance was better with the game controller than with the mouse, but significant improvement in time to complete occurred over repeated trials regardless of the device used. Time to complete missions was significantly faster with the game controller, and operators also detected more targets without any significant differences in workload compared to mouse users.
Workload on repeated trials decreased with practice, and spatial ability was a significant covariate of workload; lower spatial ability was associated with higher workload scores. In addition, demographic data, including computer usage and video gaming experience, were collected, analyzed, and correlated with performance. Higher video gaming experience was also associated with lower workload.
- Date Issued
- 2006
- Identifier
- CFE0001496, ucf:47080
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001496
- Title
- PERFORMANCE SUPPORT AND USABILITY: AN EXPERIMENTAL STUDY OF ELECTRONIC PERFORMANCE SUPPORT INTERFACES.
- Creator
- Rawls, Charles, Hirumi, Atsusi, University of Central Florida
- Abstract / Description
- This study evaluated the usability of two types of performance-support interfaces that were designed using informational and experiential approaches. The experiment sought to determine whether there is a relationship between usability and the informational and experiential approaches. The population under study was undergraduate education majors at the University of Central Florida. From three instructor-led educational technology classes, 83 students were solicited to participate in the study by completing a class activity; a total of 63 students participated. Each participant completed a task and a questionnaire. Students were predominantly English-speaking Caucasian female education majors between the ages of 19 and 20; most were sophomores or juniors working part time. They possessed moderately low to high computer skills, and most considered themselves to have intermediate or expert Internet skills. An experimental posttest-only comparison-group research design was used to test the hypotheses posited for this study. The participants were randomly assigned to either the informational interface group (X1) or the experiential interface group (X2), and the experiment was conducted electronically via a Web-based Content Management System (CMS). The observed data consisted of five outcome measures: efficiency, errors, intuitiveness, satisfaction, and student performance. Two instruments (a checklist and an online usability questionnaire) were used to measure these five dependent variables. The CMS was used as the vehicle to distribute and randomize the two interfaces, obtain informed consent, distribute the instructions, distribute the online questionnaire, and collect data.
First, a checklist was used to assess the students' performance on their task, a copyright request letter. The checklist was designed as a performance criterion tool for the researcher, the instructor, and the participants. The researcher and instructor constructed the checklist to grade copyright request letters and determine students' performance; the participants could use the checklist as a performance criterion when creating the task document. The checklist consisted of ten basic yet critical sections of a successful copyright request letter. Second, an online usability questionnaire was constructed, based on the Purdue Usability Testing Questionnaire (PUTQ), to measure interface efficiency, intuitiveness, errors, and satisfaction. While the PUTQ items have been deemed important for testing the usability of a particular system, for the purposes of this study items were modified, deleted, and added, following a test blueprint, to ensure content validity. The new survey, the University of Central Florida Usability Questionnaire (UCFUQ), consisting of 20 items, was implemented in a pilot study to ensure reliability and content validity. The pilot study of the instrument yielded a reliability coefficient of .9450, and the final online usability instrument yielded a reliability coefficient of .9321. This study tested two approaches to user interface design for Electronic Performance Support (EPS) using two HTML interface templates and the information from an existing training module. There were two interventions consisting of two interface types: informational and experiential. The SPSS Graduate Pack 10.0 for Windows was used for data analysis and statistical reporting. A t test was conducted to determine whether a difference existed between the two interface means.
ANOVA was conducted to determine whether there was an interaction between the interface group means and the demographic data across the five dependent variables. Results indicated that students at the University of Central Florida reported no differences between the two interface types. It had been postulated that the informational interface would yield a higher mean score because of its implementation of HCI guidelines, conventions, and standards; however, it was concluded that the informational interface may not be a more usable interface, and users may be as inclined to use the experiential interface as the informational one.
- Date Issued
- 2005
- Identifier
- CFE0000807, ucf:46678
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000807
- Title
- Multi-Modal Interfaces for Sensemaking of Graph-Connected Datasets.
- Creator
- Wehrer, Anthony, Hughes, Charles, Wisniewski, Pamela, Pattanaik, Sumanta, Specht, Chelsea, Lisle, Curtis, University of Central Florida
- Abstract / Description
- Hypothesized evolutionary processes are often visualized through phylogenetic trees. Given evolutionary data presented in one of several widely accepted formats, software exists to render these data into a tree diagram. However, software packages commonly in use by biologists today often do not provide means to dynamically adjust and customize these diagrams for studying new hypothetical relationships or for illustration and publication purposes. Even where these options are available, there can be a lack of intuitiveness and ease of use. The goal of our research is thus to investigate more natural and effective means of sensemaking of the data with different user input modalities. To this end, we experimented with different input modalities, designing and running a series of prototype studies, ultimately focusing our attention on pen-and-touch. Through several iterations of feedback and revision, provided with the help of biology experts and students, we developed a pen-and-touch phylogenetic tree browsing and editing application called PhyloPen. This application expands on the capabilities of existing software with visualization techniques such as overview+detail, linked data views, and new interaction and manipulation techniques using pen-and-touch. To determine its impact on phylogenetic tree sensemaking, we conducted a within-subject comparative summative study against the most comparable and commonly used state-of-the-art mouse-based software system, Mesquite. Conducted with biology majors at the University of Central Florida, the study had each participant use both software systems on a set number of exercise tasks of the same type. Measured on several dependent variables, the results show that PhyloPen was significantly better in terms of usefulness, satisfaction, ease of learning, ease of use, and cognitive load, and roughly equivalent in completion time.
These results support an interaction paradigm that is superior to classic mouse-based interaction and that has the potential to be applied to other communities that employ graph-based representations of their problem domains.
- Date Issued
- 2019
- Identifier
- CFE0007872, ucf:52788
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007872
- Title
- The WOZ Recognizer: A Tool For Understanding User Perceptions of Sketch-Based Interfaces.
- Creator
-
Bott, Jared, Laviola II, Joseph, Hughes, Charles, Foroosh, Hassan, Lank, Edward, University of Central Florida
- Abstract / Description
-
Sketch recognition has the potential to be an important input method for computers in the coming years; however, designing and building an accurate and sophisticated sketch recognition system is a time-consuming and daunting task. Since sketch recognition is still at a level where mistakes are common, it is important to understand how users perceive and tolerate recognition errors and other user interface elements of these imperfect systems. A problem in performing this type of research is that we cannot easily control aspects of recognition in order to rigorously study the systems. We performed a study examining user perceptions of three pen-based systems for creating logic gate diagrams: a sketch-based interface, a WIMP-based interface, and a hybrid interface that combined elements of sketching and WIMP. We found that users preferred the sketch-based interface, and we identified important criteria for pen-based application design. This work exposed the issue of studying recognition systems without fine-grained control over accuracy, recognition mode, and other recognizer properties. In order to solve this problem, we developed a Wizard of Oz sketch recognition tool, the WOZ Recognizer, that supports controlled symbol and position accuracy and batch and streaming recognition modes for a variety of sketching domains. We present the design of the WOZ Recognizer, modeling recognition domains using graphs, symbol alphabets, and grammars, and discuss the types of recognition errors we included in its design. Further, we discuss how the WOZ Recognizer simulates sketch recognition, how it is controlled, and how users interact with it. In addition, we present an evaluative user study of the WOZ Recognizer and the lessons we learned. We have used the WOZ Recognizer to perform two user studies examining user perceptions of sketch recognition; both studies focused on mathematical sketching.
In the first study, we examined whether users prefer recognition feedback now (real-time recognition) or later (batch recognition) in relation to different recognition accuracies and sketch complexities. We found that participants displayed a preference for real-time recognition in some situations (multiple expressions, low accuracy) but no statistical preference in others. In our second study, we examined whether users displayed a greater tolerance for recognition errors when they used mathematical sketching applications they found interesting or useful compared to applications they found less interesting. Participants felt they had a greater tolerance for the applications they preferred, although our statistical analysis did not positively support this. In addition to the research already performed, we propose several avenues for future research into user perceptions of sketch recognition that we believe will be of value to sketch recognizer researchers and application designers.
- Date Issued
- 2016
- Identifier
- CFE0006077, ucf:50945
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006077
- Title
- Bridging the Gap Between Fun and Fitness: Instructional Techniques and Real-World Applications for Full-Body Dance Games.
- Creator
-
Charbonneau, Emiko, Laviola II, Joseph, Hughes, Charles, Tappen, Marshall, Angelopoulos, Theodore, Mueller, Florian, University of Central Florida
- Abstract / Description
-
Full-body controlled games offer the opportunity for not only entertainment but also education and exercise. Refined gameplay mechanics and content can boost intrinsic motivation and keep people playing over a long period of time, which is desirable for individuals who struggle with maintaining a regular exercise program. Within this gameplay genre, dance rhythm games have proven to be popular with game console owners. Yet, while other types of games utilize story mechanics that keep players engaged for dozens of hours, motion-controlled dance games are just beginning to incorporate these elements. In addition, this control scheme is still young, having become commercially available only in the last few years. Instructional displays and clear real-time feedback remain difficult challenges. This thesis investigates the potential for full-body dance games to be used as tools for entertainment, education, and fitness. We built several game prototypes to investigate visual, aural, and tactile methods for instruction and feedback. We also evaluated the fitness potential of the game Dance Central 2, both by itself and with extra game content that unlocked based on performance. Significant contributions include a framework for running a longitudinal video game study, results indicating high engagement with some fitness potential, and an informed discussion of how dance games could make exertion a more enjoyable experience.
- Date Issued
- 2013
- Identifier
- CFE0004829, ucf:49690
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004829
- Title
- OPTIMIZING THE DESIGN OF MULTIMODAL USER INTERFACES.
- Creator
-
Reeves, Leah, Stanney, Kay, University of Central Florida
- Abstract / Description
-
Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources.
These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.
- Date Issued
- 2007
- Identifier
- CFE0001636, ucf:47237
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001636
- Title
- The User Experience of Disney Infinity: Do Smart Toys Matter?.
- Creator
-
Welch, Shelly, Smith, Peter, McDaniel, Rudy, Vie, Stephanie, University of Central Florida
- Abstract / Description
-
This study investigated what factors come into play in the user experience of the commercial video game Disney Infinity (2.0 Edition), and sought to determine whether the game's unique combination of sandbox and smart-toy-based gameplay offers an additional level of immersion. The study analyzed the effect of Disney Infinity (2.0 Edition) on immersion using a Game Immersion Questionnaire modified to capture play preference as well as video game experience. The study analyzed 48 users playing in "Toy Box" mode both with and without the associated smart toys, or Disney characters. Results show that while there was no significant difference in immersion for either group, nor were there any significant correlations between variables, there was a preference in both groups for playing the game with the associated smart toys. Recommendations were made for continued research building on modifications to this study, as well as future research exploring the potential for smart toys in other areas.
- Date Issued
- 2015
- Identifier
- CFE0005904, ucf:50890
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005904
- Title
- Load Estimation, Structural Identification and Human Comfort Assessment of Flexible Structures.
- Creator
-
Celik, Ozan, Catbas, Necati, Yun, Hae-Bum, Makris, Nicos, Kauffman, Jeffrey L., University of Central Florida
- Abstract / Description
-
Stadiums, pedestrian bridges, dance floors, and concert halls are distinct from other civil engineering structures due to several challenges in their design and dynamic behavior. These challenges originate from the inherently flexible nature of these structures coupled with human interactions in the form of loading. The investigations in past literature on this topic clearly state that the design of flexible structures can be improved with better load modeling strategies acquired through reliable load quantification, a deeper understanding of structural response, the generation of simple and efficient human-structure interaction models, and new measurement and assessment criteria for acceptable vibration levels. In contribution to these possible improvements, this dissertation taps into three specific areas: the load quantification of lively individuals or crowds, structural identification under non-stationary and narrowband disturbances, and the measurement of excessive vibration levels for human comfort. For load quantification, a computer vision based approach capable of tracking both individual and crowd motion is used. For structural identification, a noise-assisted Multivariate Empirical Mode Decomposition (MEMD) algorithm is incorporated into the operational modal analysis. The measurement of excessive vibration levels and the assessment of human comfort are accomplished through computer vision based human and object tracking, which provides a more convenient means for measurement and computation. All the proposed methods are tested in the laboratory environment utilizing a grandstand simulator and in the field on a pedestrian bridge and on a football stadium. Findings and interpretations from the experimental results are presented. The dissertation concludes by highlighting the critical findings and possible future work.
- Date Issued
- 2017
- Identifier
- CFE0006863, ucf:51752
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006863
- Title
- Facilitating Information Retrieval in Social Media User Interfaces.
- Creator
-
Costello, Anthony, Tang, Yubo, Fiore, Stephen, Goldiez, Brian, University of Central Florida
- Abstract / Description
-
As the amount of computer-mediated information (e.g., emails, documents, multimedia) we need to process grows, our need to rapidly sort, organize, and store electronic information likewise increases. In order to store information effectively, we must find ways to sort through it and organize it in a manner that facilitates efficient retrieval. The instantaneous and emergent nature of communications across networks like Twitter makes them suitable for discussing events (e.g., natural disasters) that are amorphous and prone to rapid changes. It can be difficult for an individual human to filter through and organize the large amounts of information that can pass through these types of social networks when events are unfolding rapidly. A common feature of social networks is the images (e.g., human faces, inanimate objects) that are often used by those who send messages across these networks. Humans have a particularly strong ability to recognize and differentiate between human faces, and this effect may also extend to recalling information associated with each face. This study investigated the difference between human Face images, non-human Face images, and alphanumeric labels as retrieval cues under different levels of Task Load. Participants were required to recall key pieces of event information as they emerged from a Twitter-style message feed during a simulated natural disaster. A counterbalanced within-subjects design was used for this experiment. Participants were exposed to low, medium, and high Task Load while responding to five different types of recall cues: (1) Nickname, (2) Non-Face, (3) Non-Face & Nickname, (4) Face, and (5) Face & Nickname. The task required participants to organize information regarding emergencies (e.g., car accidents) from a Twitter-style message feed. The messages reported various events such as fires occurring around a fictional city.
Each message was associated with a different recall cue type, depending on the experimental condition. Following the task, participants were asked to recall the information associated with one of the cues they worked with during the task. Results indicate that under medium and high Task Load, both Non-Face and Face retrieval cues increased recall performance over Nickname alone, with Non-Face cues producing the highest mean recall scores. When comparing medium to high Task Load, Face & Nickname and Non-Face significantly outperformed the Face condition, and performance in Non-Face & Nickname was significantly better than in Face & Nickname. No significant difference was found between Non-Face and Non-Face & Nickname. Subjective Task Load scores indicate that participants experienced lower mental workload when using Non-Face cues than when using Nickname or Face cues. Generally, these results indicate that under medium and high Task Load levels, images outperformed alphanumeric nicknames, Non-Face images outperformed Face images, and combining alphanumeric nicknames with images may have offered a significant performance advantage only when the image was that of a Face. Both theoretical and practical design implications follow from these findings.
- Date Issued
- 2014
- Identifier
- CFE0005318, ucf:50524
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005318
- Title
- EPISODIC MEMORY MODEL FOR EMBODIED CONVERSATIONAL AGENTS.
- Creator
-
Elvir, Miguel, Gonzalez, Avelino, University of Central Florida
- Abstract / Description
-
Embodied Conversational Agents (ECAs) form part of a range of virtual characters whose intended purposes include engaging in natural conversations with human users. While the literature is rife with descriptions of attempts at producing viable ECA architectures, few authors have addressed the role of episodic memory models in conversational agents. This form of memory, which provides a sense of autobiographic record-keeping in humans, has only recently been peripherally integrated into dialog management tools for ECAs. In our work, we propose to take a closer look at the shared characteristics of episodic memory models in recent examples from the field. Additionally, we propose several enhancements to these existing models through a unified episodic memory model for ECAs. As part of our research into episodic memory models, we present a process for determining the prevalent contexts in the conversations obtained from the aforementioned interactions. The process presented demonstrates the use of statistical and machine learning services, as well as Natural Language Processing techniques, to extract relevant snippets from conversations. Finally, mechanisms to store, retrieve, and recall episodes from previous conversations are discussed. A primary contribution of this research is in the context of contemporary memory models for conversational agents and cognitive architectures; to the best of our knowledge, this is the first attempt at providing a comparative summary of existing works. As implementations of ECAs become more complex and encompass more realistic conversation engines, we expect that episodic memory models will continue to evolve and further enhance the naturalness of conversations.
- Date Issued
- 2010
- Identifier
- CFE0003353, ucf:48443
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003353
- Title
- TOWARD BUILDING A SOCIAL ROBOT WITH AN EMOTION-BASED INTERNAL CONTROL AND EXTERNAL COMMUNICATION TO ENHANCE HUMAN-ROBOT INTERACTION.
- Creator
-
Marpaung, Andreas, Lisetti, Christine, University of Central Florida
- Abstract / Description
-
In this thesis, we aim at modeling some aspects of the functional role of emotions in an autonomous embodied agent. We begin by describing our robotic prototype, Cherry--a robot with the task of being a tour guide and an office assistant for the Computer Science Department at the University of Central Florida. Cherry did not have a formal emotion representation of internal states, but did have the ability to express emotions through her multimodal interface. The thesis presents the results of a survey we performed via our social informatics approach, where we found that: (1) the idea of having emotions in a robot was warmly accepted by Cherry's users, and (2) the intended users were pleased with our initial interface design and functionalities. Guided by these results, we transferred our previous code to a human-height and more robust robot--Petra, the PeopleBot--where we began to build a formal emotion mechanism and representation for internal states to correspond to the external expressions of Cherry's interface. We describe our overall three-layered architecture and propose the design of the sensory-motor level (the first layer of the three-layered architecture), inspired by the Multilevel Process Theory of Emotion on one hand and hybrid robotic architectures on the other. The sensory-motor level receives and processes incoming stimuli with fuzzy logic and produces emotion-like states without any further willful planning or learning. We discuss how Petra has been equipped with sonar and vision for obstacle avoidance, as well as vision for face recognition, which are used when she roams around the hallway to engage in social interactions with humans. We hope that the sensory-motor level in Petra can serve as a foundation for further work in modeling the three-layered architecture of the Emotion State Generator.
- Date Issued
- 2004
- Identifier
- CFE0000286, ucf:46228
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000286