Current Search: Hughes, Charles
- Title
- A SPARSE PROGRAM DEPENDENCE GRAPH FOR OBJECT ORIENTED PROGRAMMING LANGUAGES.
- Creator
-
Garfield, Keith, Hughes, Charles, University of Central Florida
- Abstract / Description
-
The Program Dependence Graph (PDG) has achieved widespread acceptance as a useful tool for software engineering, program analysis, and automated compiler optimizations. This thesis presents the Sparse Object Oriented Program Dependence Graph (SOOPDG), a formalism that contains elements of traditional PDGs adapted to compactly represent programs written in object-oriented languages such as Java. This formalism is called sparse because, in contrast to other OO and Java-specific adaptations of PDGs, it introduces few node types and no new edge types beyond those used in traditional dependence-based representations. This results in correct program representations using smaller graph structures and simpler semantics when compared to other OO formalisms. We introduce the Single Flow to Use (SFU) property, which requires that exactly one definition of each variable be available for each use. We demonstrate that the SOOPDG, with its support for the SFU property coupled with a higher-order rewriting semantics, is sufficient to represent static Java-like programs and dynamic program behavior. We present algorithms for creating SOOPDG representations from program text, and describe graph rewriting semantics. We also present algorithms for common static analysis techniques such as program slicing, inheritance analysis, and call chain analysis. We contrast the SOOPDG with two previously published OO graph structures, the Java System Dependence Graph and the Java Software Dependence Graph. The SOOPDG results in comparatively smaller static representations of programs, cleaner graph semantics, and potentially more accurate program analysis. Finally, we introduce the Simulation Dependence Graph (SDG). The SDG is a related representation developed specifically to represent simulation systems, but is extensible to more general component-based software design paradigms. The SDG allows formal reasoning about issues such as component composition, a property critical to the creation and analysis of complex simulation systems and component-based design systems.
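Program slicing over a dependence graph, mentioned in the abstract above, amounts to reverse reachability: a backward slice collects every node the slicing criterion transitively depends on. A minimal illustrative sketch (not code from the thesis; node names and the edge encoding are hypothetical):

```python
def backward_slice(deps, criterion):
    """Backward program slice as reverse reachability over a dependence
    graph. deps maps a node to the nodes it depends on (data or control
    dependences); the slice is everything reachable from the criterion."""
    slice_set = set()
    stack = [criterion]
    while stack:
        node = stack.pop()
        if node in slice_set:
            continue
        slice_set.add(node)
        stack.extend(deps.get(node, []))
    return slice_set

# Tiny example: statement s4 uses values computed by s2 and s3; s2 uses s1.
deps = {"s4": ["s2", "s3"], "s2": ["s1"], "s3": [], "s1": []}
print(sorted(backward_slice(deps, "s4")))  # ['s1', 's2', 's3', 's4']
```

The same traversal run forward over the reversed edge map yields a forward slice.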
- Date Issued
- 2006
- Identifier
- CFE0001499, ucf:47077
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001499
- Title
- AUGMENTATION IN VISUAL REALITY (AVR).
- Creator
-
Zhang, Yunjun, Hughes, Charles, University of Central Florida
- Abstract / Description
-
Human eyes, as the organs for sensing light and processing visual information, enable us to see the real world. Though invaluable, they give us no way to "edit" the received visual stream or to "switch" to a different channel. The invention of motion pictures and computer technologies in the last century enables us to add an extra layer of modifications between the real world and our eyes. There are two major approaches to modification that we consider here: offline augmentation and online augmentation. The movie industry has pushed offline augmentation to an extreme level; audiences can experience visual surprises that they have never seen in their real lives, even though it may take a few months or years to produce the special visual effects. Online augmentation, on the other hand, requires that modifications be performed in real time. This dissertation addresses problems in both offline and online augmentation. The first offline problem addressed here is the generation of plausible video sequences after removing relatively large objects from the original videos. In order to maintain temporal coherence among the frames, a motion layer segmentation method is applied. From this, a set of synthesized layers is generated by applying motion compensation and a region completion algorithm. Finally, a plausibly realistic new video, in which the selected object is removed, is rendered given the synthesized layers and the motion parameters. The second problem we address is the construction of a blue screen key for video synthesis or blending in Mixed Reality (MR) applications. As a well-researched area, blue screen keying extracts a range of colors, typically in the blue spectrum, from a captured video sequence to enable the compositing of multiple image sources. Under ideal conditions with uniform lighting and background color, a high-quality key can be generated by commercial products, even in real time. However, a Mixed Reality application typically involves a head-mounted display (HMD) with poor camera quality. This in turn requires the keying algorithm to be robust in the presence of noise. We use a three-stage keying algorithm to reduce the noise in the key output. First, a standard blue screen keying algorithm is applied to the input to obtain a noisy key; second, the image gradient information and the corresponding region are compared with the result of the first step to remove noise in the blue screen area; and finally, a matting approach is applied to the boundary of the key to improve its quality. Another offline problem we address in this dissertation is the acquisition of the correct transformations between the different coordinate frames in a Mixed Reality (MR) application. Typically, an MR system includes at least one tracking system; the 3D coordinate frames that must be considered therefore include the cameras, the tracker, the tracker system, and the world. Accurately deriving the transformation between the head-mounted display camera and the affixed 6-DOF tracker is critical for mixed reality applications. This transformation brings the HMD cameras into the tracking coordinate frame, which in turn overlaps with a virtual coordinate frame to create a plausible mixed visual experience. We apply a non-linear optimization method to recover the camera-tracker transformation by minimizing image reprojection error. For online applications, we address the problem of extending the luminance range in mixed reality environments. We achieve this by introducing Enhanced Dynamic Range Video, a technique based on differing brightness settings for each eye of a video see-through head-mounted display (HMD). We first construct a Video-Driven Time-Stamped Ball Cloud (VDTSBC), which serves as a guideline and a means to store temporal color information for stereo image registration. With the assistance of the VDTSBC, we register each pair of stereo images, taking into account confounding issues of occlusion occurring in one eye but not the other. Finally, we apply luminance enhancement to the registered image pairs to generate an Enhanced Dynamic Range Video.
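The first stage of a blue screen keying pipeline like the one described above can be as simple as classifying each pixel by blue dominance. This is a generic illustrative sketch, not the thesis's algorithm; the threshold and pixel encoding are assumptions:

```python
def chroma_key(pixels, blue_margin=40):
    """Naive first-pass blue screen key: 1 = foreground, 0 = background.
    pixels is a list of (r, g, b) tuples; a pixel is called background
    when its blue channel exceeds both red and green by blue_margin.
    Later stages (gradient comparison, boundary matting) would refine
    this noisy key."""
    key = []
    for r, g, b in pixels:
        background = b - max(r, g) > blue_margin
        key.append(0 if background else 1)
    return key

frame = [(200, 40, 60), (10, 20, 250), (90, 90, 100)]
print(chroma_key(frame))  # [1, 0, 1]
```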
- Date Issued
- 2007
- Identifier
- CFE0001757, ucf:47285
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001757
- Title
- CONCEPT LEARNING BY EXAMPLE DECOMPOSITION.
- Creator
-
Joshi, Sameer, Hughes, Charles, University of Central Florida
- Abstract / Description
-
For efficient understanding and prediction in natural systems, even in artificially closed ones, we usually need to consider a number of factors that may combine in simple or complex ways. Additionally, many modern scientific disciplines face increasingly large datasets from which to extract knowledge (for example, genomics). Thus, to learn all but the most trivial regularities in the natural world, we rely on different ways of simplifying the learning problem. One simplifying technique that is highly pervasive in nature is to break down a large learning problem into smaller ones; to learn the smaller, more manageable problems; and then to recombine them to obtain the larger picture. It is widely accepted in machine learning that it is easier to learn several smaller decomposed concepts than a single large one. Though many machine learning methods exploit it, the process of decomposing a learning problem has not been studied adequately from a theoretical perspective. Typically, such decomposition of concepts is achieved in highly constrained environments, or aided by human experts. In this work, we investigate concept learning by example decomposition in a general probably approximately correct (PAC) setting for Boolean learning. We develop sample complexity bounds for the different steps involved in the process. We formally show that if the cost of example partitioning is kept low, then it is highly advantageous to learn by example decomposition. To demonstrate the efficacy of this framework, we interpret the theory in the context of feature extraction. We discover that many vague concepts in feature extraction, starting with what exactly a feature is, can be formalized unambiguously by this new theory. We analyze some existing feature learning algorithms in light of this theory, and finally demonstrate its constructive nature by generating a new learning algorithm from theoretical results.
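To see why decomposition can pay off, consider the standard PAC sample-complexity bound for a finite hypothesis class and a consistent learner, m ≥ (1/ε)(ln|H| + ln(1/δ)): since the bound grows with ln|H|, splitting one huge class into several exponentially smaller ones shrinks each sub-problem's bound. This is a textbook illustration of the flavor of argument, not the thesis's own bounds:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Sample complexity bound for a finite hypothesis class with a
    consistent learner: m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# One monolithic class of 2**20 hypotheses vs. one of the 2**10-sized
# sub-classes it might decompose into (epsilon = 0.1, delta = 0.05).
whole = pac_sample_bound(2 ** 20, 0.1, 0.05)
part = pac_sample_bound(2 ** 10, 0.1, 0.05)
print(whole, part)
```

Each sub-problem needs far fewer examples than the monolithic problem; whether the total cost drops depends on the cost of partitioning the examples, which is exactly what the thesis analyzes.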
- Date Issued
- 2009
- Identifier
- CFE0002504, ucf:47694
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002504
- Title
- DYNAMIC SHARED STATE MAINTENANCE IN DISTRIBUTED VIRTUAL ENVIRONMENTS.
- Creator
-
Hamza-Lup, Felix George, Hughes, Charles, University of Central Florida
- Abstract / Description
-
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE, the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual components. One of the challenges in such environments is maintaining a consistent view of the dynamic shared state in the presence of inevitable network latency and jitter. A consistent view of the shared scene will significantly increase the sense of presence among participants and facilitate their interactive collaboration. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. A review of the literature illustrates that the techniques for consistency maintenance in distributed Virtual Reality (VR) environments can be roughly grouped into three categories: centralized information management, prediction through dead reckoning algorithms, and frequent state regeneration. Additional resource management methods can be applied across these techniques to improve shared state consistency. Some of these techniques are related to the system's infrastructure; others are related to the human nature of the participants (e.g., human perceptual limitations, area of interest management, and visual and temporal perception). An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant's interaction into the loop through a wide range of electronic motion sensors and haptic devices. Part of the work presented here defines a novel criterion for the categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently, the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory, including 3D visualization applications using custom-built head-mounted displays (HMDs) with optical motion tracking, and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be kept consistent at multiple remotely located sites. In further consideration of the latency problems, and in light of current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system's real-time behavior and scalability.
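Of the three consistency-maintenance categories named above, dead reckoning is the easiest to sketch: each remote site extrapolates an entity's state from its last received update instead of waiting for the next one. A minimal first-order sketch (generic technique, not the thesis's adaptive algorithm):

```python
def dead_reckon(position, velocity, dt):
    """First-order dead reckoning: extrapolate an entity's position from
    its last known position and velocity, so remote sites can keep
    rendering plausible motion between state updates."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Last update placed the entity at (1.0, 2.0, 0.0), moving at
# (0.5, 0.0, -0.1) units/s; predict where it is 0.2 s later.
predicted = dead_reckon((1.0, 2.0, 0.0), (0.5, 0.0, -0.1), 0.2)
print(predicted)
```

When a fresh update arrives, the site snaps (or smoothly converges) the predicted state to the authoritative one; the prediction error grows with latency, which is why an adaptive synchronization scheme matters.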
- Date Issued
- 2004
- Identifier
- CFE0000096, ucf:46152
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000096
- Title
- APPEARANCE-DRIVEN MATERIAL DESIGN.
- Creator
-
Colbert, Mark, Hughes, Charles, University of Central Florida
- Abstract / Description
-
In the computer graphics production environment, artists often must tweak specific lighting and material parameters to match a mind's eye vision of the appearance of a 3D scene. However, the interaction between a material and a lighting environment is often too complex to cognitively predict without visualization. Therefore, artists operate in a design cycle, where they tweak the parameters, wait for a visualization, and repeat, seeking to obtain a desired look. We propose the use of appearance-driven material design. Here, artists directly design the appearance of reflected light for a specific view, surface point, and time. In this thesis, we discuss several methods for appearance-driven design with homogeneous materials, spatially-varying materials, and appearance-matching materials, where each uses a unique modeling and optimization paradigm. Moreover, we present a novel treatment of the illumination integral using sampling theory that can utilize the computational power of the graphics processing unit (GPU) to provide real-time visualization of the appearance of various materials illuminated by complex environment lighting. As a system, the modeling, optimization and rendering steps all operate on arbitrary geometry and in detailed lighting environments, while still providing instant feedback to the designer. Thus, our approach allows materials to play an active role in the process of set design and story-telling, a capability that was, until now, difficult to achieve due to the unavailability of interactive tools appropriate for artists.
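The "illumination integral" mentioned above is typically evaluated by sampling: rather than integrating incoming light analytically, one averages the integrand at (pseudo-)random sample points. A one-dimensional toy version of that idea, purely illustrative and unrelated to the thesis's actual GPU formulation:

```python
import random

def mc_estimate(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1]: average
    the integrand at n uniform random sample points. The same principle
    underlies sampled evaluation of the illumination integral, where the
    domain is the hemisphere of incoming light directions."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

est = mc_estimate(lambda x: x * x, 10000)
print(est)  # close to the exact value 1/3
```

Real renderers refine this with importance sampling so that samples concentrate where the product of lighting and material reflectance is large.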
- Date Issued
- 2008
- Identifier
- CFE0002217, ucf:47913
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002217
- Title
- MULTI-TOUCH FOR GENERAL-PURPOSE COMPUTING: AN EXAMINATION OF TEXT ENTRY.
- Creator
-
Varcholik, Paul, Hughes, Charles, University of Central Florida
- Abstract / Description
-
In recent years, multi-touch has been heralded as a revolution in human-computer interaction. Multi-touch provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as "everyday" computer interaction devices; that is, multi-touch has not been applied to general-purpose computing. The questions this thesis seeks to address are: Will the general public adopt these systems as their chief interaction paradigm? Can multi-touch provide such a compelling platform that it displaces the desktop mouse and keyboard? Is multi-touch truly the next revolution in human-computer interaction? As a first step toward answering these questions, we observe that general-purpose computing relies on text input, and ask: "Can multi-touch, without a text entry peripheral, provide a platform for efficient text entry? And, by extension, is such a platform viable for general-purpose computing?" We investigate these questions through four user studies that collected objective and subjective data for text entry and word processing tasks. The first of these studies establishes a benchmark for text entry performance on a multi-touch platform, across a variety of input modes. The second study attempts to improve this performance by examining an alternate input technique. The third and fourth studies include mouse-style interaction for formatting rich-text on a multi-touch platform, in the context of a word processing task. These studies establish a foundation for future efforts in general-purpose computing on a multi-touch platform. Furthermore, this work details deficiencies in tactile feedback with modern multi-touch platforms, and describes an exploration of audible feedback. Finally, the thesis conveys a vision for a general-purpose multi-touch platform, its design and rationale.
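Text entry benchmarks like those described above conventionally report throughput in words per minute, where one "word" is defined as five characters including spaces. A sketch of that standard metric (the convention is standard in the text-entry literature; this code is not from the thesis):

```python
def words_per_minute(transcribed, seconds):
    """Standard text-entry throughput metric: one 'word' is five
    characters, including spaces, so WPM = (chars / 5) / minutes."""
    return (len(transcribed) / 5) / (seconds / 60)

# 19 characters transcribed in 12 seconds.
print(words_per_minute("the quick brown fox", 12.0))
```

Studies usually pair this with an error-rate metric, since speed alone rewards careless typing.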
- Date Issued
- 2011
- Identifier
- CFE0003711, ucf:48798
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003711
- Title
- An analytical model for evaluating database update schemes.
- Creator
-
Kinsley, Kathryn C., Hughes, Charles E., Arts and Sciences
- Abstract / Description
-
University of Central Florida College of Arts and Sciences Thesis; A methodology is presented for evaluating the performance of database update schemes. The methodology uses the M/Hr/1 queueing model as a basis for this analysis and makes use of the history of how data is used in the database. Parameters have been introduced which can be set based on the characteristics of a specific system. These include the update-to-retrieval ratio, average file size, overhead, block size, and the expected number of items in the database. The analysis is specifically directed toward the support of derived data within the relational model. Three support methods are analyzed. These are first examined in a central database system. The analysis is then extended in order to measure performance in a distributed system. Because concurrency is a major problem in a distributed system, the support of derived data is analyzed with respect to three distributed concurrency control techniques: master/slave, distributed, and synchronized. In addition to its use as a performance predictor, the development of the methodology serves to demonstrate how queueing theory may be used to investigate other related database problems. This is an important benefit, given the lack of fundamental results in the area of using queueing theory to analyze database performance.
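The flavor of queueing analysis used above is easiest to see in the simpler M/M/1 model (exponential rather than hyperexponential service, as in the thesis's M/Hr/1): closed-form formulas relate arrival and service rates to utilization and response time. A sketch of those classic results, for illustration only:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 results: utilization rho, mean number in system
    L = rho / (1 - rho), and mean response time W = 1 / (mu - lambda).
    Requires arrival_rate < service_rate for the queue to be stable."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable queue: arrival rate >= service rate")
    mean_in_system = rho / (1 - rho)
    mean_response = 1 / (service_rate - arrival_rate)
    return rho, mean_in_system, mean_response

# 8 requests/s arriving at a server that completes 10 requests/s.
rho, L, W = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
print(rho, L, W)  # approximately 0.8, 4.0, 0.5
```

An update-scheme analysis would plug scheme-specific service-time distributions (hence M/Hr/1) and the update-to-retrieval ratio into models of this kind.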
- Date Issued
- 1983
- Identifier
- CFR0011600, ucf:53041
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFR0011600
- Title
- Realtime Editing in Virtual Reality for Room Scale Scans.
- Creator
-
Greenwood, Charles, Laviola II, Joseph, Hughes, Charles, Heinrich, Mark, University of Central Florida
- Abstract / Description
-
This work presents a system for the design and implementation of tools that support the editing of room-scale scans within a virtual reality environment, in real time. The moniker REVRRSS ("reverse") thus stands for Real-time Editing (in) Virtual Reality (of) Room Scale Scans. The tools were evaluated for usefulness based upon whether they meet the criterion of real-time usability. Users evaluated the editing experience with a traditional keyboard-video-mouse setup compared to a head-mounted display and hand-held controllers for Virtual Reality. Results show that users prefer the VR approach. The quality of the finished product when using VR is comparable to that of traditional desktop controls. The architecture developed here can be adapted to innumerable future projects and tools.
- Date Issued
- 2019
- Identifier
- CFE0007463, ucf:52678
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007463
- Title
- Examining Users' Application Permissions On Android Mobile Devices.
- Creator
-
Safi, Muhammad, Wisniewski, Pamela, Leavens, Gary, Hughes, Charles, University of Central Florida
- Abstract / Description
-
Mobile devices have become one of the most important computing platforms. The platform's portability and highly customized nature raise several privacy concerns. Therefore, understanding and predicting user privacy behavior has become very important if one is to design software which respects the privacy concerns of users. Various studies have been carried out to quantify user perceptions and concerns [23,36] and user characteristics which may predict privacy behavior [21,22,25]. Even though significant research exists regarding factors which affect user privacy behavior, there is a gap in the literature when it comes to correlating these factors with objectively collected data from user devices. We designed an Android application which administered surveys to collect various perceived measures, and to scrape past behavioral data from the phone. Our goal was to discover variables which help in predicting user location-sharing decisions by correlating what we collected from surveys with the user's decision to share their location with our study application. We carried out logistic regression analysis with multiple measured variables and found that perceived measures and past behavioral data alone were poor predictors of user location-sharing decisions. Instead, we discovered that perceived measures in the context of past behavior helped strengthen prediction models. Asking users to reflect on whether they were comfortable sharing their location with apps that were already installed on their mobile device was a stronger predictor of location-sharing behavior than general measures regarding privacy concern or past behavioral data scraped from their phones. This work contributes to the field by correlating existing privacy measures with objective data, and by uncovering a new predictor of location-sharing decisions.
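A logistic regression model of the kind used above turns a weighted sum of measured variables into a probability of a binary outcome (here, sharing location). A generic sketch; the feature names, weights, and bias below are invented for illustration and are not the study's fitted model:

```python
import math

def predict_share(weights, bias, features):
    """Logistic-regression prediction: sigmoid of a weighted sum of the
    measured variables gives the probability of a positive decision."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical two-feature model:
# [comfort sharing with installed apps, general privacy concern]
weights, bias = [1.5, -0.8], -0.2
p = predict_share(weights, bias, [1.0, 0.0])  # comfortable, low concern
print(round(p, 3))
```

Fitting such a model to survey responses and scraped behavioral data, then inspecting which coefficients are significant, is how one identifies the strongest predictors.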
- Date Issued
- 2018
- Identifier
- CFE0007363, ucf:52085
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007363
- Title
- Supporting Learning in Educational 3D Virtual Environments: The Impact of Intergenerational Joint Media Engagement.
- Creator
-
Michlowitz, Robert, Walters, Lori, Hughes, Charles, Vasquez, Trey, Blumberg, Fran, University of Central Florida
- Abstract / Description
-
Studies have indicated that intergenerational relationships can assist children to learn more efficiently by providing support. As new forms of media have emerged and become pervasive in our society, it is important to understand how children use them to learn. Just as television coviewing has been observed by past researchers to help youths learn with parents and grandparents, three-dimensional virtual learning environments (VLE) are being investigated for their potential. This study seeks to examine the potential learning impact on children, ages 8 to 13, encountering a three-dimensional virtual learning environment with their grandparents. The primary research question this study examines is whether children exploring a 3D VLE with a grandparent learn the information being conveyed within the environment more effectively. A second aspect of the study considered whether the grandparent-child pair would spend a greater amount of time in the virtual environment compared to a child exploring alone. Additionally, this research seeks to determine if there are other benefits a child could gain when interacting with a grandparent in a VLE. This study used ChronoLeap: The Great World's Fair Adventure, an educational VLE developed at the University of Central Florida under a National Science Foundation Informal Science Education grant. ChronoLeap permits children to explore a virtual representation of the 1964-65 New York World's Fair, where they can discover the roots of current technology in their 1960s form and its evolution to the present. This environment affords a child a unique opportunity to encounter a virtual recreation of an era of which their grandparents have firsthand memories, potentially eliciting the grandparents' personal reflections.
- Date Issued
- 2019
- Identifier
- CFE0007837, ucf:52810
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007837
- Title
- Applied Software Tools for Supporting Children with Intellectual Disabilities.
- Creator
-
Abualsamid, Ahmad, Hughes, Charles, Dieker, Lisa, Sims, Valerie, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
We explored the level of technology utilization in supporting children with cognitive disabilities at schools, at speech clinics, and with assistive communication at home. Anecdotal evidence, literature research, and our own survey of special needs educators in Central Florida reveal that the use of technology is minimal in classrooms for students with special needs, even though scientific research has shown the effectiveness of video modeling in teaching children with special needs new skills and behaviors. Research also shows that speech and language therapists use a manual approach to elicit and analyze language samples from children with special needs. While technology is utilized in augmentative and alternative communication, many caregivers rely on paper-based picture exchange systems, storyboards, and daily schedules when assisting their children with their communication needs. We developed and validated three software frameworks to aid language therapists, teachers, and caregivers in supporting children with cognitive disabilities and related special needs. The Analysis of Social Discourse Framework proposes that language therapists use social media discourse instead of direct elicitation of language samples. The framework presents an easy-to-use approach to analyzing language samples based on natural language processing. We validated the framework by analyzing public social discourse from three unrelated sources. The Applied Interventions for eXceptional-needs (AIX) framework allows classroom teachers to implement and track interventions using easy-to-use smartphone applications. We validated the framework by conducting a sixteen-week pilot case study in a school for students with special needs in Central Florida. The Language Enhancements for eXceptional Youth (LEXY) framework allows for the development of a new class of augmentative and alternative communication tools based on conversational chatbots that assist children with special needs while utilizing a model of the world curated by their caregivers. We validated the framework by simulating an interaction between a prototype chatbot that we developed, a child with special needs, and the child's caregiver.
- Date Issued
- 2018
- Identifier
- CFE0006964, ucf:52908
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006964
- Title
- Analysis of large-scale population genetic data using efficient algorithms and data structures.
- Creator
-
Naseri, Ardalan, Zhang, Shaojie, Hughes, Charles, Yooseph, Shibu, Zhi, Degui, University of Central Florida
- Abstract / Description
-
With the availability of genotyping data from very large samples, there is an increasing need for tools that can efficiently identify genetic relationships among all individuals in a sample. Modern biobanks cover genotypes of up to 0.1%-1% of an entire large population. At this scale, genetic relatedness among samples is ubiquitous, yet current methods are not efficient enough to uncover it. We developed a new method, Random Projection for IBD Detection (RaPID), for detecting Identical-by-Descent (IBD) segments, a fundamental concept in genetics, in large panels. RaPID detects all IBD segments over a certain length in time linear in the sample size. We take advantage of an efficient population genotype index, the Positional Burrows-Wheeler Transform (PBWT) by Richard Durbin. PBWT achieves linear-time queries of perfectly identical subsequences among all samples. However, the original PBWT is not tolerant to genotyping errors, which often break long IBD segments into short fragments. The key idea of RaPID is that the problem of approximate high-resolution matching over a long range can be mapped, with high probability, to the problem of exact matching of low-resolution subsampled sequences. PBWT provides an appropriate data structure for bi-allelic data. With increasing sample sizes, more multi-allelic sites are expected to be observed, so there is also a need to handle multi-allelic genotype data; we therefore introduce a multi-allelic version of the original Positional Burrows-Wheeler Transform (mPBWT). The increasingly large cohorts of whole-genome genotype data present an opportunity to search a large cohort for people genetically related to a given individual. Doing so efficiently, however, presents a challenge. The PBWT algorithm offers constant-time matching between one haplotype and an arbitrarily large panel at each position, but only for the maximal matches.
We used the PBWT data structure to develop a method that searches for all matches of a given query in a panel. Matches longer than a given length correspond to all shared IBD segments of those lengths between the query and other individuals in the panel. The time complexity of the proposed method is independent of the number of individuals in the panel; to achieve a time complexity independent of the number of haplotypes, additional data structures are introduced. Some regions of the genome may be shared by multiple individuals rather than only a pair. Clusters of identical haplotypes can reveal information about the history of intermarriage or the isolation of a population, and may also be medically important. We propose an efficient method, called cPBWT, that uses the PBWT data structure to find clusters of identical segments among individuals in a large panel. The time complexity of finding all clusters of identical matches is linear in the sample size. The human genome harbors several runs of homozygosity (ROHs), where identical haplotypes are inherited from each parent. We applied cPBWT to the UK Biobank and searched for clusters of ROH regions that are shared among multiple individuals. We discovered strong associations between ROH regions and some non-cancerous diseases, specifically auto-immune disorders.
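The key idea behind RaPID, mapping approximate long-range matching onto exact matching of subsampled sequences, can be sketched in a few lines. This is an illustrative toy only: the actual method runs multiple random projections through PBWT and votes across them, whereas the sketch below uses plain hashing on a single projection, and all names are hypothetical.

```python
import random

def subsample_projection(haplotypes, num_sites, seed=0):
    """Project each haplotype onto a random subset of sites, so long
    approximate matches become exact matches with high probability."""
    rng = random.Random(seed)
    n_total = len(haplotypes[0])
    sites = sorted(rng.sample(range(n_total), num_sites))
    return ["".join(h[i] for i in sites) for h in haplotypes]

def exact_match_groups(projected):
    """Group sample indices whose projected sequences match exactly."""
    groups = {}
    for idx, seq in enumerate(projected):
        groups.setdefault(seq, []).append(idx)
    return [g for g in groups.values() if len(g) > 1]

# samples 0 and 1 are identical, so they always land in the same group
panel = ["0101100110", "0101100110", "0111100110", "1010011001"]
proj = subsample_projection(panel, 4)
print(exact_match_groups(proj))
```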
- Date Issued
- 2018
- Identifier
- CFE0007764, ucf:52393
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007764
- Title
- Algorithms for Rendering Optimization.
- Creator
-
Johnson, Jared, Hughes, Charles, Tappen, Marshall, Foroosh, Hassan, Shirley, Peter, University of Central Florida
- Abstract / Description
-
This dissertation explores algorithms for rendering optimization realizable within a modern, complex rendering engine. The first part contains optimized rendering algorithms for ray tracing. Ray tracing algorithms typically provide properties of simplicity and robustness that are highly desirable in computer graphics. We offer several novel contributions to the problem of interactive ray tracing of complex lighting environments, focusing on maintaining interactivity as both geometric and lighting complexity grow, without affecting the simplicity or robustness of ray tracing. First, we present a new algorithm called occlusion caching for accelerating the calculation of direct lighting from many light sources. We cache light visibility information sparsely across a scene. When rendering direct lighting for all pixels in a frame, we combine cached lighting information to determine whether or not shadow rays are needed. Since light visibility and scene location are highly correlated, our approach precludes the need for most shadow rays. Second, we present improvements to the irradiance caching algorithm. We demonstrate a new elliptical cache-point spacing heuristic that reduces the number of cache points required by taking into account the direction of irradiance gradients, and we accelerate irradiance caching by efficiently and intuitively coupling it with occlusion caching. In the second part of this dissertation, we present optimizations to rendering algorithms for participating media. Specifically, we explore the implementation and use of photon beams as an efficient, intuitive artistic primitive. We detail our implementation of the photon beams algorithm in PhotoRealistic RenderMan (PRMan), showing how it maintains the benefits of the industry-standard Reyes rendering pipeline, with proper motion blur and depth of field. We also detail an automatic photon beam generation algorithm that utilizes PRMan shadow maps.
We accelerate the rendering of camera-facing photon beams by utilizing Gaussian quadrature for path integrals in place of ray marching. Our optimized implementation allows for incredible versatility and intuitiveness in artistic control of volumetric lighting effects. Finally, we demonstrate the usefulness of photon beams as artistic primitives by detailing their use in a feature-length animated film.
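The substitution of Gaussian quadrature for ray marching can be illustrated with a one-dimensional optical-depth integral. This is a generic numerical sketch, not code from the PRMan implementation; the extinction function and all names are made up for the example.

```python
import math

def ray_march(sigma, length, steps):
    """Midpoint-rule ray marching of the optical-depth integral."""
    dt = length / steps
    return sum(sigma((i + 0.5) * dt) * dt for i in range(steps))

def gauss_legendre_3(sigma, length):
    """3-point Gauss-Legendre quadrature mapped from [-1, 1] to [0, length]."""
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    half = length / 2.0
    return half * sum(w * sigma(half * (x + 1.0)) for x, w in zip(nodes, weights))

sigma = lambda t: math.exp(-t)   # a smooth, hypothetical extinction profile
exact = 1.0 - math.exp(-1.0)     # closed form of the integral for comparison
print(abs(gauss_legendre_3(sigma, 1.0) - exact))  # tiny error with 3 samples
print(abs(ray_march(sigma, 1.0, 3) - exact))      # larger error at 3 samples
```

For smooth integrands like this, the quadrature error at three samples is orders of magnitude below the marching error, which is why replacing marching with quadrature can pay off for camera-facing beams.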
- Date Issued
- 2012
- Identifier
- CFE0004557, ucf:49231
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004557
- Title
- Ray Collection Bounding Volume Hierarchy.
- Creator
-
Rivera, Kris, Pattanaik, Sumanta, Heinrich, Mark, Hughes, Charles, University of Central Florida
- Abstract / Description
-
This thesis presents the Ray Collection BVH, an improvement over a current ray tracing acceleration structure in both building the structure and performing the steps necessary to efficiently render dynamic scenes. The Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure that aids ray tracing of complex 3D scenes by breaking a scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt at accelerating both the construction of this structure and its use in rendering complex scenes more efficiently. The idea of using a "ray collection" as a data structure was stumbled upon by the author while testing a theory for a class project. The overall scheme of the algorithm collects a set of localized rays together and intersects them with successive levels of the BVH at each build step. In addition, only part of the acceleration structure is built, on a per-ray need basis. During this partial build, the rays responsible for creating the scene are partially processed, saving further time in the overall procedure. Ray tracing is a widely used rendering technique, from realistic still images to movies; in the film industry in particular, the level of realism that ray tracing brings to animated movies is striking, so any improvement in the speed of these algorithms is useful and welcome. This thesis contributes to improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
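The core of the ray-collection idea, testing a localized group of rays against a BVH node's bounds and carrying only the survivors down to the next level, can be sketched as follows. This is an illustrative fragment with hypothetical names, not the thesis's implementation.

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a single forward ray intersect an axis-aligned box?"""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:          # parallel to the slab and outside it
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:                # slab intervals no longer overlap
            return False
    return True

def filter_ray_collection(rays, box_min, box_max):
    """Keep only the rays that can reach this BVH node's bounding box."""
    return [r for r in rays if ray_hits_aabb(r[0], r[1], box_min, box_max)]

rays = [((0, 0, -5), (0, 0, 1)), ((10, 0, -5), (0, 0, 1))]
# only the first ray survives the node's bounds
print(filter_ray_collection(rays, (-1, -1, -1), (1, 1, 1)))
```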
- Date Issued
- 2011
- Identifier
- CFE0004160, ucf:49063
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004160
- Title
- Towards Real-time Mixed Reality Matting in Natural Scenes.
- Creator
-
Beato, Nicholas, Hughes, Charles, Foroosh, Hassan, Tappen, Marshall, Moshell, Jack, University of Central Florida
- Abstract / Description
-
In Mixed Reality scenarios, background replacement is a common way to immerse a user in a synthetic environment. Properly identifying the background pixels in an image or video is a difficult problem known as matting. In constant-color matting, research identifies and replaces a background of a single color, known as the chroma key color. Unfortunately, these algorithms require a controlled physical environment and favor constant, uniform lighting. More generic approaches, such as natural image matting, have made progress finding alpha matte solutions in environments with naturally occurring backgrounds. However, even for the quicker algorithms, the generation of trimaps, which indicate regions of known foreground and background pixels, normally requires human interaction or offline computation. This research addresses ways to automatically solve an alpha matte for an image in real time, and by extension video, using a consumer-level GPU. It does so even in the context of noisy environments that yield less reliable constraints than are found in controlled settings. To attack these challenges, we are particularly interested in automatically generating trimaps from depth buffers for dynamic scenes, so that algorithms requiring denser constraints may be used. We then explore a sub-image-based approach that parallelizes an existing hierarchical approach on high-resolution imagery by taking advantage of local information. We show that locality can be exploited to significantly reduce the memory and compute requirements previously necessary for computing alpha mattes of high-resolution images. We achieve this using a parallelizable scheme that is independent of both the matting algorithm and image features. Combined, these research topics provide a basis for Mixed Reality scenarios using real-time natural image matting on high-definition video sources.
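One simple way to derive a trimap from a depth buffer, the strategy proposed above for dynamic scenes, is to threshold depth with an uncertainty band around the foreground/background boundary. The sketch below is a hypothetical minimal version; the dissertation's actual trimap generation is more involved.

```python
def trimap_from_depth(depth, threshold, band):
    """Label each pixel foreground (1.0), background (0.0), or unknown (0.5)
    by its depth relative to a threshold, with an uncertainty band."""
    trimap = []
    for row in depth:
        out = []
        for d in row:
            if d < threshold - band:
                out.append(1.0)      # clearly in front: known foreground
            elif d > threshold + band:
                out.append(0.0)      # clearly behind: known background
            else:
                out.append(0.5)      # near the boundary: solve for alpha
        trimap.append(out)
    return trimap

depth = [[0.2, 0.9], [1.45, 2.0]]
print(trimap_from_depth(depth, 1.5, 0.2))  # [[1.0, 1.0], [0.5, 0.0]]
```

Only the 0.5-labeled band then needs to be passed to the (more expensive) natural image matting solver.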
- Date Issued
- 2012
- Identifier
- CFE0004515, ucf:49284
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004515
- Title
- SetPad: A Sketch-Based Tool For Exploring Discrete Math Set Problems.
- Creator
-
Cossairt, Travis, Laviola II, Joseph, Foroosh, Hassan, Hughes, Charles, University of Central Florida
- Abstract / Description
-
We present SetPad, a new application prototype that lets computer science students explore discrete math problems by sketching set expressions using pen-based input. Students can manipulate the expressions interactively with the tool via a pen or multi-touch interface. Likewise, discrete mathematics instructors can use SetPad to display and work through set problems via a projector to better demonstrate the solutions to students. We discuss the implementation and feature set of the application, as well as results from an informal perceived-usefulness evaluation with students taking a computer science foundation exam and from a formal user study measuring the effectiveness of the tool when solving set proof problems. The results indicate that SetPad was well received, allows for efficient solutions to proof problems, and has the potential for a positive impact when used either as an individual student application or as an instructional tool.
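The kind of set-expression manipulation SetPad supports can be modeled by a tiny recursive evaluator over union, intersection, and difference. This sketch is purely illustrative and unrelated to SetPad's pen-based implementation; the representation and names are hypothetical.

```python
def evaluate(expr, env):
    """Recursively evaluate a nested set expression.
    expr is either a set name (str) or a tuple (op, left, right)."""
    if isinstance(expr, str):
        return env[expr]
    op, left, right = expr
    a, b = evaluate(left, env), evaluate(right, env)
    if op == "union":
        return a | b
    if op == "intersect":
        return a & b
    if op == "difference":
        return a - b
    raise ValueError("unknown operator: " + op)

env = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 5}}
print(evaluate(("intersect", ("union", "A", "B"), "C"), env))  # {3}
```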
- Date Issued
- 2012
- Identifier
- CFE0004240, ucf:49507
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004240
- Title
- STUDY OF HUMAN ACTIVITY IN VIDEO DATA WITH AN EMPHASIS ON VIEW-INVARIANCE.
- Creator
-
Ashraf, Nazim, Foroosh, Hassan, Hughes, Charles, Tappen, Marshall, Moshell, Jack, University of Central Florida
- Abstract / Description
-
The perception and understanding of human motion and action is an important area of research in computer vision that plays a crucial role in applications such as surveillance, HCI, and ergonomics. In this thesis, we focus on the recognition of actions under varying viewpoints and different, unknown camera intrinsic parameters. The challenges to be addressed include perspective distortions, differences in viewpoint, anthropometric variations, and the large degrees of freedom of articulated bodies. In addition, we are interested in methods that require little or no training. Current solutions to action recognition usually assume that a huge dataset of actions is available for training a classifier. However, this means that in order to define a new action, the user has to record a number of videos from different viewpoints with varying camera intrinsic parameters and then retrain the classifier, which is not very practical from a development point of view. We propose algorithms that overcome these challenges and require just a few instances of the action, from any viewpoint and with any intrinsic camera parameters. Our first algorithm is based on the rank constraint on the family of planar homographies associated with triplets of body points. We represent an action as a sequence of poses and decompose each pose into triplets, so that the pose transition is broken down into a set of movements of body-point planes. In this way, we transform the non-rigid motion of the body points into rigid motions of body-point planes. We use the fact that the family of homographies associated with two identical poses has rank 4 to gauge pose similarity between two subjects observed by different perspective cameras and from different viewpoints. This method requires only one instance of the action.
In particular, we then show that the concept of triplets can be extended to line segments: looking at the movement of line segments instead of triplets provides more redundancy in the data, leading to better results. We demonstrate this concept on "fundamental ratios": we decompose a human body pose into line segments instead of triplets and examine the set of movements of those line segments. This method needs only three instances of the action. If a larger dataset is available, we can also weight the line segments for better accuracy. The last method is based on the concept of "projective depth": given a plane, we can find the depth of a point relative to that plane. We propose three different ways of using projective depth: (i) Triplets - the three points of a triplet, along with the epipole, define a plane, and the movement of points relative to these body planes can be used to recognize actions; (ii) Ground plane - if we can extract the ground plane, we can find the projective depth of the body points with respect to it, so the problem of action recognition translates to curve matching; and (iii) Mirror person - we can use the mirror view of the person to extract mirror-symmetric planes. This method also needs only one instance of the action. Extensive experiments are reported testing view invariance, robustness to noisy localization and occlusion of body points, and action recognition. The experimental results are very promising and demonstrate the efficiency of our proposed invariants.
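Where the ground-plane variant reduces action recognition to curve matching, one standard way to compare depth curves that unfold at different speeds is dynamic time warping (DTW). The sketch below is a generic DTW implementation offered only as an illustration of curve matching; the thesis does not necessarily use DTW, and the example curves are made up.

```python
def dtw_distance(curve_a, curve_b):
    """Dynamic time warping distance between two 1-D curves, tolerant
    to differences in the execution speed of the underlying action."""
    n, m = len(curve_a), len(curve_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(curve_a[i - 1] - curve_b[j - 1])
            # extend the cheapest of: match, skip in a, skip in b
            cost[i][j] = d + min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
    return cost[n][m]

walk = [0.0, 0.2, 0.5, 0.8, 1.0]
walk_slow = [0.0, 0.1, 0.2, 0.5, 0.8, 0.9, 1.0]   # same shape, slower
jump = [0.0, 1.0, 0.0, 1.0, 0.0]                  # different action
print(dtw_distance(walk, walk_slow) < dtw_distance(walk, jump))  # True
```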
- Date Issued
- 2012
- Identifier
- CFE0004352, ucf:49449
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004352
- Title
- Automatic Scenario Generation using Procedural Modeling Techniques.
- Creator
-
Martin, Glenn, Hughes, Charles, Moshell, Jack, Fiore, Stephen, Orooji, Ali, University of Central Florida
- Abstract / Description
-
Training typically begins with a pre-existing scenario. The training exercise is performed, and an after-action review is sometimes held. This "training pipeline" is repeated for each scenario used that day. The approach is used routinely and often effectively, yet it has aspects that can result in poor training. In particular, the process commonly has two undesirable associated events. First, scenarios are re-used over and over, which can reduce their effectiveness in training. Second, additional responsibility is placed on the individual training facilitator, who must now track performance improvements between scenarios. Together, these two effects can compound to degrade training effectiveness. Within any simulation training exercise, a scenario definition is the starting point. While scenario definitions are, unfortunately, re-used and over-used, they can in fact be generated from scratch each time. Typically, scenarios include the entire configuration for the simulators, such as the entities used, time of day, weather effects, entity starting locations and, where applicable, munitions effects. In addition, a background story (exercise briefing) is given to the trainees, and the leader often develops a mission plan that is shared with the trainee group. Given all of these issues, scientists began to explore more purposeful, targeted training: rather than the ad-hoc creation of a simulation experience, there was an increased focus on the content of the experience and its effects on training. Previous work in scenario generation, interactive storytelling, and computational approaches, while providing a good foundation, falls short of addressing the need for adaptive, automatic scenario generation.
This dissertation addresses this need by building a conceptual model to represent scenarios, mapping that conceptual model to a computational model, and then applying a newer procedural modeling technique, known as Functional L-systems, to create scenarios given a training objective, a desired scenario complexity level, and sets of baseline and vignette scenario facets. A software package, known as PYTHAGORAS, was built and is presented; it incorporates all of these contributions into an actual tool for creating scenarios (both manual and automatic approaches are included). The package is then evaluated by subject matter experts in a scenario-based "Turing Test" of sorts, in which both system-generated and human-generated scenarios are assessed by independent reviewers, and the results are presented from various angles. Finally, a review of how such a tool can affect the training pipeline is included, along with a number of areas into which scenario generation can be expanded. These focus on additional elements of both the training environment (e.g., buildings, interiors) and the training process (e.g., scenario write-ups).
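The flavor of a Functional L-system, rewrite rules that carry parameters and compute their successors, can be sketched with a toy rule that recursively splits a vignette slot until a target complexity is reached. The symbol names, the parameter, and the rule are all hypothetical; PYTHAGORAS's actual grammar is far richer.

```python
def expand(symbols, rules, iterations):
    """Apply parametric rewrite rules to a list of (symbol, params) tuples.
    Each rule maps a symbol to a function producing its successor symbols."""
    for _ in range(iterations):
        result = []
        for symbol, params in symbols:
            rule = rules.get(symbol)
            result.extend(rule(params) if rule else [(symbol, params)])
        symbols = result
    return symbols

# Hypothetical toy rule: a vignette slot "V" splits into two simpler
# vignettes of lower complexity until complexity 1 yields a terminal "v".
rules = {
    "V": lambda p: [("V", {"complexity": p["complexity"] - 1})] * 2
                   if p["complexity"] > 1 else [("v", p)],
}
axiom = [("V", {"complexity": 3})]
print(len(expand(axiom, rules, 3)))  # two doublings: 4 terminal vignettes
```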
- Date Issued
- 2012
- Identifier
- CFE0004265, ucf:49525
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004265
- Title
- Towards Evolving More Brain-Like Artificial Neural Networks.
- Creator
-
Risi, Sebastian, Stanley, Kenneth, Hughes, Charles, Sukthankar, Gita, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
An ambitious long-term goal for neuroevolution, which studies how artificial evolutionary processes can be driven to produce brain-like structures, is to evolve neurocontrollers with a high density of neurons and connections that can adapt and learn from past experience. Yet while neuroevolution has produced successful results in a variety of domains, the scale of natural brains remains far beyond reach. This dissertation presents two extensions to the recently introduced Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach that are a step towards more brain-like artificial neural networks (ANNs). First, HyperNEAT is extended to evolve plastic ANNs that can learn from past experience. This new approach, called adaptive HyperNEAT, allows not only patterns of weights across the connectivity of an ANN to be generated as a function of its geometry, but also patterns of arbitrary local learning rules. Second, evolvable-substrate HyperNEAT (ES-HyperNEAT) is introduced, which relieves the user from deciding where hidden nodes should be placed in a geometry that is potentially infinitely dense. This approach can not only evolve the location of every neuron in the network, but also represent regions of varying density, which means resolution can increase holistically over evolution. The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space. The dissertation culminates in a major application domain that takes a step towards the general goal of adaptive neurocontrollers for legged locomotion.
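HyperNEAT's central idea, generating each connection weight as a function of the connected nodes' geometric coordinates, can be sketched independently of the evolutionary machinery. The `toy_cppn` below is a hand-written stand-in for an evolved CPPN and is entirely hypothetical.

```python
import math

def weight_pattern(source_coords, target_coords, f):
    """Generate connection weights as a function of node geometry;
    in HyperNEAT, an evolved CPPN plays the role of f."""
    return {(s, t): f(s, t) for s in source_coords for t in target_coords}

# Hypothetical stand-in for an evolved CPPN: weights fall off smoothly
# with distance, giving a geometry-aware connectivity pattern.
def toy_cppn(s, t):
    dist = math.hypot(s[0] - t[0], s[1] - t[1])
    return math.exp(-dist * dist)

inputs = [(-1.0, -1.0), (1.0, -1.0)]
outputs = [(0.0, 1.0)]
weights = weight_pattern(inputs, outputs, toy_cppn)
print(weights[((-1.0, -1.0), (0.0, 1.0))])  # exp(-5), about 0.0067
```

Because the weights come from a continuous function of space, symmetric node placements automatically receive symmetric weights, which is the geometric regularity HyperNEAT exploits.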
- Date Issued
- 2012
- Identifier
- CFE0004287, ucf:49477
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004287
- Title
- 4D-CT Lung Registration and its Application for Lung Radiation Therapy.
- Creator
-
Min, Yugang, Pattanaik, Sumanta, Hughes, Charles, Foroosh, Hassan, Santhanam, Anand, University of Central Florida
- Abstract / Description
-
Radiation therapy has been successful in treating lung cancer patients, but its efficacy is limited by the inability to account for respiratory motion during treatment planning and radiation dose delivery. Physics-based lung deformation models facilitate the computation of the motion of both the tumor and local lung tissue during radiation therapy. In this dissertation, a novel method is discussed for accurately registering 3D lungs across the respiratory phases of 4D-CT datasets, which facilitates the estimation of volumetric lung deformation models. The method uses multi-level, multi-resolution optical flow registration coupled with thin-plate splines (TPS) to address the registration issue of inconsistent intensity across respiratory phases. It achieves higher accuracy than multi-resolution optical flow registration and other commonly used registration methods. Validation results show that the lung registration is computed with 3 mm Target Registration Error (TRE) and approximately 3 mm Inverse Consistency Error (ICE). The registration method is further implemented in a GPU-based real-time dose delivery simulation to assist radiation therapy planning.
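The multi-level, multi-resolution strategy can be illustrated in one dimension: estimate a displacement on downsampled copies first, then refine the estimate at full resolution. This is a generic coarse-to-fine sketch with made-up signals, not the dissertation's optical-flow/TPS method, which operates on 3D CT volumes.

```python
def downsample(signal):
    """Halve resolution by averaging adjacent samples."""
    return [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal) - 1, 2)]

def best_shift(a, b, search):
    """Brute-force 1-D displacement minimizing mean squared difference."""
    def ssd(shift):
        pairs = [(a[i], b[i + shift]) for i in range(len(a))
                 if 0 <= i + shift < len(b)]
        if len(pairs) < len(a) // 2:      # reject shifts with little overlap
            return float("inf")
        return sum((x - y) ** 2 for x, y in pairs) / len(pairs)
    return min(search, key=ssd)

def coarse_to_fine_shift(a, b, levels=2):
    """Estimate the displacement on coarse copies first, then refine:
    the multi-level strategy behind pyramid-style registration."""
    if levels == 0 or len(a) < 8:
        return best_shift(a, b, range(-3, 4))
    coarse = coarse_to_fine_shift(downsample(a), downsample(b), levels - 1)
    guess = coarse * 2                    # a coarse sample spans two fine ones
    return best_shift(a, b, range(guess - 2, guess + 3))

a = [0.0] * 6 + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 5
b = [0.0] * 10 + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 1
print(coarse_to_fine_shift(a, b))  # the bump moved right by 4 samples
```

The payoff of the pyramid is that the full-resolution search only needs a small window around the upsampled coarse estimate.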
- Date Issued
- 2012
- Identifier
- CFE0004300, ucf:49464
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004300