Current Search: Pattanaik, Sumanta
- Title
- FAST ALGORITHMS FOR FRAGMENT BASED COMPLETION IN IMAGES OF NATURAL SCENES.
- Creator
-
Borikar, Siddharth Rajkumar, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
Textures are used widely in computer graphics to represent fine visual details and produce realistic looking images. Often it is necessary to remove some foreground object from the scene. Removal of the portion creates one or more holes in the texture image. These holes need to be filled to complete the image. Various methods like clone brush strokes and compositing processes are used to carry out this completion. User skill is required in such methods. Texture synthesis can also be used to complete regions where the texture is stationary or structured. Reconstructing methods can be used to fill in large-scale missing regions by interpolation. Inpainting is suitable for relatively small, smooth and non-textured regions. A number of other approaches focus on the edge and contour completion aspect of the problem. In this thesis we present a novel approach for addressing this image completion problem. Our approach focuses on image based completion, with no knowledge of the underlying scene. In natural images there is a strong horizontal orientation of texture/color distribution. We exploit this fact in our proposed algorithm to fill in missing regions from natural images. We follow the principle of figural familiarity and use the image as our training set to complete the image.
- Date Issued
- 2004
- Identifier
- CFE0000053, ucf:46078
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000053
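The horizontal-coherence idea in the abstract above can be illustrated with a minimal sketch. This is a hedged reconstruction, not the thesis' actual algorithm: each hole pixel on a scanline is filled from the pixel that follows the best-matching left-hand fragment found elsewhere on the same row.

```python
import numpy as np

def fill_row_holes(image, mask, context=2):
    """Illustrative row-wise hole filling exploiting the strong horizontal
    texture/color coherence of natural images (hypothetical sketch, not the
    thesis' method). `image` is a 2D grayscale array; `mask` marks holes."""
    out = image.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            if x < context:
                # Not enough left context; copy the nearest known pixel.
                out[y, x] = out[y, max(x - 1, 0)]
                continue
            ctx = out[y, x - context:x]          # fragment to match
            best, best_err = out[y, x - 1], np.inf
            for s in range(context, w):
                if mask[y, s] or s == x:         # never sample a hole pixel
                    continue
                cand_ctx = out[y, s - context:s]
                err = float(np.sum((cand_ctx - ctx) ** 2))
                if err < best_err:
                    best_err, best = err, out[y, s]
            out[y, x] = best
    return out
```

On a periodic texture, the filled value reproduces the pattern; the real algorithm works with 2D fragments and a figural-familiarity training set rather than single pixels.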
- Title
- IMAGE-SPACE APPROACH TO REAL-TIME REALISTIC RENDERING.
- Creator
-
Shah, Musawir, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
One of the main goals of computer graphics is the fast synthesis of photorealistic images of virtual 3D scenes. The work presented in this thesis addresses this goal of speed and realism. In real-time realistic rendering, we encounter certain problems that are difficult to solve in the traditional 3-dimensional geometric space. We show that using an image-space approach can provide effective solutions to these problems. Unlike geometric space algorithms that operate on 3D primitives such as points, edges, and polygons, image-space algorithms operate on 2D snapshot images of the 3D geometric data. Operating in image-space effectively decouples the geometric complexity of the 3D data from the run-time of the rendering algorithm. Other important advantages of image-space algorithms include ease of implementation on modern graphics hardware, and fast computation of approximate solutions to certain lighting calculations. We have applied the image-space approach and developed algorithms for three prominent problems in real-time realistic rendering, namely, representing and lighting large 3D scenes in the context of grass rendering, rendering caustics, which is a complex indirect illumination effect, and subsurface scattering for rendering of translucent objects.
- Date Issued
- 2007
- Identifier
- CFE0001967, ucf:47462
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001967
- Title
- REAL-TIME REALISTIC RENDERING AND HIGH DYNAMIC RANGE IMAGE DISPLAY AND COMPRESSION.
- Creator
-
Xu, Ruifeng, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
This dissertation focuses on the many issues that arise from the visual rendering problem. Of primary consideration is light transport simulation, which is known to be computationally expensive. Monte Carlo methods represent a simple and general class of algorithms often used for light transport computation. Unfortunately, the images resulting from Monte Carlo approaches generally suffer from visually unacceptable noise artifacts. The result of any light transport simulation is, by its very nature, an image of high dynamic range (HDR). This leads to the issues of displaying such images on conventional low dynamic range devices and of developing data compression algorithms to store and recover the correspondingly large amounts of detail found in HDR images. This dissertation presents our contributions relevant to these issues. Our contributions to high dynamic range image processing include tone mapping and data compression algorithms. This research proposes and shows the efficacy of a novel level set based tone mapping method that preserves visual details in the display of high dynamic range images on low dynamic range display devices. The level set method is used to extract the high frequency information from HDR images. The details are then added to the range-compressed low frequency information to reconstruct a visually accurate low dynamic range version of the image. Additional challenges associated with high dynamic range images include the need to reduce excessively large storage requirements and transmission times. To alleviate these problems, this research presents two methods for efficient high dynamic range image data compression. One is based on classical JPEG compression. It first converts the raw image into the RGBE representation, and then sends the color base and common exponent to classical discrete cosine transform based compression and lossless compression, respectively. The other is based on the wavelet transformation. It first transforms the raw image data into the logarithmic domain, then quantizes the logarithmic data into the integer domain, and finally applies the wavelet based JPEG2000 encoder for entropy compression and bit stream truncation to meet the desired bit rate requirement. We believe that these and similar contributions will help make wide application of high dynamic range images possible. The contributions to light transport simulation include Monte Carlo noise reduction, dynamic object rendering and complex scene rendering. Monte Carlo noise is an inescapable artifact in synthetic images rendered using stochastic algorithms. This dissertation proposes two noise reduction algorithms to obtain high quality synthetic images. The first models the distribution of noise in the wavelet domain using a Laplacian function, and then suppresses the noise using a Bayesian method. The other extends the bilateral filtering method to reduce all types of Monte Carlo noise in a unified way. All our methods reduce Monte Carlo noise effectively. Rendering of dynamic objects adds another dimension to the expensive light transport simulation problem. This dissertation presents a pre-computation based method. It pre-computes the surface radiance for each lighting basis and animation key frame, and then renders the objects by synthesizing the pre-computed data in real time. Realistic rendering of complex scenes is computationally expensive. This research proposes a novel 3D space subdivision method, which leads to a new rendering framework. The light is first distributed to each local region to form local light fields, which are then used to illuminate the local scenes. The method allows us to render complex scenes at interactive frame rates. Rendering has important applications in mixed reality. Consistent lighting and shadows between real scenes and virtual scenes are important features of visual integration. The dissertation proposes to render the virtual objects by irradiance rendering using live captured environmental lighting. This research also introduces a virtual shadow generation method that computes shadows cast by virtual objects onto the real background. We finally conclude the dissertation by discussing a number of future directions for rendering research, and presenting our proposed approaches.
- Date Issued
- 2005
- Identifier
- CFE0000730, ucf:46615
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000730
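The RGBE conversion step of the JPEG-based HDR codec mentioned in the abstract above can be sketched as follows. This is a simplified illustration of Ward's shared-exponent format, not the dissertation's implementation (the Radiance codec, for instance, adds a half-bit offset on decode that is omitted here):

```python
import math

def float_to_rgbe(r, g, b):
    """Encode a linear-RGB triple as RGBE: an 8-bit mantissa per channel
    plus one common 8-bit exponent derived from the brightest channel."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(v)            # v = mant * 2**exp, mant in [0.5, 1)
    scale = mant * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(r, g, b, e):
    """Decode RGBE bytes back to linear RGB."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)     # 2**(e - 136)
    return (r * f, g * f, b * f)
```

Because all three channels share one exponent, each HDR pixel fits in four bytes; the codec described above then compresses the three mantissa planes with DCT-based JPEG and the exponent plane losslessly.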
- Title
- REAL-TIME REALISTIC RENDERING OF NATURE SCENES WITH DYNAMIC LIGHTING.
- Creator
-
Boulanger, Kevin, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
Rendering of natural scenes has interested the scientific community for a long time due to its numerous applications. The targeted goal is to create images that are similar to what a viewer can see in real life with his/her eyes. The main obstacle is complexity: nature scenes from real life contain a huge number of small details that are hard to model, take a lot of time to render and require a huge amount of memory unavailable in current computers. This complexity mainly comes from geometry and lighting. The goal of our research is to overcome this complexity and to achieve real-time rendering of nature scenes while providing visually convincing dynamic global illumination. Our work focuses on grass and trees as they are commonly visible in everyday life. We handle geometry and lighting complexities for grass to render millions of grass blades interactively with dynamic lighting. As for lighting complexity, we address real-time rendering of trees by proposing a lighting model that handles indirect lighting. Our work makes extensive use of the current generation of Graphics Processing Units (GPUs) to meet the real-time requirement and to leave the CPU free to carry out other tasks.
- Date Issued
- 2008
- Identifier
- CFE0002262, ucf:47868
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002262
- Title
- REAL-TIME CINEMATIC DESIGN OF VISUAL ASPECTS IN COMPUTER-GENERATED IMAGES.
- Creator
-
Obert, Juraj, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
Creation of visually-pleasing images has always been one of the main goals of computer graphics. Two important components are necessary to achieve this goal --- artists who design visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has always been deemed as secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to the traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on the design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.
- Date Issued
- 2010
- Identifier
- CFE0003250, ucf:48559
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003250
- Title
- Ray Collection Bounding Volume Hierarchy.
- Creator
-
Rivera, Kris, Pattanaik, Sumanta, Heinrich, Mark, Hughes, Charles, University of Central Florida
- Abstract / Description
-
This thesis presents Ray Collection BVH, an improvement over a current-day Ray Tracing acceleration structure, both to build it and to perform the steps necessary to efficiently render dynamic scenes. Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure, which aids in rendering complex scenes in 3D space using Ray Tracing by breaking the scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt at accelerating the process of both constructing this structure and using it to render these complex scenes more efficiently. The idea of using "ray collection" as a data structure was accidentally stumbled upon by the author in testing a theory he had for a class project. The overall scheme of the algorithm essentially collects a set of localized rays together and intersects them with subsequent levels of the BVH at each build step. In addition, only part of the acceleration structure is built on a per-ray need basis. During this partial build, the rays responsible for creating the scene are partially processed, also saving time on the overall procedure. Ray tracing is a widely used rendering technique, from producing realistic images to making movies. Particularly in the movie industry, the level of realism brought to animated movies through ray tracing is incredible. So any improvement brought to these algorithms to improve the speed of rendering would be considered useful and welcome. This thesis makes contributions towards improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
- Date Issued
- 2011
- Identifier
- CFE0004160, ucf:49063
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004160
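One step of the ray-collection scheme described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the thesis' code: a whole collection of rays is tested against a BVH node's bounding box at once (standard slab method), and only the surviving rays are carried down to the node's children, so deeper levels see ever-smaller collections.

```python
import numpy as np

def filter_rays(origins, inv_dirs, box_min, box_max):
    """Vectorized ray/AABB slab test for a collection of rays.
    `origins` and `inv_dirs` (reciprocal direction components) are (N, 3);
    returns a boolean mask of the rays that intersect the box."""
    t0 = (box_min - origins) * inv_dirs       # per-axis slab entry distances
    t1 = (box_max - origins) * inv_dirs       # per-axis slab exit distances
    t_near = np.minimum(t0, t1).max(axis=1)   # latest entry across axes
    t_far = np.maximum(t0, t1).min(axis=1)    # earliest exit across axes
    return (t_near <= t_far) & (t_far >= 0.0)
```

A traversal would apply this mask at each node and recurse with `origins[hit]`, which is the collection-shrinking behavior the abstract exploits during the partial, per-ray-need build.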
- Title
- 4D-CT Lung Registration and its Application for Lung Radiation Therapy.
- Creator
-
Min, Yugang, Pattanaik, Sumanta, Hughes, Charles, Foroosh, Hassan, Santhanam, Anand, University of Central Florida
- Abstract / Description
-
Radiation therapy has been successful in treating lung cancer patients, but its efficacy is limited by the inability to account for the respiratory motion during treatment planning and radiation dose delivery. Physics-based lung deformation models facilitate the motion computation of both tumor and local lung tissue during radiation therapy. In this dissertation, a novel method is discussed to accurately register 3D lungs across the respiratory phases from 4D-CT datasets, which facilitates the estimation of the volumetric lung deformation models. This method uses multi-level and multi-resolution optical flow registration coupled with thin plate splines (TPS) to address the registration issue of inconsistent intensity across respiratory phases. It achieves higher accuracy as compared to multi-resolution optical flow registration and other commonly used registration methods. Results of validation show that the lung registration is computed with 3 mm Target Registration Error (TRE) and approximately 3 mm Inverse Consistency Error (ICE). This registration method is further implemented in a GPU based real time dose delivery simulation to assist radiation therapy planning.
- Date Issued
- 2012
- Identifier
- CFE0004300, ucf:49464
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004300
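The Target Registration Error quoted in the abstract above (~3 mm) is a standard validation metric, and its usual definition is simple enough to state directly. A minimal sketch, assuming landmark coordinates in millimetres:

```python
import numpy as np

def target_registration_error(fixed_landmarks, warped_landmarks):
    """Mean Euclidean distance between anatomical landmarks in the fixed
    phase and the same landmarks mapped through the estimated deformation.
    Both inputs are (N, 3) arrays of landmark coordinates."""
    diffs = np.asarray(fixed_landmarks, float) - np.asarray(warped_landmarks, float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```

Inverse Consistency Error is computed analogously, but on landmarks pushed forward and then pulled back through the forward and inverse deformations.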
- Title
- Verification and Automated Synthesis of Memristor Crossbars.
- Creator
-
Pourtabatabaie, Arya, Jha, Sumit Kumar, Chatterjee, Mainak, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
The Memristor is a newly synthesized circuit element correlating differences in electrical charge and magnetic flux, which effectively acts as a nonlinear resistor with memory. The small size of this element and its potential for passive state preservation have opened great opportunities for data-level parallel computation, since the functions of memory and processing can be realized on the same physical device. In this research we present an in-depth study of memristor crossbars for combinational and sequential logic. We outline the structure of the formulas which they are able to produce and hence the inherent powers and limitations of Memristive Crossbar Computing. As an improvement on previous methods of automated crossbar synthesis, a method for symbolically verifying crossbars is proposed, proven and analyzed.
- Date Issued
- 2016
- Identifier
- CFE0006840, ucf:51765
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006840
- Title
- Simulation, Analysis, and Optimization of Heterogeneous CPU-GPU Systems.
- Creator
-
Giles, Christopher, Heinrich, Mark, Ewetz, Rickard, Lin, Mingjie, Pattanaik, Sumanta, Flitsiyan, Elena, University of Central Florida
- Abstract / Description
-
With the computing industry's recent adoption of the Heterogeneous System Architecture (HSA) standard, we have seen a rapid change in heterogeneous CPU-GPU processor designs. State-of-the-art heterogeneous CPU-GPU processors tightly integrate multicore CPUs and multi-compute unit GPUs together on a single die. This brings the MIMD processing capabilities of the CPU and the SIMD processing capabilities of the GPU together into a single cohesive package with new HSA features comprising better programmability, coherency between the CPU and GPU, shared Last Level Cache (LLC), and shared virtual memory address spaces. These advancements can potentially bring marked gains in heterogeneous processor performance and have piqued the interest of researchers who wish to unlock these potential performance gains. Therefore, in this dissertation I explore the heterogeneous CPU-GPU processor and application design space with the goal of answering interesting research questions, such as, (1) what are the architectural design trade-offs in heterogeneous CPU-GPU processors and (2) how do we best maximize heterogeneous CPU-GPU application performance on a given system. To enable my exploration of the heterogeneous CPU-GPU design space, I introduce a novel discrete event-driven simulation library called KnightSim and a novel computer architectural simulator called M2S-CGM. M2S-CGM includes all of the simulation elements necessary to simulate coherent execution between a CPU and GPU with shared LLC and shared virtual memory address spaces. I then utilize M2S-CGM for the conduct of three architectural studies. First, I study the architectural effects of shared LLC and CPU-GPU coherence on the overall performance of non-collaborative GPU-only applications. Second, I profile and analyze a set of collaborative CPU-GPU applications to determine how to best optimize them for maximum collaborative performance. Third, I study the impact of varying four key architectural parameters on collaborative CPU-GPU performance by varying GPU compute unit coalesce size, GPU to memory controller bandwidth, GPU frequency, and system wide switching fabric latency.
- Date Issued
- 2019
- Identifier
- CFE0007807, ucf:52346
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007807
- Title
- Machine Learning Methods for Multiparameter Flow Cytometry Analysis and Visualization.
- Creator
-
Sassano, Emily, Jha, Sumit Kumar, Pattanaik, Sumanta, Hughes, Charles, Moore, Sean, University of Central Florida
- Abstract / Description
-
Flow cytometry is a popular analytical cell-biology instrument that uses specific wavelengths of light to profile heterogeneous populations of cells at the individual level. Current cytometers have the capability of analyzing up to 20 parameters on over a million cells, but despite the complexity of these datasets, a typical workflow relies on subjective, labor-intensive manual sequential analysis. The research presented in this dissertation provides two machine learning methods to increase the objectivity, efficiency, and discovery in flow cytometry data analysis. The first, a supervised learning method, utilizes previously analyzed data to evaluate new flow cytometry files containing similar parameters. The probability distribution of each dimension in a file is matched to each related dimension of a reference file through color indexing and histogram intersection methods. Once a similar reference file is selected, the cell populations previously classified are used to create a tailored support vector machine capable of classifying cell populations as an expert would. This method has produced results highly correlated with manual sequential analysis, providing an efficient alternative for analyzing a large number of samples. The second, a novel unsupervised method, is used to explore and visualize single-cell data in an objective manner. To accomplish this, a hypergraph sampling method was created to preserve rare events within the flow data before divisively clustering the sampled data using singular value decomposition. The unsampled data is added to the discovered set of clusters using a support vector machine classifier, and the final analysis is displayed as a minimum spanning tree. This tree is capable of distinguishing rare subsets of cells comprising less than 1% of the original data.
- Date Issued
- 2018
- Identifier
- CFE0007243, ucf:52241
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007243
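The histogram intersection measure the abstract above uses to match each dimension of a new file against a reference file is a classical similarity score, and can be sketched briefly (an illustrative version; the dissertation pairs it with color indexing, which is not reproduced here):

```python
import numpy as np

def histogram_intersection(hist_a, hist_b):
    """Similarity between two histograms: normalize each to sum to 1, then
    sum the bin-wise minima. The score lies in [0, 1], with 1 meaning the
    two distributions are identical."""
    a = np.asarray(hist_a, float)
    b = np.asarray(hist_b, float)
    a = a / a.sum()
    b = b / b.sum()
    return float(np.minimum(a, b).sum())
```

Running this per parameter and averaging the scores gives a simple way to rank candidate reference files, which is the spirit of the file-matching step described above.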
- Title
- Multi-Modal Interfaces for Sensemaking of Graph-Connected Datasets.
- Creator
-
Wehrer, Anthony, Hughes, Charles, Wisniewski, Pamela, Pattanaik, Sumanta, Specht, Chelsea, Lisle, Curtis, University of Central Florida
- Abstract / Description
-
The visualization of hypothesized evolutionary processes is often shown through phylogenetic trees. Given evolutionary data presented in one of several widely accepted formats, software exists to render these data into a tree diagram. However, software packages commonly in use by biologists today often do not provide means to dynamically adjust and customize these diagrams for studying new hypothetical relationships, and for illustration and publication purposes. Even where these options are available, there can be a lack of intuitiveness and ease-of-use. The goal of our research is, thus, to investigate more natural and effective means of sensemaking of the data with different user input modalities. To this end, we experimented with different input modalities, designing and running a series of prototype studies, ultimately focusing our attention on pen-and-touch. Through several iterations of feedback and revision provided with the help of biology experts and students, we developed a pen-and-touch phylogenetic tree browsing and editing application called PhyloPen. This application expands on the capabilities of existing software with visualization techniques such as overview+detail, linked data views, and new interaction and manipulation techniques using pen-and-touch. To determine its impact on phylogenetic tree sensemaking, we conducted a within-subject comparative summative study against the most comparable and commonly used state-of-the-art mouse-based software system, Mesquite. Conducted with biology majors at the University of Central Florida, each used both software systems on a set number of exercise tasks of the same type. Determining effectiveness by several dependent measures, the results show PhyloPen was significantly better in terms of usefulness, satisfaction, ease-of-learning, ease-of-use, and cognitive load and relatively the same in variation of completion time. These results support an interaction paradigm that is superior to classic mouse-based interaction, which could have the potential to be applied to other communities that employ graph-based representations of their problem domains.
- Date Issued
- 2019
- Identifier
- CFE0007872, ucf:52788
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007872
- Title
- GPU Ray Traced Rendering And Image Fusion Based Visualization Of Urban Terrain For Enhanced Situation Awareness.
- Creator
-
Sik, Lingling, Pattanaik, Sumanta, Kincaid, John, Proctor, Michael, Tappen, Marshall, Graniela Ortiz, Benito, University of Central Florida
- Abstract / Description
-
Urban activities involving planning, preparing for and responding to time critical situations often demands sound situational awareness of overall settings. Decision makers, who are tasked to respond effectively to emergencies, must be equipped with information on the details of what is happening, and must stay informed with updates as the event unfolds and remain attentive to the extent of impact the dynamics of the surrounding settings might have. Recent increases in the volumes of geo...
Show moreUrban activities involving planning, preparing for and responding to time critical situations often demands sound situational awareness of overall settings. Decision makers, who are tasked to respond effectively to emergencies, must be equipped with information on the details of what is happening, and must stay informed with updates as the event unfolds and remain attentive to the extent of impact the dynamics of the surrounding settings might have. Recent increases in the volumes of geo-spatial data such as satellite imageries, elevation maps, street-level photographs and real-time imageries from remote sensory devices affect the way decision makers make assessments in time-critical situations. When terrain related spatial information are presented accurately, timely, and are augmented with terrain analysis such as viewshed computations, enhanced situational understanding could be formed. Painting such enhanced situational pictures, however, demands efficient techniques to process and present volumes of geo-spatial data. Modern Graphics Processing Units (GPUs) have opened up a wide field of applications far beyond processing millions of polygons. This dissertation presents approaches that harness graphics rendering techniques and GPU programmability to visualize urban terrain with accuracy, viewshed analysis and real-time imageries. The GPU ray tracing and image fusion visualization techniques presented herein have the potential to aid in achieving enhanced urban situational awareness and understanding.Current state of the art polygon based terrain representations often use coarse representations for terrain features of less importance to improve rendering rate. This results in reduced geometrical accuracy for selective terrain features that are considered less critical to the visualization or simulation needs. Alternatively, to render highly accurate urban terrain, considerable computational effort is needed. 
A compromise between achieving real-time rendering rate and accurate terrain representations would have to be made. Likewise, computational tasks involved in terrain-related calculations such as viewshed analysis are highly computational intensive and are traditionally performed at a non-interactive rate. The first contribution of the research involves using GPU ray tracing, a rendering approach, conventionally not employed in the simulation community in favor of rasterization, to achieve accurate visualization and improved understanding of urban terrain. The efficiency of using GPU ray tracing is demonstrated in two areas, namely, in depicting complex, large scale terrain and in visualizing viewshed terrain effects at interactive rate. Another contribution entails designing a novel approach to create an efficient and real-time mapping system. The solution achieves updating and visualizing terrain textures using 2D geo-referenced imageries for enhanced situational awareness. Fusing myriad of multi-view 2D inputs spatially for a complex 3D urban scene typically involves a large number of computationally demanding tasks such as image registrations, mosaickings and texture mapping. Current state of the art solutions essentially belongs to two groups. Each strives to either provide near real-time situational pictures in 2D or off-line complex 3D reconstructions for subsequent usages. The solution proposed in this research relies on using prior constructed synthetic terrains as backdrops to be updated with real-time geo-referenced images. The solution achieves speed in fusing information in 3D. Mapping geo-referenced images spatially in 3D puts them into context. It aids in conveying spatial relationships among the data. Prototypes to evaluate the effectiveness of the aforementioned techniques are also implemented. 
The benefits of augmenting situational displays with viewshed analysis and real-time geo-referenced images are also evaluated with respect to enhancing the user's situational awareness. Preliminary results from user evaluation studies demonstrate the usefulness of the techniques in enhancing operators' performance, situational awareness, and understanding.
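The viewshed analysis described in this abstract can be illustrated with a simple CPU sketch: for each grid cell, march a ray from the observer and compare elevation angles along the way. This is a minimal, assumed illustration of the general viewshed idea, not the dissertation's GPU ray-tracing implementation; all names and parameters here are hypothetical.

```python
import numpy as np

def viewshed(elev, obs_row, obs_col, obs_height=1.8):
    """Line-of-sight viewshed on a regular elevation grid (illustrative sketch).

    For every cell we sample the terrain between the observer and the target,
    tracking the maximum elevation angle (as a tangent) seen so far; the
    target is visible only if its own angle is not below that maximum.
    """
    rows, cols = elev.shape
    eye = elev[obs_row, obs_col] + obs_height  # observer eye height above terrain
    visible = np.zeros((rows, cols), dtype=bool)
    visible[obs_row, obs_col] = True
    for r in range(rows):
        for c in range(cols):
            if (r, c) == (obs_row, obs_col):
                continue
            dr, dc = r - obs_row, c - obs_col
            steps = max(abs(dr), abs(dc))
            max_tan = -np.inf
            # Sample intermediate cells along the ray from observer to target.
            for s in range(1, steps):
                rr = obs_row + round(dr * s / steps)
                cc = obs_col + round(dc * s / steps)
                d = np.hypot(rr - obs_row, cc - obs_col)
                max_tan = max(max_tan, (elev[rr, cc] - eye) / d)
            d = np.hypot(dr, dc)
            visible[r, c] = (elev[r, c] - eye) / d >= max_tan
    return visible
```

A GPU version would evaluate the same per-cell ray march in parallel, which is what makes interactive-rate viewshed display feasible.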
- Date Issued
- 2013
- Identifier
- CFE0005115, ucf:50757
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005115
- Title
- Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality.
- Creator
-
Xiong, Yiyan, Hughes, Charles, Pattanaik, Sumanta, Laviola II, Joseph, Moshell, Michael, University of Central Florida
- Abstract / Description
-
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation, and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be controllable by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following respects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK.
Second, color image, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its texture map. The whole modeling process takes only a few seconds, and the resulting human model resembles the real person: the geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
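The ShortStraw-style corner finding that underlies the contour segmentation can be sketched as follows. Each point on an evenly resampled contour gets a "straw" (the chord length between its neighbors a fixed window away); near a corner the chord cuts across the bend and the straw shortens, so local straw minima below a threshold mark corners. This is a minimal illustration of the general ShortStraw idea, not the dissertation's IStraw algorithm; the window size and threshold ratio are assumptions.

```python
import numpy as np

def shortstraw_corners(points, window=3, threshold_ratio=0.95):
    """ShortStraw-style corner detection on a resampled 2D contour (sketch).

    Assumes `points` is already resampled at roughly equal spacing.
    Returns indices of points whose straw is a local minimum below
    a fraction of the median straw length.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    straws = np.full(n, np.inf)
    for i in range(window, n - window):
        # Chord between the neighbors `window` samples away on each side.
        straws[i] = np.linalg.norm(pts[i + window] - pts[i - window])
    threshold = np.median(straws[window:n - window]) * threshold_ratio
    corners = []
    for i in range(window, n - window):
        is_local_min = straws[i] == min(straws[max(0, i - window):i + window + 1])
        if straws[i] < threshold and is_local_min:
            corners.append(i)
    return corners
```

On an L-shaped contour sampled at unit spacing, the single bend point is reported as the only corner; a contour segmentation pass would then cut the contour at such indices before matching each segment against the template's anchor sets.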
- Date Issued
- 2014
- Identifier
- CFE0005277, ucf:50543
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005277