Current Search: wavelets
-
-
Title
-
WAVELETS IN REAL-TIME RENDERING.
-
Creator
-
Sun, Weifeng, Mukherjee, Amar, University of Central Florida
-
Abstract / Description
-
Interactively simulating the visual appearance of natural objects under natural illumination is a fundamental problem in computer graphics. 3D computer games, geometry modeling, training and simulation, electronic commerce, visualization, lighting design, digital libraries, geographical information systems, and economic and medical image processing are typical candidate applications. Recent advances in graphics hardware have enabled real-time rasterization of complex scenes under artificial lighting environments. Meanwhile, pre-computation-based soft shadow algorithms have proven effective under low-frequency lighting environments. Under the most practical and popular all-frequency natural lighting environment, however, real-time rendering of dynamic scenes remains a challenging problem. In this dissertation, we propose a systematic approach to render dynamic glossy objects under a general all-frequency lighting environment. In our framework, lighting integration is reduced to two rather basic mathematical operations: efficiently computing the multi-function product and the product integral. The main contribution of our work is a novel mathematical representation and analysis of the multi-function product and product integral in the wavelet domain. We show that the multi-function product integral in the primal domain is equivalent to a summation of products of basis coefficients and integral coefficients. In the dissertation, we give a novel Generalized Haar Integral Coefficient Theorem. We also present a set of efficient algorithms to compute the multi-function product and product integral. We demonstrate practical applications of these algorithms in the interactive rendering of dynamic glossy objects under distant, time-variant, all-frequency environment lighting with arbitrary view conditions. At each vertex, the shading integral is formulated as the product integral of multiple operand functions. By approximating operand functions in the wavelet domain, we demonstrate rendering dynamic glossy scenes interactively, orders of magnitude faster than previous work. As an important enhancement to the popular Pre-computation Based Radiance Transfer (PRT) approach, we present a novel Just-in-time Radiance Transfer (JRT) technique and demonstrate its application in real-time realistic rendering of dynamic all-frequency shadows under a general lighting environment. Our work is a significant step towards real-time rendering of arbitrary scenes under general lighting environments. It is also of great importance to general numerical analysis and signal processing.
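A toy illustration of the wavelet-domain product integral: because the orthonormal Haar transform preserves inner products (Parseval), the integral of a product of piecewise-constant functions can be evaluated as a sum of products of their Haar coefficients. This is a minimal sketch of the general idea, not the dissertation's Generalized Haar Integral Coefficient Theorem; the two sample functions are arbitrary illustrative choices.

```python
import math

def haar(v):
    """Orthonormal Haar transform of a list whose length is a power of two."""
    if len(v) == 1:
        return list(v)
    avg = [(v[i] + v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    return haar(avg) + det  # coarse coefficients first, finest details last

# Two piecewise-constant functions on [0, 1), sampled on 8 dyadic intervals.
f = [0.5, 1.0, 2.0, 1.5, 0.0, -1.0, 0.5, 2.5]
g = [1.0, 1.0, 0.5, 0.5, 2.0, 2.0, -1.0, -1.0]
n = len(f)

# Product integral in the primal domain: (1/n) * sum of f_i * g_i.
primal = sum(a * b for a, b in zip(f, g)) / n

# Same integral from wavelet coefficients, since the transform is orthogonal.
F, G = haar(f), haar(g)
wavelet = sum(a * b for a, b in zip(F, G)) / n

assert abs(primal - wavelet) < 1e-12
```

Because the transform is orthogonal, the equality holds for any pair of such functions; the wavelet-domain form becomes advantageous when the coefficient vectors are sparse.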
-
Date Issued
-
2006
-
Identifier
-
CFE0001495, ucf:47079
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001495
-
-
Title
-
DETECTION OF THE R-WAVE IN ECG SIGNALS.
-
Creator
-
Valluri, Sasanka, Weeks, Arthur, University of Central Florida
-
Abstract / Description
-
This thesis provides a new approach for detecting R-waves in the ECG signal and generating the corresponding R-wave impulses, with the delay between the original R-waves and the R-wave impulses being less than 100 ms. The algorithm was implemented in Matlab and tested with good results against 90 different ECG recordings from the MIT-BIH database. The Discrete Wavelet Transform (DWT) forms the heart of the algorithm, providing a multi-resolution analysis of the ECG signal. The wavelet transform decomposes the ECG signal into frequency scales where the ECG characteristic waveforms are indicated by zero crossings. The adaptive threshold algorithms discussed in this thesis search for valid zero crossings which characterize the R-waves and also reject Premature Ventricular Contractions (PVCs). The adaptive threshold algorithms allow the decision thresholds to adjust to changes in signal quality and eliminate the need for manual adjustment when changing from patient to patient. The delay between the R-waves in the original ECG signal and the R-wave impulses obtained from the algorithm was found to be less than 100 ms.
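The thesis's zero-crossing and adaptive-threshold machinery is considerably more involved, but the core idea — sharp beats stand out in the fine-scale wavelet detail coefficients, and a data-driven threshold separates them from the background — can be sketched as follows. The synthetic signal, the spike positions, and the median-based threshold rule are illustrative assumptions, not the algorithm from the thesis.

```python
import math

def haar_details(x):
    """Finest-scale detail coefficients of a one-level Haar transform."""
    return [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]

# Synthetic "ECG": a slow baseline wave plus sharp spikes (stand-ins for
# R-waves), placed at even samples so each falls inside one Haar pair.
n = 256
spikes = [30, 110, 190]
signal = [0.1 * math.sin(2 * math.pi * t / 64) for t in range(n)]
for p in spikes:
    signal[p] += 1.0

d = haar_details(signal)

# Adaptive threshold: a multiple of the median absolute detail magnitude,
# so it tracks the background level instead of being hand-tuned.
mags = sorted(abs(c) for c in d)
threshold = 8 * mags[len(mags) // 2]

detected = [2 * i for i, c in enumerate(d) if abs(c) > threshold]
assert detected == spikes
```

The median-based rule adapts automatically: if the baseline amplitude changes, the threshold moves with it, which is the point of the thesis's patient-independent thresholds.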
-
Date Issued
-
2005
-
Identifier
-
CFE0000498, ucf:46369
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000498
-
-
Title
-
Functional Data Analysis and its application to cancer data.
-
Creator
-
Martinenko, Evgeny, Pensky, Marianna, Tamasan, Alexandru, Swanson, Jason, Richardson, Gary, University of Central Florida
-
Abstract / Description
-
The objective of the current work is to develop novel procedures for the analysis of functional data and to apply them to the investigation of gender disparity in survival of lung cancer patients. In particular, we use the time-dependent Cox proportional hazards model where the clinical information is incorporated via time-independent covariates, and the current age is modeled using its expansion over wavelet basis functions. We developed computer algorithms and applied them to a data set derived from the Florida Cancer Data depository (all personal information which would allow patients to be identified was eliminated). We also studied the problem of estimation of a continuous matrix-variate function of low rank. We constructed an estimator of such a function using its basis expansion and subsequent solution of an optimization problem with a Schatten-norm penalty. We derive an oracle inequality for the constructed estimator, study its properties via simulations, and apply the procedure to the analysis of Dynamic Contrast medical imaging data.
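The building block shared by both parts of this abstract — representing an unknown function by a wavelet basis expansion and keeping only the significant coefficients — can be sketched in a few lines. The target curve and the numbers of retained coefficients are illustrative choices, not taken from the dissertation.

```python
import math

def haar(v):
    """Orthonormal Haar transform (length a power of two)."""
    if len(v) == 1:
        return list(v)
    avg = [(v[i] + v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    return haar(avg) + det

def ihaar(c):
    """Inverse of haar(): rebuild samples from coarse-to-fine coefficients."""
    if len(c) == 1:
        return list(c)
    half = len(c) // 2
    avg, det = ihaar(c[:half]), c[half:]
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

# Sample a smooth "age effect" curve on a dyadic grid.
n = 64
f = [math.exp(-((t / n - 0.4) ** 2) / 0.02) for t in range(n)]

def approx_error(k):
    """L2 error after keeping only the k largest-magnitude Haar coefficients."""
    c = haar(f)
    keep = set(sorted(range(n), key=lambda i: -abs(c[i]))[:k])
    g = ihaar([c[i] if i in keep else 0.0 for i in range(n)])
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)) / n)

# A handful of coefficients already captures the smooth curve well.
assert approx_error(16) < approx_error(4) < approx_error(1)
```

Thresholding the expansion this way is what makes wavelet-based estimators adaptive: smooth stretches of the function need few coefficients, while localized features claim the rest.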
-
Date Issued
-
2014
-
Identifier
-
CFE0005377, ucf:50447
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005377
-
-
Title
-
PHASE-SHIFTING HAAR WAVELETS FOR IMAGE-BASED RENDERING APPLICATIONS.
-
Creator
-
Alnasser, Mais, Foroosh, Hassan, University of Central Florida
-
Abstract / Description
-
In this thesis, we establish the research background necessary for tackling the problem of phase-shifting in the wavelet transform domain. Solving this problem is the key to reducing the redundancy and huge storage requirements of Image-Based Rendering (IBR) applications that utilize wavelets. Image-based methods for rendering dynamic glossy objects do not truly scale to all possible frequencies and high sampling rates without trading storage, glossiness, or computational time while varying both lighting and viewpoint. This is because current approaches are limited to precomputed radiance transfer (PRT), which is prohibitively expensive in terms of memory when both lighting and viewpoint variation are required together with high sampling rates for high-frequency lighting of glossy material. At the root of this problem is the lack of a closed-form run-time solution to the nontrivial problem of rotating wavelets, which we solve in this thesis. We specifically target Haar wavelets, which provide the most efficient solution to the triple-product integral, which in turn is fundamental to solving the environment lighting problem. The problem is divided into three main steps, each of which provides several key theoretical contributions. First, we derive closed-form expressions for linear phase-shifting in the Haar domain for one-dimensional signals, which can be generalized to N-dimensional signals due to separability. Second, we derive closed-form expressions for linear phase-shifting of two-dimensional signals projected using the non-separable Haar transform. For both cases, we show that the coefficients of the shifted data can be computed solely from the coefficients of the original data. We also derive closed-form expressions for non-integer shifts, which have not been reported before. As an application of these results, we apply the new formulae to image shifting, rotation, and interpolation, and demonstrate the superiority of the proposed solutions over existing methods. In the third step, we establish a solution for non-linear phase-shifting of two-dimensional non-separable Haar-transformed signals, which is directly applicable to the original problem of image-based rendering. Our solution is the first attempt to provide an analytic solution to the difficult problem of rotating wavelets in the transform domain.
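Why phase-shifting in the Haar domain is nontrivial can be seen numerically: the Haar transform is shift-variant, so the coefficients of a shifted signal are not simply a shifted copy of the original coefficients. The sketch below demonstrates this by computing the shifted signal's coefficients via the naive route (shift in the primal domain, then re-transform) that the thesis's closed-form expressions are designed to avoid; the test signal is an arbitrary illustrative choice.

```python
import math

def haar(v):
    """Orthonormal Haar transform (length a power of two)."""
    if len(v) == 1:
        return list(v)
    avg = [(v[i] + v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    return haar(avg) + det

def rotate(v, k):
    """Circularly shift a list by k samples."""
    return v[k:] + v[:k]

s = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
shifted_coeffs = haar(rotate(s, 1))   # coefficients of the shifted signal
orig_coeffs = haar(s)

# Shift-variance: no circular shift of the original coefficient vector
# reproduces the coefficients of the shifted signal.
matches = [k for k in range(len(s))
           if all(abs(a - b) < 1e-9
                  for a, b in zip(shifted_coeffs, rotate(orig_coeffs, k)))]
assert matches == []
```

A closed-form shift rule must therefore mix coefficients across positions and scales, which is exactly what the derived expressions provide.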
-
Date Issued
-
2008
-
Identifier
-
CFE0002214, ucf:47882
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002214
-
-
Title
-
AN ANALOGY BASED COSTING SYSTEM FOR INJECTION MOLDS BASED UPON GEOMETRY SIMILARITY WITH WAVELETS.
-
Creator
-
Hillsman, Cyrus, Wang, Yan, University of Central Florida
-
Abstract / Description
-
The injection molding industry is large and diversified. However, there is no universally accepted way to bid molds, despite the fact that the mold and related design comprise 50% of the total cost of an injection-molded part over its lifetime. This is due both to the structure of the industry and to technical difficulties in developing an automated and practical cost estimation system. The technical challenges include the lack of a common data format for both parts and molds; the comprehensive consideration of data about a wide variety of mold types, designs, complexities, numbers of cavities, and other factors that directly affect cost; and the robustness of estimation under variations in build time and cost. In this research, we propose a new mold cost estimation approach based upon clustered features of parts. Geometry similarity is used to estimate the complexity of a mold from a 2D image with one orthographic view of the injection-molded part. Wavelet descriptors of boundaries, as well as other inherent shape properties such as size and number of boundaries, are used to describe the complexity of the part. Regression models are then built to predict costs. In addition to mean estimates, prediction intervals are calculated to support risk management.
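One common way to build wavelet descriptors of a closed boundary is to sample its centroid-distance signature, normalize out scale, and keep the coarse wavelet coefficients, so that geometrically similar outlines yield similar descriptors. The signature, the sample count, and the number of retained coefficients below are illustrative assumptions, not the descriptors defined in the thesis.

```python
import math

def haar(v):
    """Orthonormal Haar transform (length a power of two)."""
    if len(v) == 1:
        return list(v)
    avg = [(v[i] + v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    det = [(v[i] - v[i + 1]) / math.sqrt(2) for i in range(0, len(v), 2)]
    return haar(avg) + det

def descriptor(boundary, k=8):
    """Coarse Haar coefficients of the scale-normalized centroid-distance signature."""
    cx = sum(p[0] for p in boundary) / len(boundary)
    cy = sum(p[1] for p in boundary) / len(boundary)
    sig = [math.hypot(x - cx, y - cy) for x, y in boundary]
    mean = sum(sig) / len(sig)
    return haar([r / mean for r in sig])[:k]

# An ellipse-like boundary sampled at 32 points, and the same shape scaled 3x.
shape = [(2 * math.cos(2 * math.pi * i / 32), math.sin(2 * math.pi * i / 32))
         for i in range(32)]
scaled = [(3 * x, 3 * y) for x, y in shape]

d1, d2 = descriptor(shape), descriptor(scaled)
assert all(abs(a - b) < 1e-9 for a, b in zip(d1, d2))  # scale-invariant
```

Descriptors like these can then be fed, alongside size and boundary counts, into the regression models the abstract describes.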
-
Date Issued
-
2009
-
Identifier
-
CFE0002866, ucf:48041
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002866
-
-
Title
-
Categorical range reporting in 2D using Wavelet tree.
-
Creator
-
Kanthareddy Sumithra, Swathi, Valliyil Thankachan, Sharma, Sundaram, Kalpathy, Jha, Sumit Kumar, University of Central Florida
-
Abstract / Description
-
This research optimizes space and bounds the reporting time by the output size in categorical range reporting of points within a given query rectangle Q in two dimensions, using wavelet trees and range counting. Consider a set S consisting of n points in two dimensions. An orthogonal range reporting query with rectangle Q = [a,b] x [c,d] on S asks to report the set of points in S that fall within the query rectangle Q; the time taken to report these points depends on the output size. Categorical range reporting is an extension of orthogonal range reporting in which each point (x_i, y_i) in S is associated with a category c_i drawn from an alphabet of size σ, and the query is to report each distinct category occurring within the query region [a,b] x [c,d] exactly once. In this paper, we present a new solution to this problem using wavelet trees. The points of S, together with their categories, are stored in a wavelet tree structure, which consists of bitmaps over these categories. To report the categories in a given query rectangle Q, rank and select operations are applied to the wavelet tree. The structure occupies O(n log σ) space, and the query time is O(k log n log σ), where k is the output size. Notice that the new result is more space-efficient when log σ = O(log n). The study provides a new and efficient way of storing large data sets while bounding the query time by the output size k.
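The distinct-reporting mechanism can be sketched in one dimension: a wavelet tree over the category sequence uses per-node rank information to narrow the query interval as it descends, visiting only branches that contain at least one symbol, so each distinct category is reported exactly once. This is a simplified illustration (plain prefix-count arrays instead of succinct rank/select bitmaps, and one dimension instead of the 2D rectangle queries of the thesis).

```python
class WaveletTree:
    """Wavelet tree over integer categories in [lo, hi); reports distinct-in-range."""

    def __init__(self, seq, lo, hi):
        self.lo, self.hi = lo, hi
        if hi - lo == 1:
            return  # leaf: every symbol stored here equals lo
        mid = (lo + hi) // 2
        # prefix[i] = how many of the first i symbols go to the right child
        self.prefix = [0]
        for x in seq:
            self.prefix.append(self.prefix[-1] + (x >= mid))
        self.left = WaveletTree([x for x in seq if x < mid], lo, mid)
        self.right = WaveletTree([x for x in seq if x >= mid], mid, hi)

    def distinct(self, l, r):
        """Sorted distinct categories in positions [l, r) of the original sequence."""
        if l >= r:
            return []                       # empty range: prune this branch
        if self.hi - self.lo == 1:
            return [self.lo]                # leaf: report its single category once
        ones_l, ones_r = self.prefix[l], self.prefix[r]
        return (self.left.distinct(l - ones_l, r - ones_r)
                + self.right.distinct(ones_l, ones_r))

categories = [3, 1, 0, 3, 2, 1, 0]
wt = WaveletTree(categories, 0, 4)
assert wt.distinct(1, 5) == [0, 1, 2, 3]    # distinct categories at positions 1..4
assert wt.distinct(0, 2) == [1, 3]
```

Each reported category costs one root-to-leaf walk of depth O(log σ), which is where the output-sensitive O(k · log) query bound comes from.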
-
Date Issued
-
2018
-
Identifier
-
CFE0007204, ucf:52275
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007204
-
-
Title
-
MODELING AND ANALYSIS OF THE EDS MAGLEV SYSTEM WITH THE HALBACH MAGNET ARRAY.
-
Creator
-
Ko, Wonsuk, Ham, Chan, University of Central Florida
-
Abstract / Description
-
A magnetic field analysis based on the wavelet transform is performed. The Halbach array magnetic field has been studied using many methods, such as the magnetic scalar potential, the magnetic vector potential, Fourier analysis, and finite element methods, but these analyses cannot identify the transient oscillation at the beginning stage of levitation. Here, the wavelet transform is used to analyze the transient oscillatory response of an EDS Maglev system. The proposed scheme explains the under-damped dynamics that result from the cradle's dynamic response to the irregular distribution of the magnetic field. It suggests that this EDS Maglev system, which responds to a vertical repulsive force, can be subject to such instability at the beginning stage of low levitation height, and the proposed method is useful for analyzing instabilities at that stage. A controller for the EDS Maglev system with the Halbach magnet array is then designed, covering both the beginning stage of levitation and the period after the defined levitation height is reached. For the control design, two stages are distinguished: before the object reaches a stable position (stage I) and after it has reached a stable position (stage II), where the stable position can be referred to as the nominal height. In stage I, a robust controller is investigated to achieve the nominal height. In stage II, both translational and rotational motions are considered in the control design, and damping control as well as LQR control are applied to maintain system stability. The proposed method is helpful for understanding the system dynamics and achieving system stability.
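The reason a wavelet analysis can localize a transient that a global Fourier spectrum would smear over all time can be sketched as follows: the energy of fine-scale detail coefficients, summed over short time windows, is large only while the oscillation is under way. The damped-oscillation test signal is a generic stand-in, not a Maglev simulation.

```python
import math

# A transient: a damped oscillation at the start, near-zero afterwards.
n = 256
x = [math.exp(-t / 20) * math.sin(2 * math.pi * t / 8) for t in range(n)]

# Finest-scale Haar detail coefficients localize the oscillation in time.
details = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, n, 2)]

# Energy of the details in consecutive windows of 16 coefficients each.
window = 16
energy = [sum(c * c for c in details[i:i + window])
          for i in range(0, len(details), window)]

# The transient shows up as energy concentrated in the first window; a global
# Fourier magnitude spectrum would report the same frequencies with no timing.
assert energy[0] == max(energy)
assert energy[0] > 100 * energy[-1]
```

In the dissertation's setting, the analogous time-localized energy reveals the under-damped oscillation at the start of levitation that the Fourier and FEM analyses miss.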
-
Date Issued
-
2007
-
Identifier
-
CFE0001697, ucf:47196
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001697
-
-
Title
-
Improved Interpolation in SPH in Cases of Less Smooth Flow.
-
Creator
-
Brun, Oddny, Wiegand, Rudolf, Pensky, Marianna, University of Central Florida
-
Abstract / Description
-
We introduce a method presented in Information Field Theory (IFT) [Abramovich et al., 2007] to improve interpolation in Smoothed Particle Hydrodynamics (SPH) in cases of less smooth flow. The method makes use of wavelet theory combined with B-splines for interpolation. The idea is to identify any jumps a function may have and then reconstruct the smoother segments between the jumps. The results of our work demonstrated a superior capability, when compared to a particularly challenging SPH application, to better conserve jumps and more accurately interpolate the smoother segments of the function. The results also demonstrated increased computational efficiency with limited loss in accuracy, as the number of multiplications and the execution time were reduced. Similar benefits were observed for functions with spikes analyzed by the same method. Lesser, but similar, effects were also demonstrated for real-life data sets of a less smooth nature. SPH is widely used in modeling and simulation of the flow of matter. SPH presents advantages compared to grid-based methods in terms of both computational efficiency and accuracy, in particular when dealing with less smooth flow. The results we achieved through our research improve the model in cases of less smooth flow, in particular flow with jumps and spikes. Up until now, such improvements have been sought through modifications to the models' physical equations and/or kernel functions, and have only partially been able to address the issue. This research, by introducing wavelet theory and IFT to a field of science that, to our knowledge, does not currently utilize these methods, lays the groundwork for future research ideas to benefit SPH. Among those ideas are further development of criteria for wavelet selection, use of smoothing splines for SPH interpolation, and incorporation of Bayesian field theory. Improving the method's accuracy, stability, and efficiency under more challenging conditions, such as flow with jumps and spikes, will benefit applications in a wide range of sciences. In medicine alone, such improvements will further increase real-time diagnostics, treatment, and training opportunities, because jumps and spikes are often the characteristics of significant physiological and anatomical conditions such as pulsatile blood flow, peristaltic intestine contractions, and the appearance of organs' edges in imaging.
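The "find the jumps, then treat each smooth segment separately" idea can be sketched with a one-level Haar transform: an interior jump produces an outsized detail coefficient, and interpolation restricted to each side of it avoids smearing the discontinuity. The piecewise-linear test function is an illustrative assumption; the thesis's IFT and B-spline machinery is far richer.

```python
import math

# Piecewise-linear data with a jump between samples 16 and 17.
n = 32
f = [0.1 * t + (5.0 if t >= 17 else 0.0) for t in range(n)]

# One-level Haar details; the pair straddling the jump dominates.
details = [(f[i] - f[i + 1]) / math.sqrt(2) for i in range(0, n, 2)]
jump_pair = max(range(len(details)), key=lambda i: abs(details[i]))
jump_at = 2 * jump_pair + 1          # the jump lies inside pair (2i, 2i+1)
assert jump_at == 17

# Interpolating each smooth segment separately reconstructs it exactly here,
def midpoint(segment, t):
    return (segment[t - 1] + segment[t + 1]) / 2

left, right = f[:jump_at], f[jump_at:]
assert abs(midpoint(left, 8) - left[8]) < 1e-9
assert abs(midpoint(right, 8) - right[8]) < 1e-9
# ...whereas a naive midpoint straight across the jump smears it badly:
assert abs((f[16] + f[18]) / 2 - f[17]) > 2.0
```

One caveat of this toy version: a jump that falls between two Haar pairs is invisible at this single scale and offset, which is one reason practical schemes examine several scales (or shifted transforms).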
-
Date Issued
-
2016
-
Identifier
-
CFE0006446, ucf:51451
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006446
-
-
Title
-
Super Resolution of Wavelet-Encoded Images and Videos.
-
Creator
-
Atalay, Vildan, Foroosh, Hassan, Bagci, Ulas, Hughes, Charles, Pensky, Marianna, University of Central Florida
-
Abstract / Description
-
In this dissertation, we address the multiframe super resolution reconstruction problem for wavelet-encoded images and videos. The goal of multiframe super resolution is to obtain one or more high-resolution images by fusing a sequence of degraded or aliased low-resolution images of the same scene. Since the low-resolution images may be unaligned, a registration step is required before super resolution reconstruction. Therefore, we first explore in-band (i.e., wavelet-domain) image registration, and then investigate super resolution. Our motivation for analyzing the image registration and super resolution problems in the wavelet domain is the growing trend of wavelet-encoded imaging and of wavelet encoding for image/video compression. Due to drawbacks of the widely used discrete cosine transform in image and video compression, a considerable amount of literature is devoted to wavelet-based methods. However, since wavelets are shift-variant, existing methods cannot utilize wavelet subbands efficiently. To overcome this drawback, we establish and explore the direct relationship between the subbands under a translational shift, for image registration and super resolution. We then employ our devised in-band methodology in a motion-compensated video compression framework to demonstrate the effective usage of wavelet subbands. Super resolution can also be used as a post-processing step in video compression to decrease the size of the video files to be compressed, with downsampling added as a pre-processing step. Therefore, we present a video compression scheme that utilizes super resolution to reconstruct the high-frequency information lost during downsampling. In addition, super resolution is a crucial post-processing step for satellite imagery, due to the fact that it is hard to update imaging devices after a satellite is launched. Thus, we also demonstrate the usage of our devised methods in enhancing the resolution of pansharpened multispectral images.
-
Date Issued
-
2017
-
Identifier
-
CFE0006854, ucf:51744
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006854
-
-
Title
-
Nonparametric and Empirical Bayes Estimation Methods.
-
Creator
-
Benhaddou, Rida, Pensky, Marianna, Han, Deguang, Swanson, Jason, Ni, Liqiang, University of Central Florida
-
Abstract / Description
-
In the present dissertation, we investigate two different nonparametric models: the empirical Bayes model and the functional deconvolution model. In the case of nonparametric empirical Bayes estimation, we carry out a complete minimax study. In particular, we derive minimax lower bounds for the risk of the nonparametric empirical Bayes estimator for a general conditional distribution; this result has not been obtained previously. In order to attain optimal convergence rates, we use a wavelet-series-based empirical Bayes estimator constructed in Pensky and Alotaibi (2005). We propose an adaptive version of this estimator using Lepski's method and show that it attains optimal convergence rates. The theory is supplemented by numerous examples. Our study of the functional deconvolution model expands the results of Pensky and Sapatinas (2009, 2010, 2011) to the case of estimating an $(r+1)$-dimensional function and the case of dependent errors. In both cases, we derive minimax lower bounds for the integrated square risk over a wide set of Besov balls and construct adaptive wavelet estimators that attain those optimal convergence rates. In particular, in the case of estimating a periodic $(r+1)$-dimensional function, we show that by choosing Besov balls of mixed smoothness we can avoid the "curse of dimensionality" and hence obtain higher-than-usual convergence rates when $r$ is large. The study of deconvolution of a multivariate function is motivated by seismic inversion, which can be reduced to the solution of noisy two-dimensional convolution equations that allow one to draw inference on underground layer structures along chosen profiles. The common practice in seismology is to recover layer structures separately for each profile and then to combine the derived estimates into a two-dimensional function. By studying the two-dimensional version of the model, we demonstrate that this strategy usually leads to estimators which are less accurate than the ones obtained as two-dimensional functional deconvolutions. Finally, we consider a multichannel deconvolution model with long-range dependent Gaussian errors. We do not limit our consideration to a specific type of long-range dependence; rather, we assume that the eigenvalues of the covariance matrix of the errors are bounded above and below. We show that the convergence rates of the estimators depend on a balance between the smoothness parameters of the response function, the smoothness of the blurring function, the long memory parameters of the errors, and how the total number of observations is distributed among the channels.
-
Date Issued
-
2013
-
Identifier
-
CFE0004814, ucf:49737
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004814
-
-
Title
-
A STUDY OF EQUATORIAL IONOSPHERIC VARIABILITY USING SIGNAL PROCESSING TECHNIQUES.
-
Creator
-
Wang, Xiaoni, Eastes, Richard, University of Central Florida
-
Abstract / Description
-
The dependence of the equatorial ionosphere on solar irradiances and geomagnetic activity is studied in this dissertation using signal processing techniques. Statistical time series, digital signal processing, and wavelet methods are applied to study the ionospheric variations. The ionospheric data used are the Total Electron Content (TEC) and the critical frequency of the F2 layer (foF2). Solar irradiance data are from recent satellites: the Student Nitric Oxide Explorer (SNOE) satellite and the Thermosphere Ionosphere Mesosphere Energetics Dynamics (TIMED) satellite. The Disturbance Storm-Time (Dst) index is used as a proxy for geomagnetic activity in the equatorial region. The results are summarized as follows. (1) In short-term variations (< 27 days), the solar irradiances of the previous three days have a significant correlation with the present day's TEC and may contribute 18% of the total variation in the TEC. The 3-day delay between solar irradiances and TEC suggests the effects of neutral densities on the ionosphere. The correlations between solar irradiances and TEC are significantly higher than those using the F10.7 flux, a conventional proxy for the short-wavelength band of solar irradiances. (2) For variations < 27 days, solar soft X-rays show similar or higher correlations with the ionospheric electron densities than the Extreme Ultraviolet (EUV). The correlations between solar irradiances and foF2 decrease from morning (0.5) to afternoon (0.1). (3) Geomagnetic activity plays an important role in the ionosphere for short-term variations < 10 days. The average correlation between TEC and Dst is 0.4 at the 2-3, 3-5, 5-9, and 9-11 day scales, which is higher than that between foF2 and Dst. The correlations between TEC and Dst increase from morning to afternoon. Moderate/quiet geomagnetic activity plays a distinct role in these short-term variations of the ionosphere (~0.3 correlation).
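A delay such as the 3-day lag reported above is typically found by correlating the driver and the response at a range of lags and locating the maximum. The sketch below does this with a synthetic pair of series in which the response is, by construction, the driver delayed by three days; the series are illustrative, not ionospheric data.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Synthetic driver (a 27-day solar-rotation-like cycle plus a faster term)
# and a response that follows it with a 3-day delay.
irradiance = [math.sin(2 * math.pi * t / 27) + 0.3 * math.sin(2 * math.pi * t / 11)
              for t in range(200)]
tec = [irradiance[t - 3] if t >= 3 else 0.0 for t in range(200)]

# Correlate TEC today with irradiance `lag` days earlier, for lags 0..7.
def lagged_corr(lag):
    return pearson(irradiance[8 - lag:200 - lag], tec[8:200])

best_lag = max(range(8), key=lagged_corr)
assert best_lag == 3
```

Real TEC series carry noise and multiple drivers, so the peak correlation is far below 1, but the lag of the maximum is read off the same way.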
-
Date Issued
-
2007
-
Identifier
-
CFE0001602, ucf:47188
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001602
-
-
Title
-
REAL-TIME CINEMATIC DESIGN OF VISUAL ASPECTS IN COMPUTER-GENERATED IMAGES.
-
Creator
-
Obert, Juraj, Pattanaik, Sumanta, University of Central Florida
-
Abstract / Description
-
Creation of visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal: artists who design the visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to the traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have, until now, been extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.
-
Date Issued
-
2010
-
Identifier
-
CFE0003250, ucf:48559
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003250
-
-
Title
-
High Performance Techniques for Face Recognition.
-
Creator
-
Aldhahab, Ahmed, Mikhael, Wasfy, Atia, George, Jones, W Linwood, Wei, Lei, Elshennawy, Ahmad, University of Central Florida
-
Abstract / Description
-
The identification of individuals using face recognition techniques is a challenging task, owing to variations in facial expression, makeup, rotation, illumination, gestures, etc. Facial images also contain a great deal of redundant information, which negatively affects the performance of a recognition system: the dimensionality and redundancy of the facial features have a direct effect on recognition accuracy. Not all features in the feature vector space are useful; non-discriminating features not only degrade recognition accuracy but also increase computational complexity.

Face recognition has become a popular research topic in computer vision, pattern recognition, and image processing, owing to its widespread applications in security and access control, where an identified individual is granted access to secure areas, personal information, etc. The performance of any recognition system depends on three factors: 1) storage requirements, 2) computational complexity, and 3) recognition rate.

Two families of recognition systems are presented and developed in this dissertation, each consisting of several face recognition systems. Each system contains three main steps: preprocessing, feature extraction, and classification. Preprocessing steps such as cropping, face detection, and dividing the facial image into sub-images reduce the effect of irrelevant information (background) and improve system performance. Either a Neural Network (NN) based classifier or the Euclidean distance is used for classification. Five widely used databases (ORL, YALE, FERET, FEI, and LFW), each containing different facial variations such as lighting conditions, rotations, facial expressions, and facial details, are used to evaluate the proposed systems. The experimental results are analyzed using K-fold Cross Validation (CV).

In family-1, several systems are proposed for face recognition, each employing different integrated tools in the feature extraction step. These tools, the Two-Dimensional Discrete Multiwavelet Transform (2D DMWT), the 2D Radon Transform (2D RT), the 2D or 3D DWT, and Fast Independent Component Analysis (FastICA), are applied to the preprocessed facial images to reduce dimensionality and obtain discriminating features. Each proposed system produces a unique representation and achieves lower storage requirements and better performance than existing methods.

For further facial compression, the second family contains three face recognition systems, each using different integrated tools, Vector Quantization (VQ), the Discrete Cosine Transform (DCT), and the 2D DWT, to obtain a more compact facial representation. In the systems using VQ/2D DCT and VQ/2D DWT, each pose in the databases is represented by one centroid of 4*4*16 dimensions. In the third system, VQ/Facial Part Detection (FPD), each person is represented by four centroids, each of 4*4*16 dimensions. The family-2 systems further reduce data dimensionality compared to the family-1 systems while attaining comparable results. For example, in family-1, the integrated FastICA/2D DMWT tools, applied to different combinations of sub-images in the FERET database with K-fold = 5 (9 poses used in training), reduce the dimensionality of the database by 97.22% and achieve 99% accuracy; in contrast, the VQ/FPD tools in family-2 reduce dimensionality by 99.31% and achieve 97.98% accuracy. In this example, VQ/FPD accomplishes further data compression at the cost of slightly lower accuracy compared to FastICA/2D DMWT. Various experiments and simulations are conducted using MATLAB. The results for both families confirm improvements in storage requirements as well as recognition rates compared to recently reported methods.
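The pipeline this abstract describes, a wavelet transform to shrink the feature vector followed by distance-based classification against stored centroids, can be sketched as below. This is a minimal illustration, not the dissertation's exact configuration: it assumes a single-level 2D Haar DWT (the dissertation uses multiwavelets, Radon transforms, and FastICA as well) and a nearest-centroid Euclidean rule in place of the NN classifier.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT. Returns the approximation subband LL
    and the detail subbands (LH, HL, HH). Assumes even dimensions."""
    # Filter along rows (low-pass / high-pass pairs of adjacent pixels).
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Filter along columns.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def extract_feature(img):
    """Keep only the LL subband as a flattened feature vector,
    reducing dimensionality by a factor of 4 per level."""
    ll, _ = haar_dwt2(img)
    return ll.ravel()

def classify(feature, centroids):
    """Nearest-centroid classification by Euclidean distance;
    returns the index of the closest centroid."""
    dists = [np.linalg.norm(feature - c) for c in centroids]
    return int(np.argmin(dists))
```

Because the Haar filters here are orthonormal, the transform preserves signal energy across the four subbands; discarding the detail subbands keeps the low-frequency content that dominates facial appearance, which is why the LL coefficients alone serve as a compact feature.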
-
Date Issued
-
2017
-
Identifier
-
CFE0006709, ucf:51878
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006709