Current Search: Li, Xin
-
-
Title
-
APPROXIMATION BY BERNSTEIN POLYNOMIALS AT THE POINT OF DISCONTINUITY.
-
Creator
-
Liang, Jie, Li, Xin, University of Central Florida
-
Abstract / Description
-
Chlodovsky showed that if x0 is a point of discontinuity of the first kind of the function f, then the Bernstein polynomials Bn(f; x0) converge to the average of the one-sided limits of f on the right and on the left at the point x0. In 2009, Telyakovskii extended the asymptotic formulas for the deviations of the Bernstein polynomials from differentiable functions at first-kind discontinuity points of the highest derivatives of even order and demonstrated that the same result fails in the odd-order case. Then in 2010, Tonkov found the right formulation and proved the result that was missing in the odd-order case. It turned out that the limit in the odd-order case is related to the jump of the highest derivative. The proofs in these two cases look similar but have many subtle differences, so it is desirable to find out whether there is a unifying principle for treating both cases. In this thesis, we obtain a unified formulation and proof for the asymptotic results of both Telyakovskii and Tonkov and discuss extensions of these results to the case where the highest derivative of the function is only assumed to be bounded at the point under study.
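As a point of reference (standard statements, not quoted from the thesis): for f bounded on [0,1], the Bernstein polynomials are

\[ B_n(f;x) = \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right) \binom{n}{k} x^{k}(1-x)^{n-k}, \]

and Chlodovsky's theorem says that at a first-kind discontinuity point x_0 in (0,1),

\[ \lim_{n\to\infty} B_n(f;x_0) = \frac{f(x_0^{+}) + f(x_0^{-})}{2}. \]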
-
Date Issued
-
2011
-
Identifier
-
CFH0004099, ucf:44790
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004099
-
-
Title
-
ANALYSIS OF KOLMOGOROV'S SUPERPOSITION THEOREM AND ITS IMPLEMENTATION IN APPLICATIONS WITH LOW AND HIGH DIMENSIONAL DATA.
-
Creator
-
Bryant, Donald, Li, Xin, University of Central Florida
-
Abstract / Description
-
In this dissertation, we analyze Kolmogorov's superposition theorem for high dimensions. Our main goal is to explore and demonstrate the feasibility of an accurate implementation of Kolmogorov's theorem. First, based on Lorentz's ideas, we provide a thorough discussion of the proof of the theorem in dimension two and of its numerical implementation. We present computational experiments which demonstrate the feasibility of the theorem in applications of low dimensions (namely, dimensions two and three). Next, we present high-dimensional extensions with complete and detailed proofs and provide an implementation aimed at applications with high dimensionality. The amalgamation of these ideas is evidenced by applications in image (two-dimensional) and video (three-dimensional) representation, content-based image retrieval, video retrieval, de-noising and in-painting, and Bayesian prior estimation of high-dimensional data from the fields of computer vision and image processing.
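For background, the theorem in question (standard statement, not quoted from the dissertation) asserts that every continuous function f on [0,1]^n can be written as

\[ f(x_1,\dots,x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right), \]

with continuous one-variable outer functions \Phi_q and inner functions \phi_{q,p} that do not depend on f; Lorentz's refinement, on which the numerical treatment above builds, allows a single outer function \Phi in place of the family \Phi_q.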
-
Date Issued
-
2008
-
Identifier
-
CFE0002236, ucf:47909
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002236
-
-
Title
-
CONVERGENCE OF THE MEAN SHIFT ALGORITHM AND ITS GENERALIZATIONS.
-
Creator
-
Hu, Ting, Li, Xin, University of Central Florida
-
Abstract / Description
-
Mean shift is an effective iterative algorithm widely used in image analysis tasks such as tracking, image segmentation, smoothing, filtering, and edge detection. It iteratively estimates the modes of the probability density function of a set of sample data points based on the data in a region. Mean shift was invented in 1975, but it was not widely used until the work by Cheng in 1995; after that, it became popular in computer vision. However, the convergence, a key property of any iterative algorithm, was rigorously proved only very recently, and under strong assumptions. In this thesis, the mean shift method is first introduced systematically, and then its convergence is established under more relaxed assumptions. Finally, a generalization of the mean shift method is given for the estimation of the probability density function using generalized multivariate smoothing functions, to meet the needs of more real-life applications.
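To make the iteration concrete, here is a minimal sketch of the classical mean shift update with a Gaussian kernel; the function name, bandwidth, and stopping rule are illustrative assumptions, not taken from the thesis.

    import numpy as np

    # Illustrative sketch of the classical mean shift iteration (Gaussian kernel).
    def mean_shift(points, bandwidth=1.0, tol=1e-6, max_iter=500):
        """Shift every point toward a mode of the kernel density estimate
        of `points` (an (N, d) array), using a Gaussian kernel."""
        modes = points.astype(float)
        for _ in range(max_iter):
            shifted = np.empty_like(modes)
            for i, x in enumerate(modes):
                # Gaussian weights of all samples relative to the current estimate
                w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2.0 * bandwidth ** 2))
                # the mean shift update: move to the weighted mean of the samples
                shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.max(np.linalg.norm(shifted - modes, axis=1)) < tol:
                modes = shifted
                break
            modes = shifted
        return modes

Whether such a sequence of iterates always converges, and under what conditions on the smoothing kernel, is exactly the convergence question the thesis addresses.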
-
Date Issued
-
2011
-
Identifier
-
CFE0004059, ucf:49133
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004059
-
-
Title
-
DETECTION AND APPROXIMATION OF FUNCTION OF TWO VARIABLES IN HIGH DIMENSIONS.
-
Creator
-
Pan, Minzhe, Li, Xin, University of Central Florida
-
Abstract / Description
-
This thesis originates from the deterministic algorithm of DeVore, Petrova, and Wojtaszczyk for the detection and approximation of functions of one variable in high dimensions. We propose a deterministic algorithm for the detection and approximation of functions of two variables in high dimensions.
-
Date Issued
-
2010
-
Identifier
-
CFE0003467, ucf:48933
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003467
-
-
Title
-
In Quest of Bernstein Inequalities: Rational Functions, Askey-Wilson Operator, and Summation Identities for Entire Functions.
-
Creator
-
Puwakgolle Gedara, Rajitha, Li, Xin, Mohapatra, Ram, Ismail, Mourad, Xu, Mengyu, University of Central Florida
-
Abstract / Description
-
The title of the dissertation gives an indication of the material involved, with the connecting thread throughout being the classical Bernstein inequality (and its variants), which provides an estimate of the size of the derivative of a given polynomial on a prescribed set in the complex plane, relative to the size of the polynomial itself on the same set. Chapters 1 and 2 lay the foundation for the dissertation. In Chapter 1, we introduce the notation and terminology that will be used throughout. Also, a brief historical recount is given on the origin of the Bernstein inequality, which dates back to the days of the discovery of the periodic table by the Russian chemist Dmitri Mendeleev. In Chapter 2, we narrow down the contents stated in Chapter 1 to the problems we were interested in working on during the course of this dissertation. We then present a problem formulation, mainly for those results for which solutions or partial solutions are provided in the subsequent chapters. Over the years the Bernstein inequality has been generalized and extended in several directions. In Chapter 3, we establish rational analogues of some Bernstein-type inequalities for restricted zeros and prescribed poles. Our inequalities extend the results for polynomials, especially those which are themselves improved versions of the classical Erdős-Lax and Turán inequalities. In working towards proving our results, we establish some auxiliary results, which may be of interest on their own. Chapters 4 and 5 focus on the research carried out with the Askey-Wilson operator applied to polynomials and entire functions (of exponential type), respectively. In Chapter 4, we first establish a Riesz-type interpolation formula on the interval [-1,1] for the Askey-Wilson operator. As a consequence, a sharp Bernstein inequality and a Markov inequality are obtained when differentiation is replaced by the Askey-Wilson operator. Moreover, an inverse approximation theorem is proved using a Bernstein-type inequality in L^2 space. We conclude this chapter with an overconvergence result which is applied to characterize all q-differentiable functions of Brown and Ismail. Chapter 5 is devoted to an intriguing application of the Askey-Wilson operator. By applying it to the sampling theorem for entire functions of exponential type, we obtain a series representation formula, which we call an extended Boas' formula. Its power in discovering interesting summation formulas, some known and some new, is demonstrated. As another application, we obtain a couple of Bernstein-type inequalities. In the concluding chapter, we state some avenues along which this research can progress.
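For orientation, the classical inequalities referred to above (standard statements): for a polynomial p of degree at most n,

\[ \max_{|z|=1} |p'(z)| \le n \max_{|z|=1} |p(z)| \quad \text{(Bernstein)}, \]

with the Erdős-Lax improvement \max_{|z|=1}|p'(z)| \le \tfrac{n}{2}\max_{|z|=1}|p(z)| when p has no zeros in |z|<1, and the Turán counterpart \max_{|z|=1}|p'(z)| \ge \tfrac{n}{2}\max_{|z|=1}|p(z)| when all zeros of p lie in |z|\le 1.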
-
Date Issued
-
2018
-
Identifier
-
CFE0007237, ucf:52220
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007237
-
-
Title
-
Iteratively Reweighted Least Squares Minimization with Prior Information: A New Approach.
-
Creator
-
Popov, Dmitriy, Li, Xin, Moore, Brian, Mikusinski, Piotr, University of Central Florida
-
Abstract / Description
-
Iteratively reweighted least squares (IRLS) algorithms provide an alternative to the more standard L1-minimization approach in compressive sensing. Daubechies et al. introduced a particularly stable version of an IRLS algorithm and rigorously proved its convergence in 2010. They did not, however, consider the case in which prior information on the support of the sparse domain of the solution is available. In 2009, Miosso et al. proposed an IRLS algorithm that makes use of this information to further reduce the number of measurements required to recover the solution with specified accuracy. Although Miosso et al. obtained a number of simulation results strongly confirming the utility of their approach, they did not rigorously establish the convergence properties of their algorithm. In this paper, we introduce prior information on the support of the sparse domain of the solution into the algorithm of Daubechies et al. We then provide a rigorous proof of the convergence of the resulting algorithm.
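As a concrete illustration of the kind of iteration involved, the sketch below implements a basic IRLS loop for min ||x||_1 subject to Ax = y in the spirit of Daubechies et al., with one plausible way (an assumption for illustration, not necessarily Miosso et al.'s or the thesis's exact rule) of injecting prior support information by not penalizing entries believed to lie in the support; the function name and the epsilon schedule are likewise illustrative.

    import numpy as np

    def irls_sparse(A, y, known_support=None, n_iter=50, eps=1.0):
        """IRLS sketch for  min ||x||_1  subject to  Ax = y  (A is m x n, m < n).
        known_support: optional indices assumed to be in the support; their
        weights are kept tiny so they are effectively not penalized."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]          # least-squares start
        for _ in range(n_iter):
            w = 1.0 / np.sqrt(x ** 2 + eps ** 2)          # smoothed l1 reweighting
            if known_support is not None:
                w[np.asarray(known_support)] = 1e-8       # prior info (assumed form)
            D = np.diag(1.0 / w)
            # weighted least-norm solution of Ax = y under the current weights
            x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
            eps *= 0.9                                    # simple decay; the cited works use data-driven rules
        return x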
-
Date Issued
-
2011
-
Identifier
-
CFE0004154, ucf:49082
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004154
-
-
Title
-
Autoregressive Models.
-
Creator
-
Wade, William, Richardson, Gary, Pensky, Marianna, Li, Xin, University of Central Florida
-
Abstract / Description
-
Consider a sequence of random variables which obeys a first-order autoregressive model with unknown parameter alpha. Under suitable assumptions on the error structure of the model, the limiting distribution of the normalized least squares estimator of alpha is discussed. The choice of the normalizing constant depends on whether alpha is less than one, equal to one, or greater than one in absolute value. In particular, the limiting distribution is normal provided that the absolute value of alpha is less than one, but is a functional of Brownian motion whenever the absolute value of alpha equals one. Some general remarks are made for the case in which the sequence of random variables is a first-order moving average process.
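For reference, the textbook AR(1) setting behind these statements, under the usual i.i.d. mean-zero, finite-variance error assumptions (not quoted from the thesis):

\[ X_t = \alpha X_{t-1} + \varepsilon_t, \qquad \hat{\alpha}_n = \frac{\sum_{t=2}^{n} X_t X_{t-1}}{\sum_{t=2}^{n} X_{t-1}^{2}}, \]

and when |\alpha|<1, \sqrt{n}\,(\hat{\alpha}_n - \alpha) converges in distribution to N(0, 1-\alpha^2), whereas when \alpha = 1 the normalization becomes n and the limit is the Dickey-Fuller functional \bigl(W(1)^2 - 1\bigr)\big/\bigl(2\int_0^1 W(t)^2\,dt\bigr) of a standard Brownian motion W.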
-
Date Issued
-
2012
-
Identifier
-
CFE0004276, ucf:49546
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004276
-
-
Title
-
On the Theory of Zeta-functions and L-functions.
-
Creator
-
Awan, Almuatazbellah, Mohapatra, Ram, Li, Xin, Brennan, Joseph, University of Central Florida
-
Abstract / Description
-
In this thesis we provide a body of knowledge that concerns the Riemann zeta-function and its generalizations in a cohesive manner. In particular, we have studied and mentioned some recent results regarding the Hurwitz and Lerch functions, as well as Dirichlet L-functions. We have also investigated some fundamental concepts related to these functions and their universality properties. In addition, we discuss different formulations of and approaches to the proof of the Prime Number Theorem and the Riemann Hypothesis. These two topics constitute the main theme of this thesis. For the Prime Number Theorem, we provide a thorough discussion that compares and contrasts Norbert Wiener's proof with Newman's short proof. We also relate them to Hadamard's and de la Vallée Poussin's original proofs written in 1896. As far as the Riemann Hypothesis is concerned, we discuss some recent results related to equivalent formulations of the Riemann Hypothesis as well as the Generalized Riemann Hypothesis.
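For context, the central objects (standard definitions):

\[ \zeta(s) = \sum_{n=1}^{\infty} n^{-s} = \prod_{p \text{ prime}} \bigl(1 - p^{-s}\bigr)^{-1}, \qquad \mathrm{Re}\,s > 1, \]

with the Hurwitz and Dirichlet generalizations \zeta(s,a)=\sum_{n\ge 0}(n+a)^{-s} and L(s,\chi)=\sum_{n\ge 1}\chi(n)n^{-s}; the Prime Number Theorem states \pi(x) \sim x/\log x, and the Riemann Hypothesis asserts that every nontrivial zero of \zeta lies on the line \mathrm{Re}\,s = 1/2.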
-
Date Issued
-
2015
-
Identifier
-
CFE0005576, ucf:50268
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005576
-
-
Title
-
Dictionary Learning for Image Analysis.
-
Creator
-
Khan, Muhammad Nazar, Tappen, Marshall, Foroosh, Hassan, Stanley, Kenneth, Li, Xin, University of Central Florida
-
Abstract / Description
-
In this thesis, we investigate the use of dictionary learning for discriminative tasks on natural images. Our contributions can be summarized as follows: 1) We introduce discriminative deviation based learning to achieve principled handling of the reconstruction-discrimination tradeoff that is inherent to discriminative dictionary learning. 2) Since natural images obey a strong smoothness prior, we show how spatial smoothness constraints can be incorporated into the learning formulation by embedding dictionary learning into Conditional Random Field (CRF) learning. We demonstrate that such smoothness constraints can lead to state-of-the-art performance for pixel-classification tasks. 3) Finally, we lay down the foundations of super-latent learning. By treating sparse codes on a CRF as latent variables, dictionary learning can also be performed via the Latent (Structural) SVM formulation for jointly learning a classifier over the sparse codes. The dictionary is treated as a super-latent variable that generates the latent variables.
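Schematically, and only as an assumed illustration of the kind of objective involved (not the thesis's exact formulation), discriminative dictionary learning couples a reconstructive term with a classification loss:

\[ \min_{D,\,W,\,\{\alpha_i\}} \sum_i \Bigl( \|x_i - D\alpha_i\|_2^2 + \lambda\|\alpha_i\|_1 + \gamma\,\ell(y_i, W, \alpha_i) \Bigr), \]

where D is the dictionary, \alpha_i the sparse code of sample x_i, and \ell a discriminative loss on the label y_i; the reconstruction-discrimination tradeoff discussed above is the balance between the first and last terms.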
-
Date Issued
-
2013
-
Identifier
-
CFE0004701, ucf:49844
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004701
-
-
Title
-
Valuation of Over-The-Counter (OTC) Derivatives with Collateralization.
-
Creator
-
Guerrero, Leon, Yong, Jiongmin, Li, Xin, Brennan, Joseph, University of Central Florida
-
Abstract / Description
-
Collateralization in over-the-counter (OTC) derivatives markets has grown rapidly over the past decade, and even faster in the past few years, due to the impact of the recent financial crisis and the particular attention paid to counterparty credit risk in derivatives contracts. The addition of collateralization to such contracts significantly reduces the counterparty credit risk and allows liabilities to be offset in case of default. We study the problem of valuation of OTC derivatives with payoff in a single currency and with a single underlying asset for the cases of zero, partial, and perfect collateralization. We assume the derivative is traded between two default-free counterparties and analyze the impact of collateralization on the fair present value of the derivative. We establish a uniform generalized derivative pricing framework for the three cases of collateralization and show how different approaches to pricing turn out to be consistent. We then generalize the results to include multi-asset and cross-currency arguments, where the underlying and the derivative are in some domestic currency, but the collateral is posted in a foreign currency. We show that the results for the single currency, multi-asset case are consistent with those obtained for the single currency, single asset case.
-
Date Issued
-
2013
-
Identifier
-
CFE0004855, ucf:49688
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004855
-
-
Title
-
Gradient based MRF learning for image restoration and segmentation.
-
Creator
-
Samuel, Kegan, Tappen, Marshall, Da Vitoria Lobo, Niels, Foroosh, Hassan, Li, Xin, University of Central Florida
-
Abstract / Description
-
The undirected graphical model or Markov Random Field (MRF) is one of the more popular models used in computer vision and is the type of model with which this work is concerned. Models based on these methods have proven to be particularly useful in low-level vision systems and have led to state-of-the-art results for MRF-based systems. The research presented will describe a new discriminative training algorithm and its implementation. The MRF model will be trained by optimizing its parameters so that the minimum energy solution of the model is as similar as possible to the ground-truth. While previous work has relied on time-consuming iterative approximations or stochastic approximations, this work will demonstrate how implicit differentiation can be used to analytically differentiate the overall training loss with respect to the MRF parameters. This framework leads to an efficient, flexible learning algorithm that can be applied to a number of different models. The effectiveness of the proposed learning method will then be demonstrated by learning the parameters of two related models applied to the task of denoising images. The experimental results will demonstrate that the proposed learning algorithm is comparable to and, at times, better than previous training methods applied to the same tasks. A new segmentation model will also be introduced and trained using the proposed learning method. The proposed segmentation model is based on an energy minimization framework that is novel in how it incorporates priors on the size of the segments in a way that is straightforward to implement. While other methods, such as normalized cuts, tend to produce segmentations of similar sizes, this method is able to overcome that problem and produce more realistic segmentations.
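The implicit-differentiation step can be sketched generically (a textbook identity, stated under smoothness assumptions and not necessarily in the thesis's exact form): if the model output is x^*(\theta) = \arg\min_x E(x;\theta) and the training loss is L(\theta) = \ell\bigl(x^*(\theta), x_{gt}\bigr), then the stationarity condition \nabla_x E(x^*;\theta) = 0 yields

\[ \frac{\partial x^*}{\partial \theta} = -\bigl(\nabla^2_{xx}E\bigr)^{-1}\nabla^2_{x\theta}E, \qquad \frac{dL}{d\theta} = -\frac{\partial \ell}{\partial x}\bigl(\nabla^2_{xx}E\bigr)^{-1}\nabla^2_{x\theta}E, \]

so the loss gradient with respect to the MRF parameters is available analytically, without unrolling the energy minimization.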
-
Date Issued
-
2012
-
Identifier
-
CFE0004595, ucf:49207
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004595
-
-
Title
-
Quantum Algorithms for: Quantum Phase Estimation, Approximation of the Tutte Polynomial and Black-box Structures.
-
Creator
-
Ahmadi Abhari, Seyed Hamed, Brennan, Joseph, Mucciolo, Eduardo, Li, Xin, Marinescu, Dan, University of Central Florida
-
Abstract / Description
-
In this dissertation, we investigate three different problems in the field of quantum computation. First, we discuss the quantum complexity of evaluating the Tutte polynomial of a planar graph. Furthermore, we devise a new quantum algorithm for approximating the phase of a unitary matrix. Finally, we provide quantum tools that can be utilized to extract the structure of black-box modules and algebras. While quantum phase estimation (QPE) is at the core of many quantum algorithms known to date, its physical implementation (algorithms based on the quantum Fourier transform (QFT)) is highly constrained by the requirement of high-precision controlled phase shift operators, which remain difficult to realize. In the second part of this dissertation, we introduce an alternative approach to approximately implement QPE with arbitrary constant-precision controlled phase shift operators. The new quantum algorithm bridges the gap between QPE algorithms based on QFT and Kitaev's original approach. For approximating the eigenphase precise to the nth bit, Kitaev's original approach does not require any controlled phase shift operator. In contrast, QPE algorithms based on QFT or approximate QFT require controlled phase shift operators with precision of at least π/2^n. The new approach fills the gap and requires only arbitrary constant-precision controlled phase shift operators. From a physical implementation viewpoint, the new algorithm outperforms Kitaev's approach. The other problem we investigate relates to approximating the Tutte polynomial. We show that the problem of approximately evaluating the Tutte polynomial of triangular graphs at the points (q, 1/q) of the Tutte plane is BQP-complete for (most) roots of unity q. We also consider circular graphs and show that the problem of approximately evaluating the Tutte polynomial of these graphs at a point is DQC1-complete and at some points is in BQP. To show that these problems can be solved by a quantum computer, we rely on the relation of the Tutte polynomial of a planar graph G with the Jones and HOMFLY polynomials of the alternating link D(G) given by the medial graph of G. In the case of our graphs the corresponding links are equal to the plat and trace closures of braids. It is known how to evaluate the Jones and HOMFLY polynomials for closures of braids. To establish the hardness results, we use the property that the images of the generators of the braid group under the irreducible Jones-Wenzl representations of the Hecke algebra have finite order. We show that for each braid we can efficiently construct another braid such that the evaluation of the Jones and HOMFLY polynomials of their closures at a fixed root of unity leads to the same value and such that the closures of the resulting braid are alternating links. The final part of the dissertation focuses on finding the structure of a black-box module or algebra. Suppose we are given black-box access to a finite module M or algebra over a finite ring R and a list of generators for M and R. We show how to find a linear basis and structure constants for M in quantum poly(log |M|) time. This generalizes a recent quantum algorithm of Arvind et al. which finds a basis representation for rings. We then show that our algorithm is a useful primitive allowing a quantum computer to determine the structure of a finite associative algebra as a direct sum of simple algebras. Moreover, it solves a wide variety of problems regarding finite modules and rings. Although our quantum algorithm is based on Abelian Fourier transforms, it solves problems regarding the multiplicative structure of modules and algebras, which need not be commutative. Examples include finding the intersection and quotient of two modules, finding the additive and multiplicative identities in a module, computing the order of a module, solving linear equations over modules, deciding whether an ideal is maximal, finding annihilators, and testing the injectivity and surjectivity of ring homomorphisms. These problems appear to be exponentially hard classically.
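As background on the phase-estimation setting (standard notation, offered as an aid rather than quoted from the dissertation): given a unitary U with eigenvector |\psi\rangle and U|\psi\rangle = e^{2\pi i \varphi}|\psi\rangle, QPE seeks an n-bit approximation of the eigenphase \varphi \in [0,1); QFT-based circuits assemble it from controlled phase shifts

\[ R_k = \begin{pmatrix} 1 & 0 \\ 0 & e^{2\pi i/2^{k}} \end{pmatrix}, \qquad k = 1,\dots,n, \]

and it is the small-angle (high-precision) members of this family that the constant-precision approach above avoids.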
-
Date Issued
-
2012
-
Identifier
-
CFE0004239, ucf:49526
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004239
-
-
Title
-
Weighted Low-Rank Approximation of Matrices:Some Analytical and Numerical Aspects.
-
Creator
-
Dutta, Aritra, Li, Xin, Sun, Qiyu, Mohapatra, Ram, Nashed, M, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
This dissertation addresses some analytical and numerical aspects of a problem of weighted low-rank approximation of matrices. We propose and solve two different versions of weighted low-rank approximation problems. We demonstrate, in addition, how these formulations can be efficiently used to solve some classic problems in computer vision. We also present the superior performance of our algorithms over the existing state-of-the-art unweighted and weighted low-rank approximation algorithms. Classical principal component analysis (PCA) is constrained to have equal weighting on the elements of the matrix, which might lead to a degraded design in some problems. To address this fundamental flaw in PCA, Golub, Hoffman, and Stewart proposed and solved a problem of constrained low-rank approximation of matrices: for a given matrix $A = (A_1\; A_2)$, find a low-rank matrix $X = (A_1\; X_2)$ such that ${\rm rank}(X)$ is less than $r$, a prescribed bound, and $\|A - X\|$ is small. Motivated by the above formulation, we propose a weighted low-rank approximation problem that generalizes the constrained low-rank approximation problem of Golub, Hoffman, and Stewart. We study a general framework obtained by pointwise multiplication with the weight matrix and consider the following problem: for a given matrix $A \in \mathbb{R}^{m \times n}$, solve
\[ \min_{X} \|(A - X) \odot W\|_F^2 \quad \text{subject to } {\rm rank}(X) \le r, \]
where $\odot$ denotes the pointwise multiplication and $\|\cdot\|_F$ is the Frobenius norm of matrices. In the first part, we study a special version of the above general weighted low-rank approximation problem. Instead of using pointwise multiplication with the weight matrix, we use the regular matrix multiplication and replace the rank constraint by its convex surrogate, the nuclear norm, and consider the following problem:
\[ \hat{X} = \arg\min_{X} \Bigl\{ \tfrac{1}{2}\|(A - X)W\|_F^2 + \tau\|X\|_* \Bigr\}, \]
where $\|\cdot\|_*$ denotes the nuclear norm of $X$. Considering its resemblance to the classic singular value thresholding problem, we call it the weighted singular value thresholding (WSVT) problem. As expected, the WSVT problem has no closed-form analytical solution in general, and a numerical procedure is needed to solve it. We introduce auxiliary variables and apply a simple and fast alternating direction method to solve WSVT numerically. Moreover, we present a convergence analysis of the algorithm and propose a mechanism for estimating the weight from the data. We demonstrate the performance of WSVT on two computer vision applications: background estimation from video sequences and facial shadow removal. In both cases, WSVT shows superior performance to all other models traditionally used. In the second part, we study the general framework of the proposed problem. For a special case of the weight, we study the limiting behavior of the solution to our problem, both analytically and numerically. In the limiting case of weights, as $(W_1)_{ij} \to \infty$ and $W_2 = \mathbf{1}$, a matrix of all ones, we show the solutions to our weighted problem converge, and the limit is the solution to the constrained low-rank approximation problem of Golub et al. Additionally, by asymptotic analysis of the solution to our problem, we propose a rate of convergence. By doing this, we make explicit connections between a vast genre of weighted and unweighted low-rank approximation problems. In addition, we devise a novel and efficient numerical algorithm based on the alternating direction method for the special case of the weight and present a detailed convergence analysis. Our approach improves substantially over the existing weighted low-rank approximation algorithms proposed in the literature. We also explore the use of our algorithm on real-world problems in a variety of domains, such as computer vision and machine learning. Finally, for a special family of weights, we demonstrate an interesting property of the solution to the general weighted low-rank approximation problem. Additionally, we devise two accelerated algorithms by using this property and present their effectiveness compared to the algorithm proposed in Chapter 4.
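As a concrete building block, the unweighted case W = \mathbf{1} of the WSVT problem is solved in closed form by singular value thresholding; the sketch below (the function name and usage are illustrative, not the dissertation's code) shows that operator, which an alternating direction scheme of the kind described above would call repeatedly.

    import numpy as np

    def svt(A, tau):
        """Singular value thresholding: the closed-form minimizer of
        0.5 * ||A - X||_F^2 + tau * ||X||_*   (the unweighted case)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
        return (U * s_shrunk) @ Vt            # rebuild the low-rank estimate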
-
Date Issued
-
2016
-
Identifier
-
CFE0006833, ucf:51789
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006833
-
-
Title
-
Modified Pal Interpolation and Sampling Bilevel Signals with Finite Rate of Innovation.
-
Creator
-
Ramesh, Gayatri, Mohapatra, Ram, Vajravelu, Kuppalapalle, Li, Xin, Sun, Qiyu, University of Central Florida
-
Abstract / Description
-
Sampling and interpolation are two important topics in signal processing. Signal processing is a vast field of study that deals with analysis of and operations on signals such as sounds, images, sensor data, telecommunications, and so on. It also utilizes many mathematical theories such as approximation theory, analysis, and wavelets. This dissertation is divided into two chapters: Modified Pál Interpolation and Sampling Bilevel Signals with Finite Rate of Innovation. In the first chapter, we introduce a new interpolation process, the modified Pál interpolation, based on papers by Pál, Joó, and Szabó, and we establish the existence and uniqueness of interpolation polynomials of modified Pál type. The paradigm of recovering signals with finite rate of innovation from their samples is a fairly recent field of study. In the second chapter, we show that causal bilevel signals with finite rate of innovation can be stably recovered from their samples provided that the sampling period is at or above the maximal local rate of innovation, and that the sampling kernel is causal and positive on the first sampling period. Numerical simulations are presented to discuss the recovery of bilevel causal signals in the presence of noise.
-
Date Issued
-
2013
-
Identifier
-
CFE0005113, ucf:50760
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005113
-
-
Title
-
Sampling and Reconstruction of Spatial Signals.
-
Creator
-
Cheng, Cheng, Li, Xin, Sun, Qiyu, Yong, Jiongmin, Liu, Zhe, Xu, Mengyu, University of Central Florida
-
Abstract / Description
-
Digital processing of a signal f may start from sampling on a discrete set Λ, f → f(λ_n), λ_n ∈ Λ. Sampling theory is one of the most basic and fascinating topics in applied mathematics and in the engineering sciences. The most well-known form is the uniform sampling theorem for band-limited/wavelet signals, which gives a framework for converting analog signals into sequences of numbers. Over the past decade, the sampling theory has undergone a strong revival, and the standard sampling paradigm has been extended to non-bandlimited signals, including signals in reproducing kernel spaces (RKSs), signals with finite rate of innovation (FRI), and sparse signals, and to nontraditional sampling methods, such as phaseless sampling. In this dissertation, we first consider sampling and Galerkin reconstruction in a reproducing kernel space. The fidelity measure of perceptual signals, such as acoustic and visual signals, might not be well measured by least squares. In the first part of this dissertation, we introduce a fidelity measure depending on a given sampling scheme and propose a Galerkin method in a Banach space setting for signal reconstruction. We show that the proposed Galerkin method provides a quasi-optimal approximation, and the corresponding Galerkin equations can be solved by an iterative approximation-projection algorithm in a reproducing kernel subspace of Lp. A spatially distributed network contains a large number of agents with limited sensing, data processing, and communication capabilities. Recent technological advances have opened up possibilities to deploy spatially distributed networks for signal sampling and reconstruction. We introduce a graph structure for a distributed sampling and reconstruction system by coupling agents in a spatially distributed network with innovative positions of signals. We split a distributed sampling and reconstruction system into a family of overlapping smaller subsystems, and we show that the stability of the sensing matrix holds if and only if its quasi-restrictions to those subsystems have l_2 uniform stability. This new stability criterion could be pivotal for the design of a distributed sampling and reconstruction system that is robust against supplement, replacement, and impairment of agents, as we only need to check the uniform stability of the affected subsystems. We also propose an exponentially convergent distributed algorithm for signal reconstruction that provides a suboptimal approximation to the original signal in the presence of bounded sampling noises. Phase retrieval (phaseless sampling and reconstruction) arises in various fields of science and engineering. It consists of reconstructing a signal of interest from its magnitude measurements. Sampling in shift-invariant spaces is a realistic model for signals with smooth spectrum. We consider phaseless sampling and reconstruction of real-valued signals in a shift-invariant space from their magnitude measurements on the whole Euclidean space and from their phaseless samples taken on a discrete set with finite sampling density. We find an equivalence between nonseparability of signals in a shift-invariant space and their phase retrievability with phaseless samples taken on the whole Euclidean space. We also introduce an undirected graph associated with a signal and use connectivity of the graph to characterize the nonseparability of high-dimensional signals. Under the local complement property assumption on a shift-invariant space, we find a discrete set with finite sampling density such that signals in shift-invariant spaces that are determined by their magnitude measurements on the whole Euclidean space can be reconstructed in a stable way from their phaseless samples taken on that discrete set. We also propose a reconstruction algorithm which provides a suboptimal approximation to the original signal when only its noisy phaseless samples are available.
-
Date Issued
-
2017
-
Identifier
-
CFE0006726, ucf:51889
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006726
-
-
Title
-
Analytical solutions to nonlinear differential equations arising in physical problems.
-
Creator
-
Baxter, Mathew, Vajravelu, Kuppalapalle, Li, Xin, Mohapatra, Ram, Shuai, Zhisheng, Kassab, Alain, University of Central Florida
-
Abstract / Description
-
Nonlinear partial differential equations are difficult to solve, with many of the approximate solutions in the literature being numerical in nature. In this work, we apply the Homotopy Analysis Method to give approximate analytical solutions to nonlinear ordinary and partial differential equations. The main goal is to apply different linear operators, which can be chosen freely, to solve nonlinear problems. In the first three chapters, we study ordinary differential equations (ODEs) with one or two linear operators. As we progress, we apply the method to partial differential equations (PDEs) and use several linear operators. The results are all purely analytical, meaning they are approximate solutions that we can evaluate at points and differentiate. Another main focus is error analysis, where we test how good our approximations are. The method will always produce approximations, so we use residual errors over the domain of the problem as a measure of error. In the last two chapters, we apply similarity transforms to PDEs to transform them into ODEs. We then use the Homotopy Analysis Method on one, but are able to find exact solutions to both equations.
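The core construction of the Homotopy Analysis Method can be stated in its standard textbook form (notation assumed, not quoted from the dissertation): to solve N[u(x)] = 0, choose an auxiliary linear operator L, an initial guess u_0, and a convergence-control parameter h, and deform

\[ (1-q)\,L\bigl[\varphi(x;q) - u_0(x)\bigr] = q\,h\,N\bigl[\varphi(x;q)\bigr], \qquad q \in [0,1], \]

so that \varphi(x;0) = u_0(x) and \varphi(x;1) = u(x); expanding \varphi in powers of q gives the series u = u_0 + \sum_{m\ge 1} u_m, whose terms are obtained by solving linear problems governed by the chosen L, which is exactly the freedom in the choice of linear operator exploited above.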
-
Date Issued
-
2014
-
Identifier
-
CFE0005303, ucf:50527
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005303
-
-
Title
-
Taming Wild Faces: Web-Scale, Open-Universe Face Identification in Still and Video Imagery.
-
Creator
-
Ortiz, Enrique, Shah, Mubarak, Sukthankar, Rahul, Da Vitoria Lobo, Niels, Wang, Jun, Li, Xin, University of Central Florida
-
Abstract / Description
-
With the increasing pervasiveness of digital cameras, the Internet, and social networking, there is a growing need to catalog and analyze large collections of photos and videos. In this dissertation, we explore unconstrained still-image and video-based face recognition in real-world scenarios, e.g. social photo sharing and movie trailers, where people of interest are recognized and all others are ignored. In such a scenario, we must obtain high precision in recognizing the known identities, while accurately rejecting those of no interest. Recent advancements in face recognition research have seen Sparse Representation-based Classification (SRC) advance to the forefront of competing methods. However, its drawbacks, slow speed and sensitivity to variations in pose, illumination, and occlusion, have hindered its widespread applicability. The contributions of this dissertation are three-fold: 1. For still-image data, we propose a novel Linearly Approximated Sparse Representation-based Classification (LASRC) algorithm that uses linear regression to perform sample selection for l1-minimization, thus harnessing the speed of least-squares and the robustness of SRC. On our large dataset collected from Facebook, LASRC performs on par with standard SRC with a speedup of 100-250x. 2. For video, applying the popular l1-minimization for face recognition on a frame-by-frame basis is prohibitively expensive computationally, so we propose a new algorithm, Mean Sequence SRC (MSSRC), that performs video face recognition using a joint optimization leveraging all of the available video data and employing the knowledge that the face track frames belong to the same individual. Employing MSSRC results in a speedup of 5x on average over SRC on a frame-by-frame basis. 3. Finally, we make the observation that MSSRC sometimes assigns inconsistent identities to the same individual in a scene, which could be corrected based on their visual similarity. Therefore, we construct a probabilistic affinity graph combining appearance and co-occurrence similarities to model the relationship between face tracks in a video. Using this relationship graph, we employ random walk analysis to propagate strong class predictions among similar face tracks, while dampening weak predictions. Our method results in a performance gain of 15.8% in average precision over using MSSRC alone.
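For reference, the standard SRC formulation that LASRC and MSSRC build on (notation assumed): stack the training faces of all classes as columns of A = [A_1, ..., A_C], and for a test face y solve

\[ \hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \epsilon, \]

then assign the class with the smallest reconstruction residual, c^* = \arg\min_c \|y - A\,\delta_c(\hat{x})\|_2, where \delta_c keeps only the coefficients associated with class c.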
-
Date Issued
-
2014
-
Identifier
-
CFE0005536, ucf:50313
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005536
-
-
Title
-
Robust Subspace Estimation Using Low-Rank Optimization. Theory and Applications in Scene Reconstruction, Video Denoising, and Activity Recognition.
-
Creator
-
Oreifej, Omar, Shah, Mubarak, Da Vitoria Lobo, Niels, Stanley, Kenneth, Lin, Mingjie, Li, Xin, University of Central Florida
-
Abstract / Description
-
In this dissertation, we discuss the problem of robust linear subspace estimation using low-rank optimization and propose three formulations of it. We demonstrate how these formulations can be used to solve fundamental computer vision problems, and provide superior performance in terms of accuracy and running time. Consider a set of observations extracted from images (such as pixel gray values, local features, trajectories, etc.). If the assumption that these observations are drawn from a linear subspace (or can be linearly approximated) is valid, then the goal is to represent each observation as a linear combination of a compact basis, while maintaining a minimal reconstruction error. One of the earliest, yet most popular, approaches to achieve that is Principal Component Analysis (PCA). However, PCA can only handle Gaussian noise, and thus suffers when the observations are contaminated with gross and sparse outliers. To this end, in this dissertation, we focus on estimating the subspace robustly using low-rank optimization, where the sparse outliers are detected and separated through the l1 norm. The robust estimation has a two-fold advantage: first, the obtained basis better represents the actual subspace because it does not include contributions from the outliers; second, the detected outliers are often of a specific interest in many applications, as we will show throughout this thesis. We demonstrate four different formulations and applications for low-rank optimization. First, we consider the problem of reconstructing an underwater sequence by removing the turbulence caused by the water waves. The main drawback of most previous attempts to tackle this problem is that they heavily depend on modelling the waves, which in fact is ill-posed since the actual behavior of the waves, along with the imaging process, is complicated and includes several noise components; therefore, their results are not satisfactory. In contrast, we propose a novel approach which outperforms the state-of-the-art. The intuition behind our method is that in a sequence where the water is static, the frames would be linearly correlated. Therefore, in the presence of water waves, we may consider the frames as noisy observations drawn from the subspace of linearly correlated frames. However, the noise introduced by the water waves is not sparse, and thus cannot directly be detected using low-rank optimization. Therefore, we propose a data-driven two-stage approach, where the first stage "sparsifies" the noise, and the second stage detects it. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative registration algorithm. The result of the first stage is a high-quality mean and a better structured sequence; however, the sequence still contains unstructured sparse noise. Thus, we employ a second stage at which we extract the sparse errors from the sequence through rank minimization. Our method converges faster, and drastically outperforms the state of the art on all testing sequences. Secondly, we consider a closely related situation where an independently moving object is also present in the turbulent video. More precisely, we consider video sequences acquired in desert battlefields, where atmospheric turbulence is typically present, in addition to independently moving targets. Typical approaches for turbulence mitigation follow averaging or de-warping techniques. Although these methods can reduce the turbulence, they distort the independently moving objects, which can often be of great interest. Therefore, we address the problem of simultaneous turbulence mitigation and moving object detection. We propose a novel three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object. We simplify this extremely difficult problem into a minimization of nuclear norm, Frobenius norm, and L1 norm. Our method is based on two observations: first, the turbulence causes dense and Gaussian noise, and therefore can be captured by the Frobenius norm, while the moving objects are sparse and thus can be captured by the L1 norm; second, since the object's motion is linear and intrinsically different from the Gaussian-like turbulence, a Gaussian-based turbulence model can be employed to enforce an additional constraint on the search space of the minimization. We demonstrate the robustness of our approach on challenging sequences which are significantly distorted with atmospheric turbulence and include extremely tiny moving objects. In addition to robustly detecting the subspace of the frames of a sequence, we consider using trajectories as observations in the low-rank optimization framework. In particular, in videos acquired by moving cameras, we track all the pixels in the video and use that to estimate the camera motion subspace. This is particularly useful in activity recognition, which typically requires standard preprocessing steps such as motion compensation, moving object detection, and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in missed detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. In contrast, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we decompose the trajectories into their camera-induced and object-induced components. Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features, which captures the characteristics of the trajectories. Consequently, an SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets, and obtained promising results. Finally, we consider a more challenging problem referred to as complex event recognition, where the activities of interest are complex and unconstrained. This problem typically poses significant challenges because it involves videos of highly variable content, noise, length, and frame size. In this extremely challenging task, high-level features have recently shown a promising direction as in [53, 129], where core low-level events referred to as concepts are annotated and modeled using a portion of the training data, and then each event is described using its content of these concepts. However, because of the complex nature of the videos, both the concept models and the corresponding high-level features are significantly noisy. In order to address this problem, we propose a novel low-rank formulation, which combines the precisely annotated videos used to train the concepts with the rich high-level features. Our approach finds a new representation for each event, which is not only low-rank, but also constrained to adhere to the concept annotation, thus suppressing the noise and maintaining a consistent occurrence of the concepts in each event. Extensive experiments on the large-scale real-world TRECVID Multimedia Event Detection 2011 and 2012 datasets demonstrate that our approach consistently improves the discriminativity of the high-level features by a significant margin.
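The underlying optimization template is the low-rank plus sparse decomposition; schematically (symbols and weights here are illustrative, not the dissertation's exact formulation), the three-term model described above decomposes the matrix M of vectorized frames as

\[ \min_{B,\,T,\,O} \ \|B\|_* + \lambda_1\|O\|_1 + \lambda_2\|T\|_F^2 \quad \text{subject to} \quad M = B + T + O, \]

with B the low-rank background, O the sparse moving object, and T the dense Gaussian-like turbulence; dropping T recovers the classical robust-PCA decomposition \min \|L\|_* + \lambda\|S\|_1 subject to M = L + S.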
-
Date Issued
-
2013
-
Identifier
-
CFE0004732, ucf:49835
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004732
-
-
Title
-
Consumer Engagement in Travel-related Social Media.
-
Creator
-
Li, Xu, Wang, Youcheng, Robinson, Edward, Kwun, David, Nusair, Khaldoon, He, Xin, University of Central Florida
-
Abstract / Description
-
The term "consumer engagement" is extensively used in the digital era. It is believed that engaged consumers play an important role in product/service referral and recommendation, new product/service development, and experience/value co-creation. Although the notion of consumer engagement sounds compelling, it is not fully developed in theory. Different interpretations coexist, resulting in confusion and misuse of the concept. This study attempts to define consumer engagement and develop a conceptual framework of consumer engagement, addressing antecedents of consumer engagement in an online context. Moreover, some situational and social media usage-related factors are incorporated into the framework. A set of propositions is presented based on the literature review and the conceptual framework to illustrate the relationships between consumer engagement and related factors. To provide empirical evidence for the conceptual model, an online survey is conducted. Participants complete the self-administered survey by answering questions concerning their online experience with the travel-related social media website they visit most. Two-step structural equation modeling is employed to analyze the data. The results show that both community experience and community identification have significant and positive relationships with consumer engagement. Community experience is also a strong predictor of community identification. Attitude toward using social media and travel involvement influence the relationship between consumer engagement and its antecedents. With a focus on the interactive and experiential nature of consumer engagement, this study expands current understanding of consumer engagement and provides insights for hospitality and tourism businesses regarding how to engage consumers through travel-related social media.
-
Date Issued
-
2013
-
Identifier
-
CFE0004878, ucf:49657
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004878