- Title
- Assessing Approximate Arithmetic Designs in the presence of Process Variations and Voltage Scaling.
- Creator
-
Naseer, Adnan Aquib, DeMara, Ronald, Lin, Mingjie, Karwowski, Waldemar, University of Central Florida
- Abstract / Description
-
As environmental concerns and portability of electronic devices move to the forefront of priorities, innovative approaches which reduce processor energy consumption are sought. Approximate arithmetic units are one of the avenues whereby significant energy savings can be achieved. Approximation of fundamental arithmetic units is achieved by judiciously reducing the number of transistors in the circuit. A satisfactory tradeoff of energy vs. accuracy of the circuit can be determined by trial-and-error methods of each functional approximation. Although the accuracy of the output is compromised, it is only decreased to an acceptable extent that can still fulfill processing requirements. A number of scenarios are evaluated with approximate arithmetic units to thoroughly cross-check them with their accurate counterparts. Some of the attributes evaluated are energy consumption, delay, and process variation. Additionally, novel methods to create such approximate units are developed. One such method uses a Genetic Algorithm (GA), which mimics biologically-inspired evolutionary techniques to obtain an optimal solution. A GA employs genetic operators such as crossover and mutation to mix and match several different types of approximate adders to find the best possible combination of such units for a given input set. As the GA usually consumes a significant amount of time as the size of the input set increases, we tackled this problem by using various methods to parallelize the fitness computation process of the GA, which is the most compute-intensive task. The parallelization improved the computation time from 2,250 seconds to 1,370 seconds for up to 8 threads, using both OpenMP and Intel TBB. Apart from using the GA with seeded multiple approximate units, other seeds such as basic logic gates with limited logic space were used to develop completely new multi-bit approximate adders with good fitness levels.
The effect of process variation was also calculated. As the number of transistors is reduced, the distribution of the transistor widths and gate oxide may shift away from a Gaussian curve. This result was demonstrated in different types of single-bit adders, with the delay sigma increasing from 6 psec to 12 psec; when the voltage is scaled to Near-Threshold-Voltage (NTV) levels, sigma increases by up to 5 psec. Approximate arithmetic units were not affected greatly by the change in distribution of the thickness of the gate oxide. Even when considering the 3-sigma value, the delay of an approximate adder remains below that of a precise adder with additional transistors. Additionally, it is demonstrated that the GA obtains innovative solutions for the appropriate combination of approximate arithmetic units, achieving a good balance between energy savings and accuracy.
- Date Issued
- 2015
- Identifier
- CFE0005675, ucf:50165
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005675
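The crossover-and-mutation search over approximate adder types that this abstract describes can be sketched roughly as follows. This is a toy illustration, not the thesis implementation: the adder names and their (energy, error) figures are invented, and the weighted-sum fitness is an assumed stand-in for the actual energy-accuracy trade-off model.

```python
import random

# Hypothetical per-adder-type (energy, mean error) figures -- illustrative only.
ADDER_TYPES = {"exact": (1.0, 0.0), "approx_a": (0.6, 0.05), "approx_b": (0.4, 0.12)}

def fitness(genome, alpha=0.5):
    # Lower is better: weighted sum of total energy and total error
    # across the bit positions of a multi-bit adder.
    energy = sum(ADDER_TYPES[g][0] for g in genome)
    error = sum(ADDER_TYPES[g][1] for g in genome)
    return alpha * energy + (1 - alpha) * error

def crossover(a, b):
    # Single-point crossover of two adder-type genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    # Randomly swap an adder type at each position with probability `rate`.
    return [random.choice(list(ADDER_TYPES)) if random.random() < rate else g
            for g in genome]

def evolve(bits=8, pop_size=20, generations=50):
    pop = [[random.choice(list(ADDER_TYPES)) for _ in range(bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)               # fitness evaluation dominates runtime
        survivors = pop[: pop_size // 2]    # elitist selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)
```

The fitness evaluation is the part the thesis parallelizes with OpenMP and Intel TBB; in Python one would instead map `fitness` over the population with `multiprocessing`.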
- Title
- DETECTION AND APPROXIMATION OF FUNCTION OF TWO VARIABLES IN HIGH DIMENSIONS.
- Creator
-
Pan, Minzhe, Li, Xin, University of Central Florida
- Abstract / Description
-
This thesis originates from the deterministic algorithm of DeVore, Petrova, and Wojtaszczyk for the detection and approximation of functions of one variable in high dimensions. We propose a deterministic algorithm for the detection and approximation of functions of two variables in high dimensions.
- Date Issued
- 2010
- Identifier
- CFE0003467, ucf:48933
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003467
- Title
- APPROXIMATION BY BERNSTEIN POLYNOMIALS AT THE POINT OF DISCONTINUITY.
- Creator
-
Liang, Jie, Li, Xin, University of Central Florida
- Abstract / Description
-
Chlodovsky showed that if x0 is a point of discontinuity of the first kind of the function f, then the Bernstein polynomials Bn(f; x0) converge to the average of the one-sided limits on the right and on the left of the function f at the point x0. In 2009, Telyakovskii extended the asymptotic formulas for the deviations of the Bernstein polynomials from differentiable functions at first-kind discontinuity points of the highest derivatives of even order, and demonstrated that the same result fails in the odd-order case. Then in 2010, Tonkov found the right formulation and proved the result that was missing in the odd-order case. It turned out that the limit in the odd-order case is related to the jump of the highest derivative. The proofs in these two cases look similar but have many subtle differences, so it is desirable to find out if there is a unifying principle for treating both cases. In this thesis, we obtain a unified formulation and proof for the asymptotic results of both Telyakovskii and Tonkov, and discuss extensions of these results in the case where the highest derivative of the function is only assumed to be bounded at the point under study.
- Date Issued
- 2011
- Identifier
- CFH0004099, ucf:44790
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004099
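Chlodovsky's theorem quoted above is easy to check numerically: for a step function with a first-kind jump at x0 = 1/2, the Bernstein polynomials Bn(f; 1/2) drift toward the average of the one-sided limits, 1/2, even though f(1/2) = 1. A minimal sketch (the step function is our choice of example, not from the thesis):

```python
from math import comb

def bernstein(f, n, x):
    # B_n(f; x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x^k * (1-x)^(n-k)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Step function with a jump of the first kind at x0 = 1/2:
# left limit 0, right limit 1, so the average of the one-sided limits is 1/2.
step = lambda x: 1.0 if x >= 0.5 else 0.0

for n in (50, 200, 800):
    print(n, bernstein(step, n, 0.5))   # approaches 0.5 as n grows
```

For a continuous function the same sum converges to f(x0) itself, e.g. Bn(t -> t; 1/2) = 1/2 exactly.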
- Title
- PADE APPROXIMANTS AND ONE OF ITS APPLICATIONS.
- Creator
-
Fowe, Tame-Kouontcho, Mohapatra, Ram, University of Central Florida
- Abstract / Description
-
This thesis is concerned with a brief summary of the theory of Padé approximants and one of their applications to finance. Proofs of most of the theorems are omitted, and many developments could not be mentioned due to the vastness of the field of Padé approximation. We provide references to research papers and books that contain exhaustive treatments of the subject. This thesis is mainly divided into two parts. In the first part we derive a general expression for the Padé approximants and some of the results that will be related to the work in the second part of the thesis. Aitken's method for quick convergence of series is highlighted as a special case of Padé approximation. We explore the criteria for convergence of a series approximated by Padé approximants and obtain its relationship to numerical analysis with the help of the Crank-Nicolson method. The second part shows how Padé approximants can be a smooth method to model the term structure of interest rates using stochastic processes and the no-arbitrage argument. Padé approximants have been considered by physicists to be appropriate for approximating large classes of functions. This fact is used here to compare Padé approximants with very low indices and two parameters to interest-rate variations provided by the Federal Reserve System in the United States.
- Date Issued
- 2007
- Identifier
- CFE0001682, ucf:47217
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001682
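The connection between Aitken's delta-squared method and Padé approximation mentioned in the abstract can be illustrated on the slowly convergent Leibniz series for pi/4: the transformed sequence converges far faster than the partial sums. A small sketch (the choice of series is ours, not the thesis's):

```python
from math import pi

def aitken(seq):
    # Aitken's delta-squared transformation: s - (delta s)^2 / (delta^2 s),
    # a rational acceleration closely related to low-order Pade approximants.
    return [s0 - (s1 - s0) ** 2 / (s2 - 2 * s1 + s0)
            for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]

# Partial sums of the slowly convergent Leibniz series for pi/4.
partial, s = [], 0.0
for k in range(12):
    s += (-1) ** k / (2 * k + 1)
    partial.append(s)

accel = aitken(partial)   # much closer to pi/4 than the raw partial sums
```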
- Title
- EFFECT OF INNER SCALE ATMOSPHERIC SPECTRUM MODELS ON SCINTILLATION IN ALL OPTICAL TURBULENCE REGIMES.
- Creator
-
Mayer, Kenneth, Young, Cynthia, University of Central Florida
- Abstract / Description
-
Experimental studies have shown that a "bump" occurs in the atmospheric spectrum just prior to turbulence cell dissipation [1,3,4]. In weak optical turbulence, this bump affects calculated scintillation. The purpose of this thesis was to determine if a "non-bump" atmospheric power spectrum can be used to model scintillation for plane waves and spherical waves in moderate to strong optical turbulence regimes. Scintillation expressions were developed from an "effective" von Karman spectrum using an approach similar to that used by Andrews et al. [8,14,15] in developing expressions from an "effective" modified (bump) spectrum. The effective spectrum extends the Rytov approximation into all optical turbulence regimes, using filter functions to eliminate the effects of mid-range turbulent cell sizes on the scintillation index. Filter cutoffs were established by matching to known weak and saturated scintillation results. The resulting new expressions track those derived from the effective bump spectrum fairly closely. In extremely strong turbulence, differences are minimal.
- Date Issued
- 2007
- Identifier
- CFE0001559, ucf:47141
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001559
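For orientation, the weak-fluctuation Rytov variance that the filter-function approach extends can be evaluated directly. The plane-wave formula sigma_R^2 = 1.23 Cn^2 k^(7/6) L^(11/6) is standard in the optical-turbulence literature; the parameter values below are illustrative and not taken from the thesis:

```python
from math import pi

def rytov_variance(cn2, wavelength, path_length):
    # Plane-wave Rytov variance: sigma_R^2 = 1.23 * Cn^2 * k^(7/6) * L^(11/6)
    k = 2 * pi / wavelength  # optical wavenumber (rad/m)
    return 1.23 * cn2 * k ** (7 / 6) * path_length ** (11 / 6)

# Moderate turbulence (Cn^2 = 1e-14 m^(-2/3)) over a 1 km path at 1550 nm.
sigma2 = rytov_variance(cn2=1e-14, wavelength=1.55e-6, path_length=1e3)
# sigma_R^2 < 1 indicates the weak-fluctuation regime, where the spectral
# "bump" matters most; saturation sets in as sigma_R^2 grows past unity.
```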
- Title
- Reducing the Overhead of Memory Space, Network Communication and Disk I/O for Analytic Frameworks in Big Data Ecosystem.
- Creator
-
Zhang, Xuhong, Wang, Jun, Fan, Deliang, Lin, Mingjie, Zhang, Shaojie, University of Central Florida
- Abstract / Description
-
To facilitate big data processing, many distributed analytic frameworks and storage systems such as Apache Hadoop, Apache Hama, Apache Spark, and the Hadoop Distributed File System (HDFS) have been developed. Currently, many researchers are conducting research on either making them more scalable or enabling them to support more analysis applications. In my PhD study, I conducted three main studies on this topic: minimizing the communication delay in Apache Hama, minimizing the memory space and computational overhead in HDFS, and minimizing the disk I/O overhead for approximation applications in the Hadoop ecosystem. Specifically, in Apache Hama, communication delay makes up a large percentage of the overall graph processing time. While most recent research has focused on reducing the number of network messages, we add a runtime communication and computation scheduler to overlap them as much as possible. As a result, communication delay can be mitigated. In HDFS, the block location table and its corresponding maintenance can occupy more than half of the memory space and 30% of the processing capacity of the master node, which severely limits the scalability and performance of the master node. We propose Deister, which uses deterministic mathematical calculations to eliminate the huge table for storing the block locations and its corresponding maintenance. My third work proposes to enable both efficient and accurate approximations on arbitrary sub-datasets of a large dataset. Existing offline-sampling-based approximation systems are not adaptive to dynamic query workloads, and online-sampling-based approximation systems suffer from low I/O efficiency and poor estimation accuracy. Therefore, we develop a distribution-aware method called Sapprox. Our idea is to collect the occurrences of a sub-dataset at each logical partition of a dataset (its storage distribution) in the distributed system at a very small cost, and make good use of such information to facilitate online sampling.
- Date Issued
- 2017
- Identifier
- CFE0007299, ucf:52149
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007299
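The Sapprox idea sketched in the abstract, collecting per-partition occurrence counts offline and steering online samples toward the partitions that actually contain the sub-dataset, can be illustrated in miniature. Everything below (partition sizes, densities, the proportional allocation rule) is invented for illustration:

```python
import random

random.seed(1)

# Toy stand-in for a distributed dataset: each "partition" holds records,
# and records belonging to the sub-dataset of interest are tagged True.
# Densities differ sharply across partitions, as Sapprox assumes.
partitions = [[random.random() < p for _ in range(1000)]
              for p in (0.9, 0.5, 0.1, 0.0)]

# Offline step: per-partition occurrence counts of the sub-dataset
# (the "storage distribution"), collected once at small cost.
occurrences = [sum(part) for part in partitions]
total = sum(occurrences)

def estimate_count(sample_budget=400):
    # Online step: allocate samples to partitions in proportion to their
    # occurrence counts, then scale each partition's hit rate back up.
    est = 0.0
    for part, occ in zip(partitions, occurrences):
        if occ == 0:
            continue  # the distribution says there is nothing to sample here
        k = max(1, round(sample_budget * occ / total))
        hits = sum(random.choice(part) for _ in range(k))
        est += hits / k * len(part)
    return est
```

Skipping the empty partition entirely is where the I/O saving comes from: a distribution-oblivious sampler would waste a quarter of its budget there.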
- Title
- Exploring sparsity, self-similarity, and low rank approximation in action recognition, motion retrieval, and action spotting.
- Creator
-
Sun, Chuan, Foroosh, Hassan, Hughes, Charles, Tappen, Marshall, Sukthankar, Rahul, Moshell, Jack, University of Central Florida
- Abstract / Description
-
This thesis consists of 4 major parts. In the first part (Chapters 1-2), we introduce the overview, motivation, and contributions of our work, and extensively survey the current literature on 6 related topics. In the second part (Chapters 3-7), we explore the concept of "Self-Similarity" in two challenging scenarios, namely action recognition and motion retrieval. We build three-dimensional volume representations for both scenarios, and devise effective techniques that can produce compact representations encoding the internal dynamics of the data. In the third part (Chapter 8), we explore the challenging action spotting problem, and propose a feature-independent unsupervised framework that is effective in spotting actions in various real situations, even under heavily perturbed conditions. The final part (Chapter 9) is dedicated to conclusions and future work.
For action recognition, we introduce a generic method that does not depend on one particular type of input feature vector. We make three main contributions: (i) we introduce the concept of the Joint Self-Similarity Volume (Joint SSV) for modeling dynamical systems, and show that by using a new optimized rank-1 tensor approximation of the Joint SSV one can obtain compact low-dimensional descriptors that very accurately preserve the dynamics of the original system, e.g. an action video sequence; (ii) the descriptor vectors derived from the optimized rank-1 approximation make it possible to recognize actions without explicitly aligning action sequences of varying speeds of execution or different frame rates; (iii) the method is generic and can be applied using different low-level features such as silhouettes, histograms of oriented gradients (HOG), etc., and hence does not necessarily require explicit tracking of features in the space-time volume. Our experimental results on five public datasets demonstrate that our method produces very good results and outperforms many baseline methods.
For action recognition on incomplete videos, we determine whether incomplete videos that are often discarded carry useful information for action recognition, and if so, how one can represent such a mixed collection of video data (complete versus incomplete, and labeled versus unlabeled) in a unified manner. We propose a novel framework to handle incomplete videos in action classification, and make three main contributions: (i) we cast the action classification problem for a mixture of complete and incomplete data as a semi-supervised learning problem over labeled and unlabeled data; (ii) we introduce a two-step approach to convert the input mixed data into a uniform compact representation; (iii) exhaustively scrutinizing 280 configurations, we experimentally show on our two created benchmarks that, even when the videos are extremely sparse and incomplete, it is still possible to recover useful information from them, and to classify unknown actions with a graph-based semi-supervised learning framework.
For motion retrieval, we present a framework that allows for flexible and efficient retrieval of motion capture data in huge databases. The method first converts an action sequence into a self-similarity matrix (SSM), which is based on the notion of self-similarity. This conversion of the motion sequences into compact and low-rank subspace representations greatly reduces the spatiotemporal dimensionality of the sequences. The SSMs are then used to construct order-3 tensors, and we propose a low-rank decomposition scheme that allows for converting the motion sequence volumes into compact lower-dimensional representations without losing the nonlinear dynamics of the motion manifold. Thus, unlike existing linear dimensionality reduction methods that distort the motion manifold and lose very critical and discriminative components, the proposed method performs well even when inter-class differences are small or intra-class differences are large. In addition, the method allows for efficient retrieval and does not require time-alignment of the motion sequences. We evaluate the performance of our retrieval framework on the CMU mocap dataset under two experimental settings, both demonstrating very good retrieval rates.
For action spotting, our framework does not depend on any specific feature (e.g. HOG/HOF, STIP, silhouette, bag-of-words, etc.), and requires no human localization, segmentation, or framewise tracking. This is achieved by treating the problem holistically, as that of extracting the internal dynamics of video cuboids by modeling them in their natural form as multilinear tensors. To extract their internal dynamics, we devised a novel Two-Phase Decomposition (TP-Decomp) of a tensor that generates very compact and discriminative representations that are robust to even heavily perturbed data. Technically, a Rank-based Tensor Core Pyramid (Rank-TCP) descriptor is generated by combining multiple tensor cores under multiple ranks, allowing video cuboids to be represented in a hierarchical tensor pyramid. The problem then reduces to a template matching problem, which is solved efficiently by using two boosting strategies: (i) to reduce the search space, we filter the dense trajectory cloud extracted from the target video; (ii) to boost the matching speed, we perform matching in an iterative coarse-to-fine manner. Experiments on 5 benchmarks show that our method outperforms the current state-of-the-art under various challenging conditions. We also created a challenging dataset called Heavily Perturbed Video Arrays (HPVA) to validate the robustness of our framework under heavily perturbed situations.
- Date Issued
- 2014
- Identifier
- CFE0005554, ucf:50290
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005554
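The self-similarity representation at the core of both the recognition and retrieval parts starts from a simple object: a matrix of pairwise frame distances. A minimal sketch with scalar "frames" (real features would be silhouettes, HOG vectors, or joint positions):

```python
# Self-similarity matrix (SSM) of a feature sequence: entry (i, j) is the
# distance between frames i and j. Because only pairwise distances are kept,
# the pattern is invariant to many transformations of the raw features.
def ssm(frames, dist=lambda a, b: abs(a - b)):
    n = len(frames)
    return [[dist(frames[i], frames[j]) for j in range(n)] for i in range(n)]

# A repeated motion produces a periodic block pattern in its SSM.
M = ssm([0.0, 1.0, 0.0, 1.0, 0.0])
```

Stacking the SSMs of several feature channels gives the order-3 tensors that the thesis then compresses with low-rank decompositions.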
- Title
- Practical Implementations of the Active Set Method for Support Vector Machine Training with Semi-definite Kernels.
- Creator
-
Sentelle, Christopher, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Kasparis, Takis, Stanley, Kenneth, Young, Cynthia, University of Central Florida
- Abstract / Description
-
The Support Vector Machine (SVM) is a popular binary classification model due to its superior generalization performance, relative ease of use, and applicability of kernel methods. SVM training entails solving an associated quadratic program (QP) that presents significant challenges in terms of speed and memory constraints for very large datasets; therefore, research on numerical optimization techniques tailored to SVM training is vast. Slow training times are especially of concern when one considers that re-training is often necessary at several values of the model's regularization parameter, C, as well as associated kernel parameters.
The active set method is suitable for solving the SVM problem and is in general ideal when the Hessian is dense and the solution is sparse, which is the case for the l1-loss SVM formulation. There has recently been renewed interest in the active set method as a technique for exploring the entire SVM regularization path, which has been shown to yield the SVM solution at all points along the regularization path (all values of C) in not much more time than it takes, on average, to perform training at a single value of C with traditional methods. Unfortunately, the majority of active set implementations used for SVM training require positive definite kernels, and those implementations that do allow semi-definite kernels tend to be complex and can exhibit instability and, worse, lack of convergence. This severely limits applicability, since it precludes the use of the linear kernel, can be an issue when duplicate data points exist, and does not allow the use of low-rank kernel approximations to improve tractability for large datasets. The difficulty, in the case of a semi-definite kernel, arises when a particular active set results in a singular KKT matrix (or the equality-constrained problem formed using the active set is semi-definite). Typically this is handled by explicitly detecting the rank of the KKT matrix. Unfortunately, this adds significant complexity to the implementation, and if care is not taken, numerical instability or, worse, failure to converge can result.
This research shows that the singular KKT system can be avoided altogether with simple modifications to the active set method. The result is a practical, easy-to-implement active set method that does not need to explicitly detect the rank of the KKT matrix nor modify factorization or solution methods based upon the rank. Methods are given for both conventional SVM training and for computing the regularization path that are simple and numerically stable. First, an efficient revised simplex method is implemented for SVM training (SVM-RSQP) with semi-definite kernels and shown to out-perform competing active set implementations in terms of training time, as well as to perform on par with state-of-the-art SVM training algorithms such as SMO and SVMLight. Next, a new regularization path-following algorithm for semi-definite kernels (Simple SVMPath) is shown to be orders of magnitude faster, more accurate, and significantly less complex than competing methods, and does not require the use of external solvers. Theoretical analysis reveals new insights into the nature of path-following algorithms. Finally, a method is given for computing the approximate regularization path and approximate kernel path using the warm-start capability of the proposed revised simplex method (SVM-RSQP), and is shown to provide significant, orders-of-magnitude speed-ups relative to the traditional "grid search" where re-training is performed at each parameter value. Surprisingly, it is also shown that even when the solution for the entire path is not desired, computing the approximate path can serve as a speed-up mechanism for obtaining the solution at a single value.
New insights are given concerning the limiting behaviors of the regularization and kernel path as well as the use of low-rank kernel approximations.
- Date Issued
- 2014
- Identifier
- CFE0005251, ucf:50600
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005251
- Title
- GRAPH-THEORETIC APPROACH TO MODELING PROPAGATION AND CONTROL OF NETWORK WORMS.
- Creator
-
Nikoloski, Zoran, Deo, Narsingh, University of Central Florida
- Abstract / Description
-
In today's network-dependent society, cyber attacks with network worms have become the predominant threat to confidentiality, integrity, and availability of network computing resources. Despite ongoing research efforts, there is still no comprehensive network-security solution aimed at controlling large-scale worm propagation. The aim of this work is fivefold: (1) developing an accurate combinatorial model of worm propagation that can facilitate the analysis of worm control strategies, (2) building an accurate epidemiological model for the propagation of a worm employing local strategies, (3) devising a distributed architecture and algorithms for detection of worm scanning activities, (4) designing effective control strategies against the worm, and (5) simulating the developed models and strategies on large, scale-free graphs representing real-world communication networks. The proposed pair-approximation model uses information about the network structure: order, size, degree distribution, and transitivity. The empirical study of propagation on large scale-free graphs is in agreement with the theoretical analysis of the proposed pair-approximation model. We then describe a natural generalization of the classical cops-and-robbers game as a combinatorial model of worm propagation and control. With the help of this game on graphs, we show that the problem of containing the worm is NP-hard. Six novel near-optimal control strategies are devised: a combination of static and dynamic immunization, reactive dynamic and invariant dynamic immunization, soft quarantining, predictive traffic-blocking, and contact-tracing. The analysis of predictive dynamic traffic-blocking, employing only local information, shows that the worm can be contained so that 40% of the network nodes are not affected.
Finally, we develop the Detection via Distributed Blackholes architecture and algorithm which reflect the propagation strategy used by the worm and the salient properties of the network. Our distributed detection algorithm can detect the worm scanning activity when only 1.5% of the network has been affected by the propagation. The proposed models and algorithms are analyzed with an individual-based simulation of worm propagation on realistic scale-free topologies.
- Date Issued
- 2005
- Identifier
- CFE0000640, ucf:46521
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000640
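The effect of an immunization-style control strategy like those in the abstract can be seen even in a crude discrete-time SI (susceptible-infected) simulation. The graph model and parameters below are invented, and far simpler than the thesis's scale-free topologies and pair-approximation analysis:

```python
import random

random.seed(7)

# Toy contact graph: adjacency lists for an Erdos-Renyi-style random network.
n, p = 200, 0.03
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

def spread(immunized=frozenset(), steps=50):
    # Discrete-time SI worm propagation from node 0; immunized nodes
    # neither get infected nor forward the worm. Returns the outbreak size.
    infected = {0} - set(immunized)
    for _ in range(steps):
        new = {w for v in infected for w in adj[v]} - infected - set(immunized)
        if not new:
            break
        infected |= new
    return len(infected)

baseline = spread()
# Static immunization of a random 30% of nodes shrinks the final outbreak.
protected = spread(immunized=frozenset(random.sample(range(n), 60)))
```

Because removing nodes can only shrink the set reachable from the seed, the protected outbreak is never larger than the baseline, regardless of which nodes were drawn.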
- Title
- NEURAL NETWORKS SATISFYING STONE-WEIERSTRASS THEOREM AND APPROXIMATING SCATTERED DATA BY KOHONEN NEURAL NETWORKS.
- Creator
-
Thakkar, Pinal, Mohapatra, Ram, University of Central Florida
- Abstract / Description
-
Neural networks are an attempt to build computer networks called artificial neurons, which imitate the activities of the human brain. Their origin dates back to 1943, when neurophysiologist Warren McCulloch and logician Walter Pitts produced the first artificial neuron. Since then there has been tremendous development of neural networks and their applications to pattern and optical character recognition, speech processing, time series prediction, image processing, and scattered data approximation. Since it has been shown that neural nets can approximate all but pathological functions, Neil Cotter considered a neural network architecture based on the Stone-Weierstrass Theorem. Using exponential functions, polynomials, rational functions, and Boolean functions, one can follow the method given by Cotter to obtain neural networks which can approximate bounded measurable functions. Another problem of current research in computer graphics is to construct curves and surfaces from scattered spatial points by using B-splines and NURBS or Bezier surfaces. Hoffman and Varady used Kohonen neural networks to construct appropriate grids. This thesis is concerned with two types of neural networks, viz. those which satisfy the conditions of the Stone-Weierstrass theorem, and Kohonen neural networks. We have used self-organizing maps for scattered data approximation. The Neural Network Toolbox from MATLAB is used to develop the required grids for approximating scattered data in one and two dimensions.
- Date Issued
- 2004
- Identifier
- CFE0000226, ucf:46262
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000226
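A 1-D Kohonen self-organizing map for scattered data approximation, of the kind the thesis builds with MATLAB's Neural Network Toolbox, can be hand-rolled in a few lines: the chain of code vectors is pulled onto the underlying shape of noisy samples. The unit count, learning-rate schedule, and neighborhood width below are ad hoc assumptions:

```python
import math
import random

random.seed(3)

# Scattered, noisy samples of a curve y = sin(x) on [0, pi].
data = [(x, math.sin(x) + random.uniform(-0.1, 0.1))
        for x in (random.uniform(0, math.pi) for _ in range(300))]

m = 10  # number of map units in the 1-D chain
weights = [list(random.choice(data)) for _ in range(m)]

T = 3000
for t in range(T):
    lr = 0.5 * (1 - t / T)                # decaying learning rate
    radius = max(1.0, 3.0 * (1 - t / T))  # shrinking neighborhood
    x, y = random.choice(data)
    # Best-matching unit: the code vector closest to the sample.
    bmu = min(range(m),
              key=lambda i: (weights[i][0] - x) ** 2 + (weights[i][1] - y) ** 2)
    for i in range(m):
        # Gaussian neighborhood: units near the BMU on the chain move most.
        h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
        weights[i][0] += lr * h * (x - weights[i][0])
        weights[i][1] += lr * h * (y - weights[i][1])
```

After training, the ordered units trace the curve and can serve as a grid for the scattered data, which is the role Kohonen maps play in the thesis.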
- Title
- DESIGN OPTIMIZATION OF SOLID ROCKET MOTOR GRAINS FOR INTERNAL BALLISTIC PERFORMANCE.
- Creator
-
Hainline, Roger, Nayfeh, Jamal, University of Central Florida
- Abstract / Description
-
The work presented in this thesis deals with the application of optimization tools to the design of solid rocket motor grains per internal ballistic requirements. Research concentrated on the development of an optimization strategy capable of efficiently and consistently optimizing virtually an unlimited range of radial-burning solid rocket motor grain geometries. Optimization tools were applied to the design process of solid rocket motor grains through an optimization framework developed to interface optimization tools with the solid rocket motor design system. This was done within a programming architecture common to the grain design system, AML. This commonality, in conjunction with the object-oriented dependency-tracking features of the programming architecture, was used to reduce the computational time of the design optimization process. The optimization strategy developed for optimizing solid rocket motor grain geometries is called the internal ballistic optimization strategy. This strategy consists of a three-stage optimization process: approximation, global optimization, and high-fidelity optimization; the optimization methodologies employed include DOE, genetic algorithms, and the BFGS first-order gradient-based algorithm. This strategy was successfully applied to the design of three solid rocket motor grains of varying complexity. The contribution of this work is the development and application of an optimization strategy to the design process of solid rocket motor grains per internal ballistic requirements.
- Date Issued
- 2006
- Identifier
- CFE0001236, ucf:46929
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001236
- Title
- Approximate Binary Decision Diagrams for High-Performance Computing.
- Creator
-
Sivakumar, Anagha, Jha, Sumit Kumar, Leavens, Gary, Valliyil Thankachan, Sharma, University of Central Florida
- Abstract / Description
-
Many soft applications, such as machine learning and probabilistic computational modeling, can benefit from approximate but high-performance implementations. In this thesis, we study how binary decision diagrams (BDDs) can be used to synthesize approximate high-performance implementations from high-level specifications, such as program kernels written in a C-like language. We demonstrate the potential of our approach by designing nanoscale crossbars from such approximate Boolean decision diagrams. Our work may be useful in designing massively parallel approximate crossbar computing systems for application-specific domains such as probabilistic computational modeling.
- Date Issued
- 2018
- Identifier
- CFE0007414, ucf:52704
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007414
- Title
- Bridging the Gap between Application and Solid-State-Drives.
- Creator
-
Zhou, Jian, Wang, Jun, Lin, Mingjie, Fan, Deliang, Ewetz, Rickard, Qi, GuoJun, University of Central Florida
- Abstract / Description
-
Data storage is one of the important and often critical parts of a computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase-change memory (PCM), magnetic RAM (STT-RAM), and the memristor, have emerged recently, and many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not show simple locality. To provide optimal storage we need accurate predictions of I/O behavior. However, workloads are increasingly dynamic and diverse, making both long- and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added to NAND-flash-based solid-state disks (NAND-SSDs), but it introduces overheads such as address translation delay and garbage collection costs. Many recent studies aim to address these overheads; unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity in machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance through both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O-correlation-aware flash translation layer for NAND-flash-based solid-state disks. Fourth, we build a DRAM-based correlation-aware FTL emulator and study its performance under various filesystems.
- Date Issued
- 2018
- Identifier
- CFE0007273, ucf:52188
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007273
- Title
- Approximate In-memory computing on RERAMs.
- Creator
-
Khokhar, Salman Anwar, Heinrich, Mark, Leavens, Gary, Yuksel, Murat, Bagci, Ulas, Rahman, Talat, University of Central Florida
- Abstract / Description
-
Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques and by improvements in synthesis tools, architectures, and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard von Neumann architecture has separate data storage and data processing locations. Therefore, it suffers from a memory-processor communication bottleneck, which is commonly referred to as the 'memory wall'. The relatively slower progress in memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family shrink further, quantum tunneling effects are becoming more prominent. Simultaneously, chip transistor density is already so high that not all transistors can be powered up at the same time without violating temperature constraints, a phenomenon characterized as dark silicon. Coupled with this, there is also an increase in leakage currents at smaller feature sizes, resulting in a breakdown of Dennard scaling. These challenges cannot be met without fundamental changes in current computing paradigms. One viable solution is in-memory computing, where computing and storage are performed alongside each other. A number of emerging memory fabrics, such as ReRAMs, STT-RAMs, and PCM RAMs, are capable of performing logic in memory. ReRAMs possess high storage density, extremely low power consumption, and a low cost of fabrication. These advantages are due to the simple nature of their basic constituent elements, which allows nanoscale fabrication. We use flow-based computing on ReRAM crossbars, which exploits the natural sneak paths in those crossbars. Another concurrent development in computing is the maturation of domains that are error-resilient while being highly data- and power-intensive. These include machine learning, pattern recognition, computer vision, image processing, and networking. This shift in the nature of computing workloads has given weight to the idea of "approximate computing", in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
- Date Issued
- 2019
- Identifier
- CFE0007827, ucf:52817
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007827
- Title
- Comparing the Variational Approximation and Exact Solutions of the Straight Unstaggered and Twisted Staggered Discrete Solitons.
- Creator
-
Marulanda, Daniel, Kaup, David, Moore, Brian, Vajravelu, Kuppalapalle, University of Central Florida
- Abstract / Description
-
Discrete nonlinear Schrödinger (DNLS) equations have been used to model a variety of physical settings. One application of DNLS equations is to Bose-Einstein condensates trapped in deep optical-lattice potentials. These potentials effectively split the condensate into a set of droplets held in local potential wells, which are linearly coupled across the potential barriers between them [3]. In previous works, DNLS systems have also been used for symmetric on-site-centered solitons [11]. A few works have constructed different discrete solitons via the variational approximation (VA) and have explored the regions of their solutions [11, 12]. Exact solutions for straight unstaggered-twisted staggered (SUTS) discrete solitons have been found using the shooting method [12]. In this work, we use Newton's method, which converges to the exact solutions of SUTS discrete solitons; the VA is used to create starting points. There are two distinct types of solutions for the soliton's waveform: SUTS discrete solitons and straight unstaggered discrete solitons, where the twisted component is zero in the latter. We determine the range of parameters for which each type of solution exists. We also compare the regions of the VA solutions and the exact solutions in selected cases. Then, we graphically and numerically compare examples of the VA solutions with their corresponding exact solutions, and find that the VA provides reasonable approximations to the exact solutions.
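The numerical workflow in this abstract, a variational approximation supplying a starting point that Newton's method refines to an exact stationary solution, can be illustrated with a generic sketch. The residual F, Jacobian J, and the sample system used below are placeholders, not the actual SUTS stationary equations of the thesis.

```python
import numpy as np

def newton_refine(F, J, x0, tol=1e-12, max_iter=50):
    # Refine an approximate solution x0 (e.g. one produced by a
    # variational approximation) into a root of F(x) = 0.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), -F(x))  # solve J(x) step = -F(x)
        x = x + step
        if np.linalg.norm(step) < tol:       # quadratic convergence near a root
            break
    return x
```

Provided the starting approximation lies in the basin of attraction and the Jacobian is nonsingular at the root, the iteration converges quadratically, which is why a reasonable VA seed is enough in practice.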
- Date Issued
- 2016
- Identifier
- CFE0006350, ucf:51570
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006350
- Title
- THEORETICAL TAILORING OF PERFORATED THIN SILVER FILMS FOR AFFINITY SURFACE PLASMON RESONANCE BIOSENSOR APPLICATIONS.
- Creator
-
Gongora Jr., Renan, Zou, Shengli, University of Central Florida
- Abstract / Description
-
Metallic films, in conjunction with biochemically targeted probes, are expected to provide early diagnosis, targeted therapy, and non-invasive monitoring for epidemiology applications. The resonance wavelength peaks in the scattering spectra, both plasmonic and Wood-Rayleigh anomalies (WRAs), are affected by the metallic architecture. To date, much research has been devoted to extinction efficiency in the plasmonic region, whereas WRAs typically occur at wavelengths associated with the periodic distance of the structures. Since a significant number of papers have already focused on the plasmonic region of the visible spectrum, a less explored area of research is presented here: the desired resonance wavelength region was 400-500 nm, corresponding to the WRA for a silver film perforated with holes at a periodic distance of 400 nm. Simulations obtained with the discrete dipole approximation (DDA) method show sharp spectral bands (either high or low scattering efficiencies) in both wavelength regions of the visible spectrum for Ag films with cylindrical hole arrays. In addition, surprising results were obtained in the parallel scattering spectra, where the electric field is contained in the XY plane: when the angle between the metallic surface and the incident light was adjusted to 14 degrees, a bathochromic shift was observed for the WRA peak, suggesting a hybrid resonance mode. Metallic films have the potential to be used in instrumental techniques as sensors, e.g. surface plasmon resonance affinity biosensors, but are not limited to such techniques. Although the research here was aimed at affinity biosensors, other sensor designs can benefit from the optimized Ag film motifs. The intent of the study was to elucidate metal film motifs that, when incorporated into instrumental analysis, allow the quantification of genetic material in the visible region. Any research group that routinely benefits from quantification of various analytes in solution matrices will also benefit from this study, as there is a bewildering number of instrumental sensory methods and setups available.
- Date Issued
- 2014
- Identifier
- CFH0004538, ucf:45155
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004538
- Title
- Optical Properties of Single Nanoparticles and Two-dimensional Arrays of Plasmonic Nanostructures.
- Creator
-
Zhou, Yadong, Zou, Shengli, Harper, James, Zhai, Lei, Chen, Gang, Zheng, Qipeng, University of Central Florida
- Abstract / Description
-
The tunability of the plasmonic properties of nanomaterials makes them promising for many applications, such as molecular detection, spectroscopy techniques, and solar energy materials. In this thesis, we focus mainly on the interaction of light with single nanoparticles and two-dimensional plasmonic nanostructures using electrodynamic methods. The fundamental equations of electromagnetic theory, Maxwell's equations, are revisited to solve problems of light-matter interaction, particularly the interaction of light with noble-metal nanomaterials such as gold and silver. In Chapter 1, the Stokes parameters that describe the polarization states of an electromagnetic wave are presented, and the scattering and absorption of a particle with an arbitrary shape are discussed. In Chapter 2, several computational methods for solving the optical response of nanomaterials illuminated by incident light are studied, including the discrete dipole approximation (DDA) method and the coupled dipole (CD) method. In Chapter 3, the failure and reexamination of the relation between the Raman enhancement factor and the local enhanced electric field intensity is investigated by placing a molecular dipole in the vicinity of a silver rod. Using a silver rod and a molecular dipole, we demonstrate that the relation generated using a spherical nanoparticle cannot simply be applied to systems with particles of different shapes. In Chapter 4, a silver film with switchable total transmission/reflection is discussed. The film is composed of two-dimensional rectangular prisms, and the factors affecting the transmission (reflection) as well as the mechanisms leading to the phenomena are studied. In Chapters 5 and 6, a sandwiched nano-film composed of two 2D rectangular prism arrays and two glass substrates with a continuous film in between is examined to enhance the transmission of the continuous silver film.
- Date Issued
- 2018
- Identifier
- CFE0007117, ucf:51943
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007117
- Title
- The Power of Quantum Walk: Insights, Implementation, and Applications.
- Creator
-
Chiang, Chen-Fu, Wocjan, Pawel, Marinescu, Dan, Dechev, Damian, Mucciolo, Eduardo, University of Central Florida
- Abstract / Description
-
In this thesis, I investigate quantum walks in quantum computing from three aspects: insights, implementation, and applications. Quantum walks are the quantum analogue of classical random walks. For the insights, I list and explain the components required to quantize a classical random walk into a quantum walk, for instance, Markov chains, quantum phase estimation, and the spectral theorem. I then demonstrate how the product of two reflections in the walk operator provides a quadratic speed-up in comparison to the classical counterpart. For the implementation, I show the construction of an efficient circuit for realizing one single step of the quantum walk operator. Furthermore, I devise a more succinct circuit to approximately implement quantum phase estimation with constant-precision controlled phase-shift operators. From an implementation perspective, efficient circuits are always desirable because realizing a phase-shift operator with high precision would be a costly task and a critical obstacle. For the applications, I apply the quantum walk technique, along with other fundamental quantum techniques such as phase estimation, to solve the partition function problem. However, there might be scenarios in which the speed-up from the spectral gap is insignificant. In such situations, I provide an amplitude-amplification-based approach to prepare the thermal Gibbs state; this approach is useful when the spectral gap is extremely small. Finally, I further investigate and explore the effect of noise (perturbation) on the performance of quantum walks.
- Date Issued
- 2011
- Identifier
- CFE0004094, ucf:49148
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004094
- Title
- Image degradation due to surface scattering in the presence of aberrations.
- Creator
-
Choi, Narak, Harvey, James, Zeldovich, Boris, Moharam, M., Eastes, Richard, University of Central Florida
- Abstract / Description
-
This dissertation focuses on scattering phenomena from well-polished optical mirror surfaces. Specifically, image degradation by surface scatter from rough mirror surfaces is predicted for a two-mirror telescope operating at extremely short wavelengths (9 nm to 30 nm). To evaluate image quality, surface scatter is predicted from surface metrology data, and the point spread function in the presence of both surface scatter and aberrations is calculated. For predicting the scattered intensity distribution, both numerical and analytic methods are considered. Among the numerous analytic methods, the small perturbation method (classical Rayleigh-Rice surface scatter theory), the Kirchhoff approximation method (classical Beckmann-Kirchhoff surface scatter theory), and the generalized Harvey-Shack surface scatter theory are adopted. As a numerical method, the integral equation method (method of moments), known as a rigorous solution, is discussed. Since the numerical method is computationally too intensive to obtain the scattering prediction directly for the two-mirror telescope, it is used to validate the three approximate analytic methods in special cases. In our numerical comparison, the generalized Harvey-Shack model shows excellent agreement with the rigorous solution among the three approximate methods, and it is therefore used to predict surface scattering from the mirror surfaces. Regarding image degradation due to surface scatter in the presence of aberrations, it is shown that the composite point spread function can be obtained in explicit form in terms of convolutions of the geometrical point spread function and scaled bidirectional scattering distribution functions of the individual surfaces of the imaging system. The approximations and assumptions in this formulation are discussed. The result is compared to the irradiance distribution obtained using commercial non-sequential ray tracing software for the case of a two-mirror telescope operating at extreme ultraviolet wavelengths, and the two results are virtually identical. Finally, the image degradation due to surface scatter from the mirror surfaces and the aberrations of the telescope is evaluated in terms of the fractional ensquared energy (for different wavelengths and field angles), which is commonly used as an image quality requirement in many NASA astronomy programs.
- Date Issued
- 2012
- Identifier
- CFE0004289, ucf:49492
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004289
- Title
- Weighted Low-Rank Approximation of Matrices:Some Analytical and Numerical Aspects.
- Creator
-
Dutta, Aritra, Li, Xin, Sun, Qiyu, Mohapatra, Ram, Nashed, M, Shah, Mubarak, University of Central Florida
- Abstract / Description
-
This dissertation addresses some analytical and numerical aspects of the problem of weighted low-rank approximation of matrices. We propose and solve two different versions of weighted low-rank approximation problems and demonstrate, in addition, how these formulations can be used to efficiently solve some classic problems in computer vision. We also present the superior performance of our algorithms over the existing state-of-the-art unweighted and weighted low-rank approximation algorithms. Classical principal component analysis (PCA) is constrained to have equal weighting on the elements of the matrix, which might lead to a degraded design in some problems. To address this fundamental flaw in PCA, Golub, Hoffman, and Stewart proposed and solved a problem of constrained low-rank approximation of matrices: for a given matrix $A = (A_1\;A_2)$, find a low-rank matrix $X = (A_1\;X_2)$ such that ${\rm rank}(X)$ is less than $r$, a prescribed bound, and $\|A-X\|$ is small. Motivated by the above formulation, we propose a weighted low-rank approximation problem that generalizes the constrained low-rank approximation problem of Golub, Hoffman, and Stewart. We study a general framework obtained by pointwise multiplication with the weight matrix and consider the following problem: for a given matrix $A\in\mathbb{R}^{m\times n}$, solve \begin{eqnarray*}\min_{\substack{X}}\|\left(A-X\right)\odot W\|_F^2~{\rm subject~to~}{\rm rank}(X)\le r,\end{eqnarray*} where $\odot$ denotes pointwise multiplication and $\|\cdot\|_F$ is the Frobenius norm of matrices. In the first part, we study a special version of the above general weighted low-rank approximation problem. Instead of using pointwise multiplication with the weight matrix, we use regular matrix multiplication, replace the rank constraint by its convex surrogate, the nuclear norm, and consider the following problem: \begin{eqnarray*}\hat{X} = \arg \min_X \{\frac{1}{2}\|(A-X)W\|_F^2 +\tau\|X\|_\ast\},\end{eqnarray*} where $\|\cdot\|_*$ denotes the nuclear norm of $X$. Considering its resemblance to the classic singular value thresholding problem, we call it the weighted singular value thresholding (WSVT) problem. As expected, the WSVT problem has no closed-form analytical solution in general, and a numerical procedure is needed to solve it. We introduce auxiliary variables and apply a simple and fast alternating direction method to solve WSVT numerically. Moreover, we present a convergence analysis of the algorithm and propose a mechanism for estimating the weight from the data. We demonstrate the performance of WSVT on two computer vision applications: background estimation from video sequences and facial shadow removal. In both cases, WSVT shows superior performance to all other models traditionally used. In the second part, we study the general framework of the proposed problem. For a special case of weights, we study the limiting behavior of the solution to our problem, both analytically and numerically. In the limiting case of weights, as $(W_1)_{ij}\to\infty$ with $W_2=\mathbbm{1}$, the matrix of all ones, we show the solutions to our weighted problem converge, and the limit is the solution to the constrained low-rank approximation problem of Golub et al. Additionally, by asymptotic analysis of the solution to our problem, we propose a rate of convergence. By doing this, we make explicit connections between a vast genre of weighted and unweighted low-rank approximation problems. In addition, we devise a novel and efficient numerical algorithm based on the alternating direction method for the special case of weights and present a detailed convergence analysis. Our approach improves substantially over the existing weighted low-rank approximation algorithms proposed in the literature. We then explore the use of our algorithm on real-world problems in a variety of domains, such as computer vision and machine learning. Finally, for a special family of weights, we demonstrate an interesting property of the solution to the general weighted low-rank approximation problem, devise two accelerated algorithms by using this property, and present their effectiveness compared to the algorithm proposed in Chapter 4.
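As a point of reference for the WSVT formulation in this abstract, the classic unweighted singular value thresholding operator, which solves $\min_X \frac{1}{2}\|A-X\|_F^2 + \tau\|X\|_*$ in closed form by shrinking singular values, can be sketched as follows. This is the textbook operator the weighted problem generalizes, not the dissertation's own WSVT algorithm.

```python
import numpy as np

def svt(A, tau):
    # Singular value thresholding: shrink each singular value of A by tau,
    # the closed-form solution of min_X 0.5*||A - X||_F^2 + tau*||X||_*.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Because thresholding zeroes out small singular values, the result is both a shrinkage and a rank reduction of the input; the weighted variants above lose this closed form, which is why an alternating numerical scheme is needed there.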
- Date Issued
- 2016
- Identifier
- CFE0006833, ucf:51789
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006833