Current Search: parameter estimation
- Title
- PARAMETER ESTIMATION USING SENSOR FUSION AND MODEL UPDATING.
- Creator
- Francoforte, Kevin, Catbas, Necati, University of Central Florida
- Abstract / Description
- Engineers and infrastructure owners must manage an aging civil infrastructure in the US. Engineers can analyze structures using finite element models (FEM) and often base their engineering decisions on the results. Ultimately, the success of these decisions is directly related to how accurately the finite element model represents the real-life structure. Improper assumptions in the model, such as member properties or connections, can lead to inaccurate results. A major source of error in many finite element models of existing structures is improper representation of the boundary conditions. This study aims to integrate experimental and analytical concepts by means of parameter estimation, whereby the boundary condition parameters of the structure in question are determined. FEM updating is a commonly used method to determine the "as-is" condition of an existing structure. Experimental testing of the structure using static and/or dynamic measurements can be used to update the unknown parameters. Optimization programs update the unknown parameters by minimizing the error between the analytical and experimental measurements. Through parameter estimation, unknown parameters of the structure such as stiffness, mass, or support conditions can be estimated, or more appropriately "updated", so that the updated model better represents the actual conditions of the system. In this study, a densely instrumented laboratory test beam was used to carry out both analytical and experimental analysis of multiple boundary condition setups. The test beam was instrumented with an array of displacement transducers, tiltmeters, and accelerometers. Linear vertical springs represented the unknown boundary stiffness parameters in the numerical model of the beam.
Nine different load cases were performed; static measurements were used to update the spring stiffness, while dynamic measurements and additional load cases were used to verify the updated parameters. Two different optimization programs were used to update the unknown parameters, and the results were compared. One tool, Spreadsheet Parameter Estimation (SPE), was developed by the author and utilizes the Solver function of the widely available Microsoft Excel software. The other, the comprehensive MATLAB-based PARameter Identification System (PARIS), was developed at Tufts University. Optimization results from the two programs are presented and discussed for different boundary condition setups in this thesis. For this purpose, finite element models were updated using the static data and then checked against dynamic measurements for model validation. Model parameter updating provides excellent insight into the behavior of different boundary conditions and their effect on the overall structural behavior of the system. Updated FEM using estimated parameters from both optimization programs generally show promising results when compared to the experimental data sets. Although SPE is simple and generally straightforward to use, its limitations become apparent when dealing with complex, non-linear support conditions. Given the inherent error associated with experimental measurements and FEM modeling assumptions, PARIS is the better-suited tool for parameter estimation. Results from SPE can be used for quick analysis of structures and can serve as initial inputs for the more in-depth PARIS models. Different sensor types and spatial resolutions were also investigated to determine the minimum instrumentation that yields an acceptable model in terms of correlation between model and experimental data.
- Date Issued
- 2007
- Identifier
- CFE0001676, ucf:47206
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001676
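The updating loop this abstract describes (minimize the error between analytical and measured responses over an unknown support stiffness) can be sketched as follows. Everything here is a hypothetical stand-in, not the thesis's instrumented setup: the closed-form midspan-deflection model, the section properties, and the load values are illustrative, and a simple grid search replaces the Excel Solver / PARIS optimizers.

```python
import numpy as np

# Hypothetical beam: simply supported on two identical vertical springs of
# stiffness k, point load P at midspan. Midspan deflection combines beam
# bending with rigid-body settlement of the flexible supports.
E, I, L = 200e9, 8.0e-6, 4.0        # illustrative steel section (Pa, m^4, m)

def midspan_deflection(P, k):
    return P * L**3 / (48 * E * I) + P / (2 * k)

k_true = 5.0e6                                   # "as-is" support stiffness (N/m)
loads = np.array([1e3, 2e3, 4e3, 6e3, 8e3])      # load cases (N)
measured = midspan_deflection(loads, k_true)     # stand-in for measurements

# Update the unknown stiffness by minimizing the squared error between the
# analytical and "experimental" deflections over a trial grid.
trial_k = np.logspace(5, 8, 2000)
errors = [np.sum((midspan_deflection(loads, k) - measured) ** 2)
          for k in trial_k]
k_updated = trial_k[int(np.argmin(errors))]
print(f"updated stiffness: {k_updated:.3e} N/m")
```

In the thesis the same residual would come from a full FEM solve rather than a closed-form formula, and additional load cases would be held out for verification.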
- Title
- PARAMETER ESTIMATION IN LINEAR REGRESSION.
- Creator
- Ollikainen, Kati, Malone, Linda, University of Central Florida
- Abstract / Description
- Increasing amounts of data are available today for analysis and, often, for resource allocation. One method of analysis is linear regression, which uses least squares estimation to estimate a model's parameters. This research investigated, from a user's perspective, the ability of linear regression to estimate the parameters' confidence intervals at the usual 95% level for medium-sized data sets. A controlled simulation environment with known data characteristics (clean data, bias, and/or multicollinearity present) was used to show that underlying problems exist with confidence intervals not including the true parameter (even when the variable was selected). The Elder/Pregibon rule was used for variable selection. The bootstrap Percentile and BCa confidence intervals were compared, and adjustments to the usual 95% confidence intervals based on the Bonferroni and Scheffe multiple comparison principles were investigated. The results show that linear regression has problems capturing the true parameters in the confidence intervals for the sample sizes considered, that the bootstrap intervals perform no better than linear regression, and that the Scheffe intervals are too wide for any application considered. The Bonferroni adjustment is recommended for larger sample sizes and when the t-value for a selected variable is about 3.35 or higher. For smaller sample sizes, all methods show problems with type II errors resulting from confidence intervals that are too wide.
- Date Issued
- 2006
- Identifier
- CFE0001482, ucf:47081
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001482
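The usual-versus-Bonferroni comparison in this abstract can be sketched on simulated data. This is a minimal illustration, not the study's design: the sample size, coefficients, and predictors are invented, and standard-normal quantiles (via the stdlib `statistics.NormalDist`) approximate the t quantiles for this moderate sample size.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n, p = 200, 3
beta_true = np.array([1.5, -2.0, 0.8])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# OLS estimates and standard errors.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - p)
se = np.sqrt(sigma2 * np.diag(XtX_inv))

z = NormalDist().inv_cdf
alpha = 0.05
z_usual = z(1 - alpha / 2)          # per-parameter 95% interval
z_bonf = z(1 - alpha / (2 * p))     # Bonferroni: alpha split across p tests

usual = [(b - z_usual * s, b + z_usual * s) for b, s in zip(beta_hat, se)]
bonf = [(b - z_bonf * s, b + z_bonf * s) for b, s in zip(beta_hat, se)]
# Bonferroni intervals are wider, trading per-parameter power for joint coverage.
```

The widening is exactly the trade-off the abstract evaluates: better joint coverage of the true parameters at the cost of wider (less powerful) individual intervals.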
- Title
- CHARACTERIZATION OF AN ADVANCED NEURON MODEL.
- Creator
- Echanique, Christopher, Behal, Aman, University of Central Florida
- Abstract / Description
- This thesis focuses on an adaptive quadratic spiking model of a motoneuron that is both versatile in representing a range of experimentally observed neuronal firing patterns and computationally efficient for large network simulation. The objective of this research is to fit membrane voltage data to the model using a parameter estimation approach based on simulated annealing. By manipulating the system dynamics of the model, a realizable model with linear parameterization (LP) can be obtained to simplify the estimation process. With a persistently exciting current input applied to the model, simulated annealing efficiently determines the model parameters that minimize the squared error between the membrane voltage reference data and the data generated by the LP model. Simulation results show the feasibility of this approach for predicting a range of different neuron firing patterns.
- Date Issued
- 2012
- Identifier
- CFH0004259, ucf:44958
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004259
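The estimation loop in this abstract (simulated annealing minimizing a squared-error cost between reference data and a linearly parameterized model) can be sketched generically. The two-parameter toy trace below is an invented stand-in for the filtered regressors of the actual spiking model, and the cooling schedule and step sizes are arbitrary choices, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly parameterized trace: v(t) = a*exp(-t) + b*t, standing in for
# the LP model's filtered input/output signals.
t = np.linspace(0, 5, 200)

def model(theta):
    a, b = theta
    return a * np.exp(-t) + b * t

theta_true = np.array([2.0, 0.5])
v_ref = model(theta_true)                    # "reference" membrane voltage

def cost(theta):
    r = model(theta) - v_ref
    return r @ r                             # squared error to minimize

# Plain simulated annealing: random perturbations accepted by a Metropolis
# rule under a geometrically cooling temperature; best-so-far is tracked.
theta = np.array([0.0, 0.0])
best, best_cost = theta.copy(), cost(theta)
T = 1.0
for _ in range(5000):
    cand = theta + rng.normal(scale=0.1, size=2)
    dc = cost(cand) - cost(theta)
    if dc < 0 or rng.random() < np.exp(-dc / T):
        theta = cand
    if cost(theta) < best_cost:
        best, best_cost = theta.copy(), cost(theta)
    T *= 0.999                               # cooling schedule
```

Because the toy model is linear in its parameters, the cost surface is convex and the annealer settles near the true values; the real model's nonlinearity is what motivates a global method like annealing in the first place.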
- Title
- DATA-TRUE CHARACTERIZATION OF NEURONAL MODELS.
- Creator
- Suarez, Jose, Behal, Aman, University of Central Florida
- Abstract / Description
- In this thesis, a weighted least squares approach is first presented to estimate the parameters of an adaptive quadratic neuronal model. By casting the discontinuities in the state variables at the spiking instants as an impulse train driving the system dynamics, the neuronal output is represented as a linearly parameterized model that depends on filtered versions of the input current and the output voltage at the cell membrane. A prediction-error-based weighted least squares method is formulated for the model. This method allows rapid estimation of the model parameters under a persistently exciting input current injection. Simulation results show the feasibility of this approach for predicting multiple neuronal firing patterns. Applying the method to data from a detailed ion-channel-based model revealed issues that motivated the more robust resonate-and-fire model presented next. A second method is proposed to overcome some of the issues found with the adaptive quadratic model. The original quadratic model is replaced by a linear resonate-and-fire model, with a stochastic threshold, that is both computationally efficient and suitable for larger network simulations. The parameter estimation method consists of stages in which the parameters are divided into two sets. The first set, assumed to represent the subthreshold dynamics of the model, is estimated using a nonlinear least squares algorithm, while the second set, associated with the threshold and reset parameters, is estimated using maximum likelihood formulations. The validity of the estimation method is then tested using detailed Hodgkin-Huxley model data as well as experimental voltage recordings from rat motoneurons.
- Date Issued
- 2011
- Identifier
- CFE0003917, ucf:48724
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003917
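The weighted least squares step for a linearly parameterized model has a closed form worth showing. The regression below is a generic illustration with invented data, standing in for the abstract's filtered input-current and membrane-voltage regressors; per-sample weights come from a known (here, synthetic) noise level.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearly parameterized output y = X @ theta + noise, where the noise
# variance differs per sample -- the setting where weighted least squares
# outperforms ordinary least squares.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta_true = np.array([0.3, 1.7])
noise_sd = np.where(np.arange(n) < n // 2, 0.1, 2.0)   # heteroscedastic
y = X @ theta_true + rng.normal(size=n) * noise_sd

# WLS: weight each equation by the inverse noise variance, then solve the
# weighted normal equations (X^T W X) theta = X^T W y.
w = 1.0 / noise_sd**2
XtWX = X.T @ (w[:, None] * X)
theta_wls = np.linalg.solve(XtWX, X.T @ (w * y))
```

In a prediction-error setting the weights would instead be derived from the residuals of a first pass, down-weighting the poorly predicted (e.g., near-spike) samples.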
- Title
- Techniques for automated parameter estimation in computational models of probabilistic systems.
- Creator
- Hussain, Faraz, Jha, Sumit, Leavens, Gary, Turgut, Damla, Uddin, Nizam, University of Central Florida
- Abstract / Description
- The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems such that the resultant model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems, that is, systems that, in general, behave differently under identical conditions. The algorithms combine formal verification and mathematical optimization to explore a model's parameter space. The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second uses an approach based on Bayesian statistical model checking (BSMC). The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism can represent diabetic behavior, by synthesizing values of three parameters that ensure the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how the algorithms use formal verification, statistical hypothesis testing, and mathematical optimization to automatically synthesize parameters of complex probabilistic models to meet user-specified behavioral properties.
- Date Issued
- 2016
- Identifier
- CFE0006117, ucf:51200
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006117
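The hypothesis-testing core of the first algorithm, Wald's sequential probability ratio test, is compact enough to sketch. This is a textbook Bernoulli SPRT on simulated property-satisfaction outcomes, not the dissertation's implementation; the probabilities, error bounds, and the 0.8 satisfaction rate are all invented for illustration.

```python
import math
import random

def sprt(samples, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's SPRT for a Bernoulli proportion: accumulate the log-likelihood
    ratio of H1 (p >= p1) vs H0 (p <= p0) per sample and stop at a boundary."""
    a = math.log(beta / (1 - alpha))       # accept-H0 boundary
    b = math.log((1 - beta) / alpha)       # accept-H1 boundary
    llr = 0.0
    for x in samples:
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "H0"
        if llr >= b:
            return "H1"
    return "undecided"

random.seed(3)
# Simulated model runs whose property holds with probability 0.8 -- well above
# the indifference region, so the test should stop early in favor of H1.
runs = [random.random() < 0.8 for _ in range(1000)]
verdict = sprt(runs)
```

In the parameter-synthesis loop, each candidate parameter point would trigger one such sequential test, and simulated annealing would steer the search using the verdicts.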
- Title
- SMOOTHING PARAMETER SELECTION IN NONPARAMETRIC FUNCTIONAL ESTIMATION.
- Creator
- Amezziane, Mohamed, Ahmad, Ibrahim, University of Central Florida
- Abstract / Description
- This study develops new techniques for obtaining completely data-driven choices of the smoothing parameter in functional estimation, within the confines of minimal assumptions. The focus is the estimation of the distribution function, the density function, and their multivariate extensions, along with some of their functionals such as the location and the integrated squared derivatives.
- Date Issued
- 2004
- Identifier
- CFE0000307, ucf:46314
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000307
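One standard completely data-driven choice of the smoothing parameter in density estimation is least-squares cross-validation for a kernel estimator, sketched below. This is a generic textbook criterion offered as context, not the dissertation's proposed technique; the Gaussian sample and the bandwidth grid are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=300)            # sample from an "unknown" density

def phi(u, s):                      # Gaussian kernel with scale s
    return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

def lscv(h):
    """Least-squares cross-validation score for a Gaussian KDE:
    integral of fhat^2 minus twice the leave-one-out mean density."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # For Gaussian kernels the integral of fhat^2 has a closed form: a
    # double sum of Gaussians with scale h*sqrt(2) (kernel self-convolution).
    int_f2 = phi(d, h * np.sqrt(2)).sum() / n**2
    k = phi(d, h)
    loo = (k.sum(axis=1) - k.diagonal()) / (n - 1)    # leave-one-out fhat(x_i)
    return int_f2 - 2 * loo.mean()

grid = np.linspace(0.05, 1.5, 60)
h_star = grid[int(np.argmin([lscv(h) for h in grid]))]
```

The criterion is minimized over a grid; the score diverges as the bandwidth shrinks to zero, so the minimizer is interior and requires no tuning constants, which is the "completely data-driven" property at issue.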
- Title
- APPLICATION OF TRAINED POD-RBF TO INTERPOLATION IN HEAT TRANSFER AND FLUID MECHANICS.
- Creator
- Ashley, Rebecca A, Kassab, Alain, University of Central Florida
- Abstract / Description
- To accurately model or predict future operating conditions of a system in engineering or applied mechanics, it is necessary to understand its fundamental properties. These may be the material parameters, defining dimensional characteristics, or the boundary conditions. However, there are instances with little to no prior knowledge of the system's properties or conditions, and consequently the problem cannot be modeled accurately. It is therefore critical to define a method that can identify the desired characteristics of the system without extensive computation time. This thesis formulates an inverse approach using proper orthogonal decomposition (POD) with an accompanying radial basis function (RBF) interpolation network. The method can predict the desired characteristics of a specimen even with little prior knowledge of the system. The thesis first develops a conductive heat transfer problem: using the truncated POD-RBF interpolation network, temperature values are predicted for a varying Biot number. Then a simple bifurcation problem is modeled and solved for velocity profiles while changing the mass flow rate. This bifurcation problem provides the data and foundation for future research into the left ventricular assist device (LVAD) and implementation of POD-RBF. The trained POD-RBF inverse approach defined in this thesis can be implemented in several applications of engineering and mechanics. It provides model reduction, error filtering, regularization, and an improvement over previous analyses using computational fluid dynamics (CFD).
- Date Issued
- 2018
- Identifier
- CFH2000279, ucf:45782
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000279
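The trained POD-RBF network pattern (snapshots, truncated SVD basis, RBF interpolation of the POD coefficients over the parameter) can be sketched in a few lines. The 1-D temperature profile below is a hypothetical stand-in for the thesis's heat transfer fields, with the scalar parameter playing the role of the Biot number; the multiquadric kernel and rank-4 truncation are arbitrary illustrative choices.

```python
import numpy as np

# Snapshot fields: a smooth 1-D steady "temperature" profile depending on a
# scalar parameter p (a stand-in for the Biot number).
xs = np.linspace(0, 1, 50)

def field(p):
    return np.cosh(p * (1 - xs)) / np.cosh(p)

params = np.linspace(0.5, 5.0, 12)                  # training parameter values
snapshots = np.column_stack([field(p) for p in params])

# POD: truncated SVD of the snapshot matrix gives an orthonormal basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4
basis = U[:, :r]
coeffs = basis.T @ snapshots                        # r x n_train POD coefficients

# RBF network: interpolate each POD coefficient over the parameter space.
def rbf(a, b, c=1.0):
    return np.sqrt((a[:, None] - b[None, :]) ** 2 + c**2)   # multiquadric

W = np.linalg.solve(rbf(params, params), coeffs.T)  # interpolation weights

def predict(p):
    return basis @ (rbf(np.atleast_1d(p), params) @ W).ravel()

p_new = 2.3                                         # unseen parameter value
err = np.linalg.norm(predict(p_new) - field(p_new)) / np.linalg.norm(field(p_new))
```

Once trained, `predict` replaces the expensive forward solver: evaluating the network at a new parameter costs one small matrix-vector product, which is what makes the inverse search cheap.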
- Title
- PARAMETER ESTIMATION IN HEAT TRANSFER AND ELASTICITY USING TRAINED POD-RBF NETWORK INVERSE METHODS.
- Creator
- Rogers, Craig, Kassab, Alain, University of Central Florida
- Abstract / Description
- In applied mechanics it is always necessary to understand the fundamental properties of a system in order to generate an accurate numerical model or to predict future operating conditions. These fundamental properties include, but are not limited to, the material parameters of a specimen, the boundary conditions inside a system, and essential dimensional characteristics that define the system or body. In certain instances, however, there may be little to no knowledge of the system's conditions or properties, and as a result the problem cannot be modeled accurately using standard numerical methods. Consequently, it is critical to define an approach capable of identifying such characteristics of the problem at hand. In this thesis, an inverse approach is formulated using proper orthogonal decomposition (POD) with an accompanying radial basis function (RBF) network to estimate the material parameters of a specimen with little prior knowledge of the system. Specifically, conductive heat transfer and linear elasticity problems are developed and modeled with a corresponding finite element method (FEM) or boundary element method (BEM). To create the truncated POD-RBF network used in the inverse approach, a series of direct FEM or BEM solutions generates a statistical data set of temperatures or deformations in the system or body, each solution corresponding to a different set of material parameters. The data set is then transformed via POD to generate an orthonormal basis, and the desired material characteristics are solved for using the Levenberg-Marquardt (LM) algorithm. Here, the LM algorithm can be understood simply as minimizing the Euclidean norm of the objective least squares function(s). The trained POD-RBF inverse technique outlined in this thesis provides a flexible framework by which this inverse approach can be implemented in various fields of engineering and mechanics.
More importantly, this approach is designed to offer an inexpensive way to accurately estimate material characteristics or properties using nondestructive techniques. While the POD-RBF inverse approach outlined in this thesis focuses primarily on applications in conductive heat transfer, elasticity, and fracture mechanics, the technique is designed to be directly applicable to other realistic conditions and/or industries.
- Date Issued
- 2010
- Identifier
- CFE0003267, ucf:48517
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003267
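The Levenberg-Marquardt step (damped Gauss-Newton on the least-squares objective) can be shown for a one-parameter inverse conduction toy problem. This is a hypothetical sketch, not the thesis's FEM/BEM pipeline: the closed-form profile, noise level, and damping schedule are invented, and a finite difference replaces an analytic Jacobian.

```python
import numpy as np

# Inverse problem sketch: recover a scalar material parameter p from
# "measured" temperatures by minimizing ||T_model(p) - T_measured||^2
# with Levenberg-Marquardt (damped Gauss-Newton).
xs = np.linspace(0, 1, 30)

def T_model(p):
    return np.cosh(p * (1 - xs)) / np.cosh(p)

p_true = 3.0
rng = np.random.default_rng(5)
T_meas = T_model(p_true) + rng.normal(scale=1e-3, size=xs.size)

p, lam = 0.5, 1e-2                  # initial guess and damping factor
for _ in range(50):
    r = T_model(p) - T_meas
    J = (T_model(p + 1e-6) - T_model(p)) / 1e-6       # finite-difference Jacobian
    step = -(J @ r) / (J @ J + lam)                   # damped normal equation (1 param)
    if np.sum((T_model(p + step) - T_meas) ** 2) < r @ r:
        p, lam = p + step, lam * 0.5                  # accept: relax damping
    else:
        lam *= 2.0                                    # reject: increase damping
```

The accept/reject rule on the damping factor is what interpolates LM between gradient descent (large damping, robust far from the optimum) and Gauss-Newton (small damping, fast near it).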
- Title
- Characterization of a Spiking Neuron Model via a Linear Approach.
- Creator
- Jabalameli, Amirhossein, Behal, Aman, Hickman, James, Haralambous, Michael, University of Central Florida
- Abstract / Description
- In the past decade, characterizing spiking neuron models has been extensively researched as an essential issue in computational neuroscience. In this thesis, we examine the estimation problem for two different neuron models. In Chapter 2, we propose a modified Izhikevich model with an adaptive threshold. In our two-stage estimation approach, a linear least squares method and a linear model of the threshold are derived to predict the location of neuronal spikes. However, the desired results are not obtained, and the predicted model is unsuccessful in duplicating the spike locations. Chapter 3 focuses on the parameter estimation problem for a multi-timescale adaptive threshold (MAT) neuronal model. Using the dynamics of a non-resetting leaky integrator equipped with an adaptive threshold, a constrained iterative linear least squares method is implemented to fit the model to the reference data. Through manipulation of the system dynamics, the threshold voltage can be obtained as a realizable model that is linear in the unknown parameters. This linearly parametrized realizable model is then utilized inside a prediction-error-based framework to identify the threshold parameters, with the purpose of predicting single-neuron precise firing times. The estimation scheme is evaluated using both synthetic data obtained from an exact model and experimental data obtained from in vitro rat somatosensory cortical neurons. Results show the ability of this approach to fit the MAT model to different types of reference data.
- Date Issued
- 2015
- Identifier
- CFE0005958, ucf:50803
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005958
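The key observation that the MAT-style threshold is linear in its unknown parameters can be illustrated with synthetic spike data. The kernel timescales, spike times, and parameter values below are invented for illustration, and plain (unconstrained) least squares stands in for the thesis's constrained iterative scheme.

```python
import numpy as np

# MAT-style adaptive threshold: after each spike at time s, the threshold
# gains a1*exp(-(t-s)/t1) + a2*exp(-(t-s)/t2) on top of a resting level w.
# At each spike instant the membrane voltage equals the threshold, so given
# spike times the unknowns (w, a1, a2) enter linearly.
t1, t2 = 10.0, 200.0                       # fixed kernel timescales (ms)
w_true, a1_true, a2_true = -50.0, 15.0, 3.0
spikes = np.array([20.0, 35.0, 60.0, 100.0, 160.0, 240.0])

def regressors(t):
    past = spikes[spikes < t]              # contributions of earlier spikes
    h1 = np.exp(-(t - past) / t1).sum()
    h2 = np.exp(-(t - past) / t2).sum()
    return np.array([1.0, h1, h2])

A = np.vstack([regressors(s) for s in spikes])
v_at_spikes = A @ np.array([w_true, a1_true, a2_true])   # synthetic "data"

# Ordinary linear least squares recovers the threshold parameters exactly
# from this noise-free synthetic record.
theta_hat, *_ = np.linalg.lstsq(A, v_at_spikes, rcond=None)
```

With noisy voltages the same regression would be solved iteratively and with sign constraints on the kernel amplitudes, which is where the constrained formulation in the thesis comes in.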
- Title
- Sampling and Subspace Methods for Learning Sparse Group Structures in Computer Vision.
- Creator
- Jaberi, Maryam, Foroosh, Hassan, Pensky, Marianna, Gong, Boqing, Qi, GuoJun, University of Central Florida
- Abstract / Description
- The unprecedented growth of data in volume and dimension has led to an increased number of computationally demanding, data-driven decision-making methods in many disciplines, such as computer vision, genomics, and finance. Research on big data aims to understand and describe trends in massive volumes of high-dimensional data. High volume and dimension are the determining factors in both the computational and the time complexity of algorithms. The challenge grows when the data are formed of the union of group-structures of different dimensions embedded in a high-dimensional ambient space. To address the problem of high volume, we propose a sampling method, the Sparse Withdrawal of Inliers in a First Trial (SWIFT), which determines the smallest sample size, taken in one grab, such that all group-structures are adequately represented and discovered with high probability. The key features of SWIFT are: (i) sparsity, which is independent of the population size; (ii) no required prior knowledge of the distribution of the data or the number of underlying group-structures; and (iii) robustness in the presence of an overwhelming number of outliers. We report a comprehensive study of the proposed sampling method in terms of accuracy, functionality, and effectiveness in reducing the computational cost in various applications of computer vision. In the second part of this dissertation, we study dimensionality reduction for multi-structural data. We propose a probabilistic subspace clustering method that unifies soft and hard clustering in a single framework. This is achieved by introducing a delayed association of uncertain points to subspaces of lower dimensions based on a confidence measure. Delayed association yields higher accuracy in clustering subspaces that have ambiguities, i.e., due to intersections and high levels of outliers/noise, and hence leads to more accurate self-representation of the underlying subspaces.
Altogether, this dissertation addresses the key theoretical and practical issues of size and dimension in big data analysis.
- Date Issued
- 2018
- Identifier
- CFE0007017, ucf:52039
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007017
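The one-grab sample-size question behind SWIFT can be sketched with a simple tail bound: how large must a sample be so that, with high probability, it contains enough points from the smallest group-structure? The binomial (sampling-with-replacement) model below is an illustrative simplification of the actual SWIFT analysis, and the fractions and thresholds are invented; note the answer depends only on the group's fraction, not the population size, matching feature (i) of the abstract.

```python
import math

def min_sample_size(w_min, k, eta=0.01, m_max=10000):
    """Smallest one-grab sample size m such that, with probability >= 1 - eta,
    the sample contains at least k points from the smallest group-structure,
    which holds a fraction w_min of the data (outliers count toward the rest).
    Binomial model: an illustrative simplification of the SWIFT bound."""
    for m in range(k, m_max):
        # P(fewer than k hits) under Binomial(m, w_min)
        p_fail = sum(math.comb(m, i) * w_min**i * (1 - w_min) ** (m - i)
                     for i in range(k))
        if p_fail <= eta:
            return m
    return None

# E.g.: smallest structure holds 5% of the data, need >= 10 of its points.
m = min_sample_size(w_min=0.05, k=10, eta=0.01)
```

As expected, a larger minimum-group fraction shrinks the required sample, and the bound never references the total number of points, which is the "sparsity independent of population size" property.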