Current Search: Monte Carlo
- Title
- PERCOLATION STUDY OF NANO-COMPOSITE CONDUCTIVITY USING MONTE CARLO SIMULATION.
- Creator
-
Bai, Jing, Lin, Kuo-Chi, University of Central Florida
- Abstract / Description
-
A Monte Carlo model is developed for predicting the electrical conductivity of carbon nanofiber composite materials. The conductive nanofibers are modeled as both 2D and 3D networks of finite sites that are randomly distributed. The percolation behavior of the network is studied using the Monte Carlo method, which leads to the determination of the percolation threshold. The effect of the nanofiber aspect ratio on the critical nanofiber volume rate is investigated. In the current model, each nanofiber is identified by five independent geometrical parameters (i.e., three coordinates in space and two orientation angles), and each has three controlling parameters: the nanofiber length, the nanofiber diameter, and the nanofiber aspect ratio. The simulation results reveal a relationship between the fiber aspect ratio and the percolation threshold: the higher the aspect ratio, the lower the threshold. With the simulation results obtained from the Monte Carlo model, the effective electrical conductivity of the composite is then determined by assuming that the conductivity is proportional to the ratio of the number of nanofibers forming the largest cluster to the total number of nanofibers. The numerical results indicate that as the volume rate reaches a critical value, the conductivity starts to rise sharply. These simulation results agree fairly well with experimental and numerical data published earlier by others. In addition, we investigate the convergence of the current percolation model, find that the tunneling effect does not greatly affect the critical volume rate, and observe that the percolation model is not scalable.
- Date Issued
- 2009
- Identifier
- CFE0002644, ucf:48230
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002644
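The stick-percolation procedure described in the abstract above can be illustrated with a short Monte Carlo sketch. This is not the author's code: it is a minimal 2D version, assuming widthless sticks dropped in a unit square, union-find clustering of intersecting sticks, and a spanning test between the left and right edges; all parameter values are illustrative.

```python
import math
import random

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def intersect(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly intersect."""
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

def spans(n_fibers, length, rng):
    """Drop random sticks in the unit square; True if one connected
    cluster of intersecting sticks touches both x=0 and x=1."""
    fibers = []
    for _ in range(n_fibers):
        x, y, a = rng.random(), rng.random(), rng.random() * math.pi
        dx, dy = 0.5 * length * math.cos(a), 0.5 * length * math.sin(a)
        fibers.append(((x - dx, y - dy), (x + dx, y + dy)))
    parent = list(range(n_fibers))
    for i in range(n_fibers):
        for j in range(i + 1, n_fibers):
            if intersect(*fibers[i], *fibers[j]):
                parent[find(parent, i)] = find(parent, j)   # union
    touch_l, touch_r = {}, {}
    for k, (p, q) in enumerate(fibers):
        r = find(parent, k)
        touch_l[r] = touch_l.get(r, False) or min(p[0], q[0]) <= 0.0
        touch_r[r] = touch_r.get(r, False) or max(p[0], q[0]) >= 1.0
    return any(touch_l[r] and touch_r[r] for r in touch_l)

rng = random.Random(1)
for n in (100, 200, 400, 800):        # sweep density to bracket the threshold
    p = sum(spans(n, 0.15, rng) for _ in range(20)) / 20
    print(f"N = {n:4d}  spanning probability ~ {p:.2f}")
```

Sweeping the stick count (or length) and locating where the spanning probability crosses 0.5 gives a crude threshold estimate; repeating the sweep for longer sticks reproduces the qualitative trend the abstract reports, that higher aspect ratio lowers the threshold.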
- Title
- COARSE GRAINED MONTE CARLO SIMULATION OF THE SELF-ASSEMBLY OF THE HIV-1 CAPSID PROTEIN.
- Creator
-
Weber, Jeffrey, Chen, Bo, University of Central Florida
- Abstract / Description
-
In this study, a Monte Carlo simulation was designed to observe the self-assembly of the HIV-1 capsid protein. The simulation allowed a coarse grained model of the capsid protein with defined interaction sites to move freely in three dimensions using the Metropolis criterion. Observations were made as to which parameters affected the assembly process, and the ways in which the assembly was affected were also noted. It was found that proper dimerization of the capsid protein was necessary for the lattice to form properly. It was also found that a strong trimeric interface could be responsible for double-layered assemblies. Further studies may be conducted by varying the parameters further or by reworking the dynamics of the simulation. The possible causes of curvature within the assembly still need further research.
- Date Issued
- 2014
- Identifier
- CFH0004618, ucf:45316
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004618
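The Metropolis criterion mentioned in the abstract accepts a proposed move with probability min(1, exp(-ΔE/kT)). A minimal sketch of that acceptance loop, with a toy Lennard-Jones pair potential standing in for the capsid protein's defined interaction sites (all values illustrative, not from the thesis):

```python
import math
import random

rng = random.Random(0)
N, L, T, STEP = 20, 10.0, 1.0, 0.3   # particles, box size, temperature, move size

pos = [[rng.uniform(0, L) for _ in range(3)] for _ in range(N)]

def pair_energy(a, b):
    """Toy Lennard-Jones stand-in for the coarse-grained interaction sites."""
    r2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    r2 = max(r2, 0.64)                 # avoid overflow at tiny separations
    s6 = 1.0 / r2 ** 3
    return 4.0 * (s6 * s6 - s6)

def energy_of(i, p):
    return sum(pair_energy(p, pos[j]) for j in range(N) if j != i)

accepted = 0
for step in range(2000):
    i = rng.randrange(N)
    trial = [min(L, max(0.0, x + rng.uniform(-STEP, STEP))) for x in pos[i]]
    dE = energy_of(i, trial) - energy_of(i, pos[i])
    if dE <= 0 or rng.random() < math.exp(-dE / T):   # Metropolis rule
        pos[i] = trial
        accepted += 1
print(f"acceptance ratio: {accepted / 2000:.2f}")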
- Title
- STUDY OF LOW SPEED TRANSITIONAL REGIME GAS FLOWS IN MICROCHANNELS USING INFORMATION PRESERVATION (IP) METHOD.
- Creator
-
KURSUN, Umit, Kapat, Jayanta, University of Central Florida
- Abstract / Description
-
Proper design of thermal management solutions for future nano-scale electronics or photonics will require knowledge of flow and transport through micron-scale ducts. As with their macro-scale conventional counterparts, such micron-scale flow systems require robust simulation tools for early-stage design iterations. It can be envisioned that an ideal nanoscale thermal management (NSTM) solution will involve two-phase flow, liquid flow, and gas flow. This study focuses on numerical simulation of gas flow in microchannels as a fundamental thermal management technique in any future NSTM solution. A well-known particle-based method, Direct Simulation Monte Carlo (DSMC), is selected as the simulation tool. Unlike continuum-based equations, which fail at large Knudsen (Kn) numbers, the DSMC method is valid in all Knudsen regimes. Due to its conceptual simplicity and flexibility, DSMC has already given satisfactory answers to a broad range of macroscopic problems, and it also has great potential for handling complex MEMS flow problems with ease. However, the high-level statistical noise in DSMC must be eliminated and pressure boundary conditions must be effectively implemented in order to use DSMC under subsonic flow conditions. The statistical noise of classical DSMC can be eliminated through the use of the Information Preservation (IP) method, which saves several orders of magnitude in computational time compared to a similar DSMC simulation. As in the regular DSMC procedure, the molecular velocity is used to determine the molecular positions and compute collisions. Separating the macroscopic velocity from the molecular velocity through the use of the IP method, however, eliminates the high level of statistical noise typical of DSMC calculations of low-speed flows. The conventional boundary conditions of the classical DSMC method, such as constant-velocity free-stream and vacuum conditions, are incorrect for subsonic flows: there should be a substantial amount of backpressure allowing new molecules to enter from the outlet as well as the inlet boundaries. Additionally, the application of pressure boundaries facilitates direct comparison of numerical and experimental results. Therefore, the main aim of this study is to build the unidirectional, non-isothermal IP algorithm with periodic boundary conditions on the two-dimensional classical DSMC algorithm. The IP algorithm is further modified to implement pressure boundary conditions using the method of characteristics. The applicability of the final algorithm to real flow situations is verified on parallel-plate Poiseuille and backward-facing step flows in microchannels, which are established benchmark problems in computational fluid dynamics. The backward-facing step geometry is also of practical importance in a variety of engineering applications, including integrated circuit (IC) design. Such an investigation in microchannels with sufficient accuracy may provide insight into the more complex flow and transport processes in any future NSTM solution. The flow and heat transfer mechanisms at different Knudsen numbers are investigated.
- Date Issued
- 2006
- Identifier
- CFE0001281, ucf:46910
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001281
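The regime classification underlying this work comes from the Knudsen number, Kn = λ/L, the ratio of the molecular mean free path to the channel dimension; DSMC is most valuable in the transitional regime (roughly 0.1 < Kn < 10), where continuum equations fail. A small worked example, assuming hard-sphere molecules with illustrative nitrogen-like values:

```python
import math

# Hard-sphere mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p).
# The numbers below are illustrative room-temperature, atmospheric values.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 300.0             # temperature, K
p   = 101325.0          # pressure, Pa
d   = 3.7e-10           # molecular diameter, m

lam = k_B * T / (math.sqrt(2) * math.pi * d * d * p)
for H in (1e-3, 1e-5, 1e-6, 1e-7):   # channel heights in metres
    Kn = lam / H
    regime = ("continuum" if Kn < 0.01 else
              "slip" if Kn < 0.1 else
              "transitional" if Kn < 10 else "free-molecular")
    print(f"H = {H:8.0e} m  Kn = {Kn:8.3f}  -> {regime}")
```

At atmospheric pressure the mean free path is tens of nanometres, so micron-scale channels land in the slip and transitional regimes, which is exactly where the IP-augmented DSMC of this study is aimed.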
- Title
- MULTIPLE SCATTERING OF LIGHT IN INHOMOGENEOUS MEDIA AND APPLICATIONS.
- Creator
-
Mujat, Claudia, Dogariu, Aristide, University of Central Florida
- Abstract / Description
-
Light scattering-based techniques are being developed for non-invasive diagnostics of inhomogeneous media in various fields, such as medicine, biology, and material characterization. However, as most media of interest are highly scattering and have a complex structure, it is difficult to obtain a full analytical solution of the scattering problem without introducing approximations and assumptions about the properties of the system under consideration. Moreover, most previous studies deal with idealized scattering situations, rarely encountered in practice. This dissertation provides new analytical, numerical, and experimental solutions to describe subtle effects introduced by the properties of the light sources, and by the boundaries, absorption, and morphology of the investigated media. A novel Monte Carlo simulation was developed to describe the statistics of partially coherent beams after propagation through inhomogeneous media. The Monte Carlo approach also enabled us to study the influence of the refractive index contrast on the diffusive processes, to discern between different effects of absorption in multiple scattering, and to support experimental results on inhomogeneous media with complex morphology. A detailed description of chromatic effects in scattering was used to develop new models that explain the spectral dependence of the detected signal in applications such as imaging and diffuse reflectance measurements. The quantitative and non-invasive characterization of inhomogeneous media with complex structures, such as porous membranes, diffusive coatings, and incipient lesions in natural teeth, was then demonstrated.
- Date Issued
- 2004
- Identifier
- CFE0000048, ucf:46143
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000048
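A bare-bones Monte Carlo photon-migration sketch of the kind the abstract builds on: exponentially sampled free paths, isotropic scattering, and absorption handled as a survival weight in a homogeneous slab. It is a generic textbook scheme with illustrative coefficients, not the dissertation's partially coherent beam model:

```python
import math
import random

rng = random.Random(2)
MU_S, MU_A, THICK = 10.0, 0.1, 1.0   # scattering/absorption (1/mm), slab (mm)

def run_photon():
    """Isotropic-scattering random walk through a slab; returns
    ('transmitted'|'reflected', surviving weight)."""
    z, w = 0.0, 1.0
    uz = 1.0                                        # launched along +z
    while True:
        s = -math.log(rng.random()) / (MU_S + MU_A)  # exponential free path
        z += uz * s
        if z < 0.0:
            return "reflected", w
        if z > THICK:
            return "transmitted", w
        w *= MU_S / (MU_S + MU_A)     # implicit absorption (survival weight)
        uz = rng.uniform(-1.0, 1.0)   # isotropic scattering: new z cosine

counts = {"reflected": 0.0, "transmitted": 0.0}
N = 20000
for _ in range(N):
    tag, w = run_photon()
    counts[tag] += w
print({k: round(v / N, 3) for k, v in counts.items()})
```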
- Title
- From Excited Charge Dynamics to Cluster Diffusion: Development and Application of Techniques Beyond DFT and KMC.
- Creator
-
Acharya, Shree Ram, Rahman, Talat, Chow, Lee, Stolbov, Sergey, Wu, Annie, University of Central Florida
- Abstract / Description
-
This dissertation focuses on developing reliable and accurate computational techniques that enable the examination of static and dynamic properties of various activated phenomena using deterministic and stochastic approaches. To explore ultrafast electron dynamics in materials with strong electron-electron correlation under the influence of a laser pulse, an ab initio electronic structure method based on time-dependent density functional theory (TDDFT) in combination with dynamical mean field theory (DMFT) is developed and applied to: 1) the single-band Hubbard model; 2) the multi-band metal Ni; and 3) the multi-band insulator MnO. The ultrafast demagnetization in Ni reveals the importance of memory and correlation effects, leading to much better agreement with experimental data than previously obtained, while for MnO the main channels of charge response are identified. Furthermore, an analytical form of the exchange-correlation kernel is obtained for future applications, saving tremendous computational cost. In another project, the size-dependent temporal and spatial evolution of homo- and hetero-epitaxial adatom islands on fcc(111) transition-metal surfaces is investigated using the self-learning kinetic Monte Carlo (SLKMC) method, which explores long-time dynamics unbiased by a priori selected diffusion processes. Novel multi-atom diffusion processes are revealed. Trends in the diffusion coefficients point to the relative roles of adatom lateral interaction and island-substrate binding energy in determining island diffusivity. Moreover, analysis of the large database of activation energy barriers generated for a multitude of diffusion processes across a variety of systems allows extraction of a set of descriptors that in turn generate predictive models for energy barrier evaluation. Finally, the kinetics of the industrially important methanol partial oxidation reaction on a model nanocatalyst is explored using KMC supplemented by DFT energetics. The calculated thermodynamics identifies the active surface sites for the reaction components, including different intermediates, and the energetics of competing probable reaction pathways, while the kinetic study addresses the selectivity of products and its variation with external factors.
- Date Issued
- 2018
- Identifier
- CFE0006965, ucf:52910
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006965
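The engine inside any kinetic Monte Carlo study, including SLKMC, is the rejection-free event loop: pick one event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch with made-up Arrhenius barriers (SLKMC would discover and tabulate these on the fly rather than fix them in advance):

```python
import math
import random

rng = random.Random(3)
kT = 0.025           # eV, roughly room temperature
PREFACTOR = 1e12     # 1/s, typical attempt frequency

# Illustrative barriers (eV) for competing diffusion processes.
barriers = {"edge_run": 0.25, "corner_round": 0.45, "detach": 0.80}
rates = {k: PREFACTOR * math.exp(-e / kT) for k, e in barriers.items()}

def kmc_step(rates):
    """Pick one event with probability rate/total and advance the clock."""
    total = sum(rates.values())
    r, acc = rng.random() * total, 0.0
    for name, rate in rates.items():
        acc += rate
        if r < acc:
            chosen = name
            break
    dt = -math.log(rng.random()) / total    # exponential waiting time
    return chosen, dt

t, tally = 0.0, {k: 0 for k in rates}
for _ in range(10000):
    ev, dt = kmc_step(rates)
    t += dt
    tally[ev] += 1
print(f"simulated time: {t:.3e} s, event counts: {tally}")
```

Because the clock advances by the physical waiting time between events, KMC reaches time scales far beyond molecular dynamics, which is what makes the long-time island-diffusion studies above feasible.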
- Title
- Understanding the Role of Defects in the Radiation Response of Nanoceria.
- Creator
-
Kumar, Amit, Seal, Sudipta, Heinrich, Helge, Cho, Hyoung, Leuenberger, Michael, Zhai, Lei, Devanathan, Ram, University of Central Florida
- Abstract / Description
-
Nanoscale cerium oxide (nanoceria) has been shown to possess redox-active properties and has been widely studied for potential use in catalysis, chemical-mechanical planarization, bio-medical applications, and solid oxide fuel cells (SOFC). The redox state of nanoceria, and thus its physical and chemical properties, can be tuned by controlling the defects within the lattice. The perfect ceria lattice has the fluorite structure, and research in the last decade has shown that oxide and mixed-oxide systems with pyrochlore and fluorite structures have better structural stability under high-energy radiation. However, the current literature contains only a limited number of studies on the effect of high-energy radiation on nanoceria. This dissertation aims at understanding the phenomena occurring on irradiation of the nanoceria lattice through experiments and atomistic simulation. At first, research was conducted to show the ability to control the defects in the nanoceria lattice and to understand their effect in tailoring its properties. The defect state of nanoceria was controlled by the lower-valence rare-earth dopant europium. Extensive materials characterization was done using high resolution transmission electron microscopy (HRTEM), UV-Visible spectroscopy (UV-Vis), X-ray photoelectron spectroscopy (XPS), and Raman spectroscopy to understand the effect of dopant chemistry in modifying the chemical state of nanoceria. The defects originating in the lattice and the redox state were quantified with increasing dopant concentration. The photoluminescence of the control and doped nanoceria was evaluated with respect to defect state. It was observed that defects play an important role in modifying the photoluminescence, which can be tailored over a wide range to control the optical properties of nanoceria. Having seen the importance of defects in controlling the properties of nanoceria, further experiments were conducted to understand the effect of radiation on cerium oxide thin films of different crystallinity. The cerium oxide thin films were synthesized using oxygen plasma assisted molecular beam epitaxy (OPA-MBE) growth. The thin films were exposed to high-energy radiation over a wide range of fluence (10^13 to 10^17 He+ ions/cm^3). The current literature does not report radiation effects in nanoceria over this wide a range or up to this high a fluence. The chemical state of the thin films was studied using in-situ XPS for each dose of radiation. It was found that radiation induced defects within both ceria thin films, and the valence state deviated further towards non-stoichiometry with radiation. The experimental results from the cerium oxide thin film irradiation were studied in light of simulation. Classical molecular dynamics and Monte Carlo simulation were used to design the model ceria nanoparticle and to study the interaction of the lattice model with radiation. Electronic and nuclear stopping at the end of the range were modeled in the ceria lattice using classical molecular dynamics to simulate the effect of radiation. It was seen that displacement damage was the controlling factor in defect production in the ceria lattice. The simulation results suggested that nanosized cerium oxide has structural stability under radiation and encounters radiation damage through its mixed valence states. A portion of the study focuses on observing the lattice stability of ceria with increasing concentration of the lower-valence Ce3+ within the lattice. With this theoretical understanding of the role of redox state and defects during irradiation, the surfaces and bulk of nanoceria can be tailored for radiation-stable structural applications.
- Date Issued
- 2012
- Identifier
- CFE0004396, ucf:49375
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004396
- Title
- A Framework for Measuring Return on Investment for Healthcare Simulation-Based Training.
- Creator
-
Bukhari, Hatim, Rabelo, Luis, Elshennawy, Ahmad, Goldiez, Brian, Andreatta, Pamela, University of Central Florida
- Abstract / Description
-
In the healthcare sector, providing high-quality service in a safe environment for both patients and staff is the obvious and ultimate major objective, and training is an essential component of achieving it. Most organizations acknowledge that employee simulation-based training programs are an important part of the human capital strategy, yet few have effectively succeeded in quantifying the real and precise ROI of this type of investment. Therefore, if training is perceived as a waste of resources and its ROI is not clearly recognized, it will be the first option to cut when a budget cut is needed. The various intangible benefits of healthcare simulation-based training are very difficult to quantify. In addition, there was no unified way to account for the different costs and benefits to provide a justifiable ROI. Quantifying the qualitative and intangible benefits of medical training simulators requires a framework that helps to identify and convert those benefits into monetary value so they can be considered in the ROI evaluation. This research is a response to the highlighted importance of developing a comprehensive framework that can take into consideration the wide range of benefits that simulation-based training can bring to the healthcare system, along with the characteristics of this specific field of investment: uncertainty, the qualitative nature of the major benefits, and the diversity and wide range of applications. This comprehensive framework is an integration of several methodologies and tools, and consists of three parts. The first part of the framework is the benefit and cost structure, which pays special attention to qualitative and intangible benefits by considering the Value Measurement Methodology (VMM) and other previously existing models. The second part of the framework deals with the uncertainty associated with this type of investment: Monte Carlo simulation is used to consider multiple scenarios of input sets instead of a single set of inputs. The third part of the framework performs an advanced value analysis of the investment. It goes beyond discounted cash flow (DCF) methodologies like net present value (NPV), which consider a single scenario for the cash flow, to Real Options Analysis, which considers the flexibility over the lifetime of the investment when evaluating its value. This framework has been validated through case studies.
- Date Issued
- 2017
- Identifier
- CFE0006859, ucf:51750
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006859
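The Monte Carlo and DCF parts of the framework can be sketched together in a few lines: sample uncertain costs and benefits, discount each scenario's cash flow, and report the NPV distribution rather than a single figure. The distributions and dollar amounts below are invented purely for illustration:

```python
import random
import statistics

rng = random.Random(4)
DISCOUNT, YEARS, TRIALS = 0.08, 5, 10000   # illustrative rate and horizon

def npv(cashflows, rate):
    """Net present value of a cash-flow list indexed by year (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

samples = []
for _ in range(TRIALS):
    capex = rng.triangular(400_000, 700_000, 550_000)   # simulator cost
    annual = [rng.normalvariate(180_000, 50_000)        # yearly net benefit
              for _ in range(YEARS)]
    samples.append(npv([-capex] + annual, DISCOUNT))

samples.sort()
print(f"mean NPV    : {statistics.mean(samples):12,.0f}")
print(f"5th pct NPV : {samples[int(0.05 * TRIALS)]:12,.0f}")
print(f"P(NPV > 0)  : {sum(s > 0 for s in samples) / TRIALS:.2f}")
```

Reporting P(NPV > 0) and the distribution tails, rather than one deterministic NPV, is what lets the framework express the investment risk that the abstract emphasizes.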
- Title
- HISTORICAL RESPONSES OF MARINE TURTLES TO GLOBAL CLIMATE CHANGE AND JUVENILE LOGGERHEAD RECRUITMENT IN FLORIDA.
- Creator
-
Reece, Joshua, Parkinson, Christopher, University of Central Florida
- Abstract / Description
-
Marine turtle conservation is most successful when it is based on sound data incorporating life history, historical population stability, and gene flow among populations. This research attempts to provide that information through two studies. In chapter I, I identify historical patterns of gene flow, population sizes, and contraction/expansion during major climatic shifts. In chapter II, I reveal a previously undocumented life history characteristic of loggerhead turtles: a pattern of juvenile recruitment to foraging grounds proximal to their natal nesting beach. This pattern results in a predictable recruitment pattern from juvenile foraging ground aggregations to local rookeries. This research will provide crucial information to conservation managers by demonstrating how sensitive marine turtles are to global climate change. In the second component of my research, I demonstrate how threats posed to juvenile foraging grounds will have measurable effects on rookeries proximal to those foraging grounds. The addition of this basic life history information will have dramatic effects on marine turtle conservation in the future, and will serve as the basis for more thorough, forward-looking recovery plans.
- Date Issued
- 2005
- Identifier
- CFE0000341, ucf:46281
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000341
- Title
- THEORETICAL AND NUMERICAL STUDIES OF PHASE TRANSITIONS AND ERROR THRESHOLDS IN TOPOLOGICAL QUANTUM MEMORIES.
- Creator
-
Jouzdani, Pejman, Mucciolo, Eduardo, Chang, Zenghu, Leuenberger, Michael, Abouraddy, Ayman, University of Central Florida
- Abstract / Description
-
This dissertation collects progressive research on the topic of topological quantum computation and information, with a focus on the error thresholds of well-known models such as the unpaired Majorana, the toric code, and the planar code. We study the basics of quantum computation and quantum information, and in particular quantum error correction, which provides a tool for enhancing quantum computation fidelity in the noisy environment of the real world. We begin with a brief introduction to stabilizer codes. The stabilizer formalism of the theory of quantum error correction gives a well-defined description of quantum codes that is used throughout this dissertation. Then we turn our attention to a quite new subject, namely, topological quantum codes, which take advantage of the topological characteristics of a physical many-body system. The physical many-body systems studied in the context of topological quantum codes are of two essential natures: they either have intrinsic interactions that self-correct errors, or are actively corrected to be maintained in a desired quantum state. Examples of the former are the toric code and the unpaired Majorana, while an example of the latter is the surface code. A brief introduction to and history of topological phenomena in condensed matter is provided. The unpaired Majorana and the Kitaev toy model are briefly explained. Later we introduce a spin model that maps onto the Kitaev toy model through a sequence of transformations, and we show how this model is robust and tolerates local perturbations. The research on this topic, at the time of writing this dissertation, is still incomplete and only preliminary results are presented. As another example of passive error-correcting codes with an intrinsic Hamiltonian, the toric code is introduced. We also analyze the dynamics of the errors in the toric code, known as anyons. We show numerically how the addition of disorder to the physical system underlying the toric code slows down the dynamics of the anyons. We go further and numerically analyze the presence of time-dependent noise and the consequent delocalization of localized errors. The main portion of this dissertation is dedicated to the surface code. We study the surface code coupled to a non-interacting bosonic bath and show how the interaction between the code and the bath can effectively induce correlated errors. These correlated errors may be corrected up to some extent; the point beyond which quantum error correction seems impossible is the error threshold of the code. This threshold is analyzed by mapping the effective correlated error model onto a statistical model and then studying the phase transition in that statistical model. The analysis is in two parts. First, we derive the effective correlated model, map it onto a statistical model, and perform an exact numerical analysis. Second, we employ a Monte Carlo method to extend the numerical analysis to large system sizes. We also tackle the problem of the surface code with correlated and single-qubit errors by an exact mapping onto a two-dimensional Ising model with boundary fields, and we show how the phase transition point in one model, the Ising model, coincides with the intrinsic error threshold of the other model, the surface code.
- Date Issued
- 2014
- Identifier
- CFE0005512, ucf:50314
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005512
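The dissertation's threshold analysis rests on locating a phase transition in a statistical model by Monte Carlo. That machinery in miniature is a Metropolis simulation of the clean 2D Ising model, whose magnetization drops near the known critical temperature; the actual thesis models include disorder and boundary fields, which this sketch omits:

```python
import math
import random

rng = random.Random(5)
LSZ = 16   # lattice size (illustrative; larger lattices sharpen the transition)

def sweep(spins, beta):
    """One Metropolis sweep of the 2D Ising model with periodic boundaries."""
    for _ in range(LSZ * LSZ):
        i, j = rng.randrange(LSZ), rng.randrange(LSZ)
        nb = (spins[(i+1) % LSZ][j] + spins[(i-1) % LSZ][j] +
              spins[i][(j+1) % LSZ] + spins[i][(j-1) % LSZ])
        dE = 2.0 * spins[i][j] * nb          # energy cost of flipping spin (i,j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

for T in (1.5, 2.0, 2.27, 2.6, 3.0):         # Tc ~ 2.269 on the square lattice
    spins = [[1] * LSZ for _ in range(LSZ)]
    for _ in range(400):                      # equilibration sweeps
        sweep(spins, 1.0 / T)
    m = abs(sum(map(sum, spins))) / LSZ ** 2
    print(f"T = {T:4.2f}  |m| ~ {m:.2f}")
```

In the thesis's mapping, the analogue of this transition temperature plays the role of the code's error threshold: on one side of the transition error correction succeeds, on the other it fails.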
- Title
- Developing a Group Decision Support System (GDSS) for decision making under uncertainty.
- Creator
-
Mokhtari, Soroush, Abdel-Aty, Mohamed, Madani Larijani, Kaveh, Wang, Dingbao, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
Multi-Criteria Decision Making (MCDM) problems are often associated with tradeoffs between the performances of the available alternative solutions under the decision-making criteria. These problems become more complex when the performances are associated with uncertainty. This study proposes a stochastic MCDM procedure that can handle uncertainty in MCDM problems. The proposed method converts a stochastic MCDM problem into many deterministic ones through a Monte-Carlo (MC) selection. Each deterministic problem is then solved using a range of MCDM methods, and the ranking order of the alternatives is established for each deterministic MCDM. The final ranking of the alternatives can be determined based on the winning probabilities and the ranking distributions of the alternatives. Ranking probability distributions can help the decision-maker understand the risk associated with the overall ranking of the options; the final selection of the best alternative can therefore be affected by the risk tolerance of the decision-makers. A Group Decision Support System (GDSS) is developed here with a user-friendly interface to facilitate the application of the proposed MC-MCDM approach in real-world multi-participant decision making by an average user. The GDSS uses a range of decision-making methods to increase the robustness of the decision analysis outputs and to help understand the sensitivity of the results to the level of cooperation among the decision-makers. The decision analysis methods included in the GDSS are: 1) conventional MCDM methods (Maximin, Lexicographic, TOPSIS, SAW, and Dominance), appropriate when there is a high cooperation level among the decision-makers; 2) social choice rules or voting methods (Condorcet Choice, Borda scoring, Plurality, Anti-Plurality, Median Voting, Hare System of voting, Majoritarian Compromise, and Condorcet Practical), appropriate for cases with a medium cooperation level among the decision-makers; and 3) Fallback Bargaining methods (Unanimity, Q-Approval, and Fallback Bargaining with Impasse), appropriate for cases with non-cooperative decision-makers. To underline the utility of the proposed method and the developed GDSS in providing valuable insights into real-world hydro-environmental group decision making, the GDSS is applied to a benchmark example, namely California's Sacramento-San Joaquin Delta decision-making problem. The implications of the GDSS outputs (winning probabilities and ranking distributions) are discussed. Findings are compared with those of previous studies, which used other methods to solve this problem, to highlight the sensitivity of the results to the choice of decision analysis methods and/or different cooperation levels among the decision-makers.
- Date Issued
- 2013
- Identifier
- CFE0004723, ucf:49821
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004723
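The MC-MCDM idea can be sketched briefly: draw many deterministic performance matrices from the uncertain inputs, rank the alternatives in each draw with an MCDM method (SAW, one of the conventional methods listed above, is used here), and tabulate winning probabilities. The alternatives, weights, and distributions are invented for illustration:

```python
import random

rng = random.Random(6)
ALTS = ["A", "B", "C"]
WEIGHTS = [0.5, 0.3, 0.2]                 # criterion weights (illustrative)
# Mean performance of each alternative under each criterion; the Monte Carlo
# selection perturbs these with noise to build one deterministic problem.
MEANS = {"A": [0.7, 0.5, 0.9], "B": [0.6, 0.8, 0.6], "C": [0.8, 0.4, 0.5]}

TRIALS = 10000
wins = {a: 0 for a in ALTS}
for _ in range(TRIALS):
    scores = {a: sum(w * max(0.0, min(1.0, rng.normalvariate(m, 0.15)))
                     for w, m in zip(WEIGHTS, MEANS[a]))
              for a in ALTS}              # SAW: weighted sum of performances
    wins[max(scores, key=scores.get)] += 1

for a in ALTS:
    print(f"P({a} ranked first) = {wins[a] / TRIALS:.3f}")
```

A risk-averse decision-maker might prefer an alternative that rarely ranks last over one with the highest winning probability, which is exactly the information the ranking distributions expose.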
- Title
- MONTE CARLO SIMULATION OF HOLE TRANSPORT AND TERAHERTZ AMPLIFICATION IN MULTILAYER DELTA DOPED SEMICONDUCTOR STRUCTURES.
- Creator
-
Dolguikh, Maxim, Peale, Robert, University of Central Florida
- Abstract / Description
-
A Monte Carlo method for the simulation of hole dynamics in the degenerate valence subbands of cubic semiconductors is developed. All possible intra- and inter-subband scattering rates are theoretically calculated for Ge, Si, and GaAs. A far-infrared laser concept based on intersubband transitions of holes in p-type, periodically delta-doped semiconductor films is studied using numerical Monte-Carlo simulation of hot-hole dynamics. The considered device consists of monocrystalline pure Ge layers periodically interleaved with delta-doped layers and operates with vertical or in-plane hole transport in the presence of a perpendicular in-plane magnetic field. Population inversion on intersubband transitions arises due to light-hole accumulation in crossed E and B fields, as in the bulk p-Ge laser. However, the considered structure achieves spatial separation of the hole accumulation regions from the doped layers, which reduces ionized-impurity and carrier-carrier scattering for the majority of light holes. This allows a remarkable increase in gain in comparison with bulk p-Ge lasers. Population inversion and gain sufficient for laser operation are expected up to 77 K. Test structures grown by chemical vapor deposition demonstrate the feasibility of producing the device with sufficient active thickness to allow quasioptical electrodynamic cavity solutions. The same device structure is considered in GaAs. The case of Si is much more complicated due to the strong anisotropy of the valence band; the primary new result for Si is the first treatment of the anisotropy of optical-phonon scattering for hot holes.
- Date Issued
- 2005
- Identifier
- CFE0000863, ucf:46672
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000863
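The core loop of an ensemble Monte Carlo transport code is free flight under the field for an exponentially distributed time, followed by selection of a scattering mechanism in proportion to its rate. The skeleton below uses made-up constant rates and a crude velocity reset on phonon events; the dissertation instead computes the rates from the valence-subband structure of Ge, Si, and GaAs:

```python
import math
import random

rng = random.Random(7)
# Illustrative constant scattering rates (1/s); the thesis calculates these
# from the degenerate valence subbands for each material.
RATES = {"acoustic": 1e12, "optical_emit": 3e12, "impurity": 5e11}
GAMMA = sum(RATES.values())                 # total scattering rate
E_FIELD = 1e5                               # V/m (illustrative)
Q_OVER_M = 1.76e11 * 0.2                    # q/m* for a light-hole-like mass

v, t, events = 0.0, 0.0, {k: 0 for k in RATES}
for _ in range(100000):
    dt = -math.log(rng.random()) / GAMMA    # free-flight duration
    v += Q_OVER_M * E_FIELD * dt            # accelerate in the field
    t += dt
    r, acc = rng.random() * GAMMA, 0.0
    for name, rate in RATES.items():        # choose mechanism by rate
        acc += rate
        if r < acc:
            events[name] += 1
            if name != "impurity":          # crude: phonons randomize velocity
                v = 0.0
            break
print(f"simulated span {t:.2e} s, scattering tallies: {events}")
```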
- Title
- REAL-TIME REALISTIC RENDERING AND HIGH DYNAMIC RANGE IMAGE DISPLAY AND COMPRESSION.
- Creator
-
Xu, Ruifeng, Pattanaik, Sumanta, University of Central Florida
- Abstract / Description
-
This dissertation focuses on the many issues that arise from the visual rendering problem. Of primary consideration is light transport simulation, which is known to be computationally expensive. Monte Carlo methods represent a simple and general class of algorithms often used for light transport computation. Unfortunately, the images resulting from Monte Carlo approaches generally suffer from visually unacceptable noise artifacts. The result of any light transport simulation is, by its very nature, an image of high dynamic range (HDR). This leads to the issues of displaying such images on conventional low dynamic range devices and of developing data compression algorithms to store and recover the corresponding large amounts of detail found in HDR images. This dissertation presents our contributions relevant to these issues. Our contributions to high dynamic range image processing include tone mapping and data compression algorithms. This research proposes, and shows the efficacy of, a novel level-set-based tone mapping method that preserves visual details in the display of high dynamic range images on low dynamic range display devices. The level set method is used to extract the high-frequency information from HDR images; the details are then added to the range-compressed low-frequency information to reconstruct a visually accurate low dynamic range version of the image. Additional challenges associated with high dynamic range images include excessively large storage and transmission requirements. To alleviate these problems, this research presents two methods for efficient high dynamic range image data compression. One is based on classical JPEG compression: it first converts the raw image into the RGBE representation, and then sends the color base and common exponent to classical discrete-cosine-transform-based compression and lossless compression, respectively. The other is based on the wavelet transformation: it first transforms the raw image data into the logarithmic domain, then quantizes the logarithmic data into the integer domain, and finally applies the wavelet-based JPEG2000 encoder for entropy compression and bit-stream truncation to meet the desired bit rate requirement. We believe that these and similar contributions will make wide application of high dynamic range images possible. The contributions to light transport simulation include Monte Carlo noise reduction, dynamic object rendering, and complex scene rendering. Monte Carlo noise is an inescapable artifact in synthetic images rendered using stochastic algorithms. This dissertation proposes two noise reduction algorithms to obtain high-quality synthetic images: the first models the distribution of noise in the wavelet domain using a Laplacian function and then suppresses the noise using a Bayesian method; the other extends the bilateral filtering method to reduce all types of Monte Carlo noise in a unified way. All our methods reduce Monte Carlo noise effectively. Rendering of dynamic objects adds another dimension to the expensive light transport simulation issue. This dissertation presents a pre-computation-based method: it pre-computes the surface radiance for each basis lighting and animation key frame, and then renders the objects by synthesizing the pre-computed data in real time. Realistic rendering of complex scenes is computationally expensive. This research proposes a novel 3D space subdivision method, which leads to a new rendering framework: the light is first distributed to each local region to form local light fields, which are then used to illuminate the local scenes. The method allows us to render complex scenes at interactive frame rates. Rendering has important applications in mixed reality, where consistent lighting and shadows between real scenes and virtual scenes are important features of visual integration. The dissertation proposes to render the virtual objects by irradiance rendering using live-captured environmental lighting, and it introduces a virtual shadow generation method that computes shadows cast by virtual objects onto the real background. We finally conclude the dissertation by discussing a number of future directions for rendering research and presenting our proposed approaches.
- Date Issued
- 2005
- Identifier
- CFE0000730, ucf:46615
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000730
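The RGBE step in the JPEG-based pipeline stores a shared 8-bit exponent alongside three 8-bit color mantissas. A sketch of the standard Ward-style encode/decode transform, which is assumed here to match what the abstract means by "color base and common exponent":

```python
import math

def rgb_to_rgbe(r, g, b):
    """Shared-exponent (RGBE) encoding of one HDR pixel."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    e = math.frexp(m)[1]                 # m = f * 2**e with f in [0.5, 1)
    scale = 256.0 / 2.0 ** e             # maps the max channel into [128, 256)
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_rgb(r8, g8, b8, e8):
    """Inverse transform back to floating-point RGB."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = 2.0 ** (e8 - 128) / 256.0
    return (r8 * f, g8 * f, b8 * f)

pixel = (3.7, 0.22, 1900.0)              # high dynamic range sample
enc = rgb_to_rgbe(*pixel)
print(enc, "->", tuple(round(c, 2) for c in rgbe_to_rgb(*enc)))
```

The worked example shows the design tradeoff: the brightest channel survives almost intact, while channels many orders of magnitude dimmer lose precision to the shared exponent, which is acceptable because they contribute little to the perceived pixel.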
- Title
- Predictive Modeling of Functional Materials for Catalytic and Sensor Applications.
- Creator
-
Rawal, Takat, Rahman, Talat, Chang, Zenghu, Leuenberger, Michael, Zou, Shengli, University of Central Florida
- Abstract / Description
-
The research conducted in my dissertation focuses on theoretical and computational studies of the electronic and geometrical structures, and the catalytic and optical properties, of functional materials in the form of nanostructures, extended surfaces, two-dimensional systems, and hybrid structures. The fundamental aspect of my research is to predict nanomaterial properties through ab initio calculations using methods such as quantum mechanical density functional theory (DFT) and kinetic Monte Carlo (kMC) simulation, which help rationalize experimental observations and ultimately lead to the rational design of materials for electronic and energy-related applications. Focusing on the popular single-layer MoS2, I first show how its hybrid structure with 29-atom transition metal nanoparticles (M29, where M = Cu, Ag, and Au) can lead to composite catalysts suitable for oxidation reactions. Interestingly, the effect is found to be most pronounced for Au29 when the MoS2 is defect-laden (S vacancy row). Second, I show that defect-laden MoS2 can be functionalized either by deposited Au nanoparticles or when supported on Cu(111) to serve as a cost-effective catalyst for methanol synthesis via CO hydrogenation reactions. The charge transfer and electronic structural changes in these subsystems lead to the presence of 'frontier' states near the Fermi level, making the systems catalytically active. Next, in the emerging area of single-metal-atom catalysis, I provide a rationale for the viability of single Pd sites stabilized on ZnO(10-10) as the active sites for methanol partial oxidation, an important reaction for the production of H2. We trace its excellent activity to the modified electronic structure of the single Pd site as well as of the neighboring Zn cationic sites. With the DFT-calculated activation energy barriers for a large set of reactions, we perform ab initio kMC simulations to determine the selectivity of the products (CO2 and H2). These findings offer an opportunity for maximizing the efficiency of precious metal atoms and optimizing their activity and selectivity for desired products. In related work on extended surfaces, while trying to explain Scanning Tunneling Microscopy images observed by our experimental collaborators, I discovered a new mechanism in the process of Ag vacancy formation on Ag(110) in the presence of O atoms, which leads to the reconstruction and eventually the oxidation of the Ag surface. In a similar vein, I propose a mechanism for the orange photoluminescence (PL), observed by our experimental collaborators, of a coupled system of a benzylpiperazine (BZP) molecule and iodine on a copper surface. Our results show that the adsorbed BZP and iodine play complementary roles in producing the PL in the visible range. Upon photo-excitation of the BZP-I/CuI(111) system, excited electrons are transferred into the conduction band (CB) of CuI, and holes are trapped by the adatoms. The relaxation of holes into the BZP HOMO is facilitated by its realignment. The relaxed holes subsequently recombine with the excited electrons in the CB of the CuI film, producing a luminescence peak at ~2.1 eV. These results can be useful for forensic applications in detecting illicit substances.
- Date Issued
- 2017
- Identifier
- CFE0006783, ucf:51813
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006783
- Title
- Modeling social norms in real-world agent-based simulations.
- Creator
-
Beheshti, Rahmatollah, Sukthankar, Gita, Boloni, Ladislau, Wu, Annie, Swarup, Samarth, University of Central Florida
- Abstract / Description
-
Studying and simulating social systems, including human groups and societies, can be a complex problem. In order to build a model that simulates humans' actions, it is necessary to consider the major factors that affect human behavior. Norms are one of these factors: social norms are the customary rules that govern behavior in groups and societies. Norms are everywhere around us, from the way people shake hands or bow to the clothes they wear, and they play a large role in determining our behaviors. Studies of norms are much older than the age of computer science, since normative studies have been a classic topic in sociology, psychology, philosophy, and law. Various theories have been put forth about the functioning of social norms. Although an extensive amount of research on norms has been performed in recent years, there remains a significant gap between current models and models that can explain real-world normative behaviors. Most of the existing work on norms focuses on abstract applications, and very few realistic normative simulations of human societies can be found. The contributions of this dissertation include the following: 1) a new hybrid technique based on agent-based modeling and Markov Chain Monte Carlo is introduced and used to prepare a smoking case study for applying normative models; 2) this hybrid technique is described using category theory, a mathematical theory focusing on relations rather than objects; 3) the relationship between norm emergence in social networks and the theory of tipping points is studied; and 4) a new lightweight normative architecture for studying smoking cessation trends is introduced and then extended to a more general normative framework that can be used to model real-world normative behaviors. The final normative architecture considers both cognitive and social aspects of norm formation in human societies. Normative architectures based on only one of these two aspects exist in the literature, but a normative architecture that effectively includes both was missing.
- Date Issued
- 2015
- Identifier
- CFE0005577, ucf:50244
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005577
- Title
- CMOS RF CIRCUITS VARIABILITY AND RELIABILITY RESILIENT DESIGN, MODELING, AND SIMULATION.
- Creator
-
Liu, Yidong, Yuan, Jiann-Shiun, University of Central Florida
- Abstract / Description
-
The work presents a novel voltage biasing design that helps CMOS RF circuits remain resilient to variability and reliability degradation. The biasing scheme provides resilience through threshold voltage (VT) adjustment, and at the same time it does not degrade PA performance. Analytical equations are established for the sensitivity of the resilient biasing under various scenarios. A Power Amplifier (PA) and a Low Noise Amplifier (LNA) are investigated case by case through modeling and experiment, with PTM 65nm technology adopted in modeling the transistors within these RF blocks. A traditional class-AB PA with the resilient design is compared to the same PA without such a design in PTM 65nm technology. The results show that the biasing design helps improve the robustness of the PA in terms of linear gain, P1dB, Psat, and power added efficiency (PAE). Besides providing post-fabrication calibration capability, the design reduces the majority of the PA's performance sensitivity by 50% when subjected to threshold voltage (VT) shift and by 25% for electron mobility (μn) degradation. The impact of degradation mismatches is also investigated, and it is observed that accelerated aging of the MOS transistor in the biasing circuit further reduces the sensitivity of the PA. In the study of the LNA, a 24 GHz narrow-band cascade LNA with an adaptive biasing scheme under various aging rates is compared to the same LNA without such a scheme. The modeling and simulation results show that adaptive substrate biasing reduces the sensitivity of the noise figure and minimum noise figure to process variation and device aging such as threshold voltage shift and electron mobility degradation. Simulation at different aging rates also shows that the sensitivity of the LNA is further reduced with accelerated aging of the biasing circuit. Thus, for the majority of RF transceiver circuits, the adaptive body biasing scheme provides overall performance resilience to device-reliability-induced degradation, and the tuning ability designed into the RF PA and LNA provides post-process calibration capability.
- Date Issued
- 2011
- Identifier
- CFE0003595, ucf:48861
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003595
- Title
- AN ALL-AGAINST-ONE GAME APPROACH FOR THE MULTI-PLAYER PURSUIT-EVASION PROBLEM.
- Creator
-
Talebi, Shahriar, Simaan, Marwan, Qu, Zhihua, Vosoughi, Azadeh, University of Central Florida
- Abstract / Description
-
The traditional pursuit-evasion game considers a situation where one pursuer tries to capture an evader, while the evader is trying to escape. A more general formulation of this problem considers multiple pursuers trying to capture one evader. This general multi-pursuer, one-evader problem can also be used to model a system of systems in which one of the subsystems decides to dissent (evade) from the others while the others (the pursuer subsystems) try to pursue a strategy to prevent it from doing so. An important challenge in analyzing these types of problems is to develop strategies for the pursuers along with the advantages and disadvantages of each. In this thesis, we investigate three possible and conceptually different strategies for the pursuers: (1) act non-cooperatively as independent pursuers, (2) act cooperatively as a unified team of pursuers, and (3) act individually as greedy pursuers. The evader, on the other hand, considers strategies against all possible strategies by the pursuers. We assume complete uncertainty in the game, i.e., no player knows which strategies the other players are implementing, and none of them has information about any of the parameters in the objective functions of the other players. To treat the three pursuer strategies under one general framework, an all-against-one linear quadratic dynamic game is considered and the corresponding closed-loop Nash solution is discussed. Additionally, different necessary and sufficient conditions regarding the stability of the system and the existence and definiteness of the closed-loop Nash strategies under different strategy assumptions are derived. We deal with the uncertainties in the strategies by first developing the Nash strategies for each of the resulting games for all possible options available to both sides. We then deal with the parameter uncertainties by performing a Monte Carlo analysis to determine probabilities of capture for the pursuers (or of escape for the evader) for each resulting game. Results of the Monte Carlo simulation show that, in general, pursuers do not always benefit from cooperating as a team, and acting as non-cooperating players may yield a higher probability of capturing the evader.
- Date Issued
- 2017
- Identifier
- CFE0007135, ucf:52314
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007135
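The parameter-uncertainty step can be illustrated with a toy wrapper: draw random speeds and initial positions, simulate a chase, and estimate the capture probability for each pursuer count. The pure-pursuit dynamics below are invented for illustration and are not the thesis's closed-loop LQ Nash strategies:

```python
import math
import random

rng = random.Random(8)

def capture_prob(n_pursuers, trials=2000, horizon=400, dt=0.05):
    """Monte Carlo estimate of capture probability under a simple planar
    pure-pursuit chase with randomly drawn (uncertain) parameters."""
    captures = 0
    for _ in range(trials):
        ev = [0.0, 0.0]
        ev_speed = rng.uniform(0.9, 1.1)            # uncertain evader speed
        ps = [[rng.uniform(-10, 10), rng.uniform(-10, 10)]
              for _ in range(n_pursuers)]
        p_speed = rng.uniform(0.8, 1.2)             # uncertain pursuer speed
        for _ in range(horizon):
            ev[0] += ev_speed * dt                  # evader runs along +x
            for p in ps:
                d = math.hypot(ev[0] - p[0], ev[1] - p[1])
                if d < 0.5:                         # capture radius
                    captures += 1
                    break
                p[0] += p_speed * dt * (ev[0] - p[0]) / d
                p[1] += p_speed * dt * (ev[1] - p[1]) / d
            else:
                continue
            break                                   # captured: stop this trial
    return captures / trials

for n in (1, 2, 3):
    print(f"{n} pursuer(s): P(capture) ~ {capture_prob(n):.2f}")
```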
- Title
- PATTERNS OF MOTION: DISCOVERY AND GENERALIZED REPRESENTATION.
- Creator
-
Saleemi, Imran, Shah, Mubarak, University of Central Florida
- Abstract / Description
-
In this dissertation, we address the problem of discovery and representation of motion patterns in a variety of scenarios, commonly encountered in vision applications. The overarching goal is to devise a generic representation, that captures any kind of object motion observable in video sequences. Such motion is a significant source of information typically employed for diverse applications such as tracking, anomaly detection, and action and event recognition. We present statistical...
Show moreIn this dissertation, we address the problem of discovery and representation of motion patterns in a variety of scenarios, commonly encountered in vision applications. The overarching goal is to devise a generic representation, that captures any kind of object motion observable in video sequences. Such motion is a significant source of information typically employed for diverse applications such as tracking, anomaly detection, and action and event recognition. We present statistical frameworks for representation of motion characteristics of objects, learned from tracks or optical flow, for static as well as moving cameras, and propose algorithms for their application to a variety of problems. The proposed motion pattern models and learning methods are general enough to be employed in a variety of problems as we demonstrate experimentally. We first propose a novel method to model and learn the scene activity, observed by a static camera. The motion patterns of objects in the scene are modeled in the form of a multivariate non-parametric probability density function of spatiotemporal variables (object locations and transition times between them). Kernel Density Estimation (KDE) is used to learn this model in a completely unsupervised fashion. Learning is accomplished by observing the trajectories of objects by a static camera over extended periods of time. The model encodes the probabilistic nature of the behavior of moving objects in the scene and is useful for activity analysis applications, such as persistent tracking and anomalous motion detection. In addition, the model also captures salient scene features, such as, the areas of occlusion and most likely paths. Once the model is learned, we use a unified Markov Chain Monte-Carlo (MCMC) based framework for generating the most likely paths in the scene, improving foreground detection, persistent labelling of objects during tracking and deciding whether a given trajectory represents an anomaly to the observed motion patterns. Experiments with real world videos are reported which validate the proposed approach. The representation and estimation framework proposed above, however, has a few limitations. This algorithm proposes to use a single global statistical distribution to represent all kinds of motion observed in a particular scene. It therefore, does not find a separation between multiple semantically distinct motion patterns in the scene. Instead, the learned model is a joint distribution over all possible patterns followed by objects. To overcome this limitation, we then propose a superior method for the discovery and statistical representation of motion patterns in a scene. The advantages of this approach over the first one are two-fold: first, this model is applicable to scenes of dense crowded motion where tracking may not be feasible, and second, it distinguishes between motion patterns that are distinct at a semantic level of abstraction. We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video. The components of this mixture are then filtered and instances of motion patterns are computed using a simple motion model, by linking components across space and time. 
Motion patterns are then initialized, and the membership of instances in different motion patterns is established using the KL divergence between the mixture distributions of pattern instances. Finally, a pixel-level representation of motion patterns is derived from the conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic.
The proposed method exploits optical flow as the low-level feature and performs hierarchical clustering to obtain motion patterns; optical flow is also an integral part of a variety of other vision applications, for example, as a feature-based representation of human actions. We therefore propose a new representation for articulated human actions using motion patterns. The representation is based on hierarchical clustering of observed optical flow in a four-dimensional space of spatial location and motion flow. The automatically discovered motion patterns are the primitive actions, representative of flow at salient regions on the human body, much like trajectories of body joints, which are notoriously difficult to obtain automatically. The proposed method works in a completely unsupervised fashion and, in sharp contrast to state-of-the-art representations like bag of video words, provides a truly semantically meaningful representation. Each primitive action depicts the most atomic sub-action, like the left arm moving upwards or the right leg moving downward and leftward, and is represented by a mixture of four-dimensional Gaussian distributions. A sequence of primitive actions is discovered in the test video and labelled by computing the KL divergence between mixtures. The entire video sequence containing the human action is thus reduced to a simple string, which is matched against similar strings of training videos to classify the action. The string matching is performed by global alignment, using the well-known Needleman-Wunsch algorithm. Experiments reported on multiple human action data sets confirm the validity, simplicity, and semantically meaningful nature of the proposed representation. The results obtained are encouraging and comparable to the state of the art.
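Because the KL divergence between two Gaussian mixtures has no closed form, the labelling step described above is commonly approximated by Monte Carlo sampling. A minimal sketch of that approximation using scikit-learn mixtures follows; the model names, dimensions, and sample counts are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mc_kl(gmm_p, gmm_q, n=10000):
    """Monte Carlo estimate of KL(p || q) between two fitted
    GaussianMixture models: E_p[log p(x) - log q(x)]."""
    X, _ = gmm_p.sample(n)
    return float(np.mean(gmm_p.score_samples(X) - gmm_q.score_samples(X)))

# Hypothetical usage: fit a mixture to 4-D (x, y, u, v) flow samples of a
# pattern instance, then assign it to the pattern with the smallest KL.
rng = np.random.default_rng(0)
instance = GaussianMixture(n_components=3).fit(rng.normal(size=(500, 4)))
pattern = GaussianMixture(n_components=3).fit(rng.normal(size=(500, 4)))
print(mc_kl(instance, pattern))
```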
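The Needleman-Wunsch global alignment used for matching action strings is a standard dynamic program; a compact reference implementation follows (the scoring parameters are illustrative assumptions, not the dissertation's values).

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two primitive-action strings
    via the standard Needleman-Wunsch dynamic program."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitution
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[n][m]

# A test video's action string would be matched against every training
# string; the best-scoring class gives the predicted action.
print(needleman_wunsch("ABCD", "ABED"))  # -> 2
```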
- Date Issued
- 2011
- Identifier
- CFE0003646, ucf:48836
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003646
- Title
- On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
- Creator
-
Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
- Abstract / Description
-
We study the Distributed Estimation (DES) problem, in which several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of observation noise at the sensors, quantization errors at the sensors, the available power and bandwidth of the network, the quality of the communication channels between the sensors and the FC, and the choice of fusion rule at the FC. Taking into account all of these contributing factors and implementing a DES system which minimizes the MSE and satisfies all constraints is a challenging task. In order to probe different aspects of this task, we identify, formulate, and address the following three problems:
1. Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise. The communication channels between the sensors and the FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels. The unknown to be estimated is a Gaussian vector. Sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: What is the best fusion rule at the FC? What are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? To answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE.
2. Consider an inhomogeneous WSN with an FC which is tasked with estimating a scalar Gaussian unknown. The sensors are equipped with uniform multi-bit quantizers, and the communication channels are modeled as Binary Symmetric Channels (BSCs). In contrast to the former problem, the sensors experience independent multiplicative noise (in addition to additive noise). The natural questions in this scenario are: How does multiplicative noise affect the DES system performance? How does it affect the resource allocation for sensors, with respect to the case where there is no multiplicative noise? We propose a linear fusion rule at the FC and derive the associated MSE in closed form. We propose several rate allocation schemes with different levels of complexity which minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on DES system performance and its dynamics.
We also derive the Bayesian Cramer-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: What is the minimum required bandwidth of the network to satisfy a predetermined target MSE? (A toy simulation of this quantize-and-fuse setup is sketched after this list.)
3. Assuming the framework of Bayesian DES of a Gaussian unknown with both additive and multiplicative Gaussian noise involved, we answer the following question: Can multiplicative noise improve the DES performance in any scenario? The answer is yes, and we call this phenomenon the 'enhancement mode' of multiplicative noise. By deriving different lower bounds on the MSE, such as the BCRLB, the Weiss-Weinstein Bound (WWB), the Hybrid CRLB (HCRLB), the Nayak Bound (NB), and the Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and unknown. We also compare the performance of well-known estimators with the derived bounds, to ensure the practicability of the mentioned enhancement modes (a numerical check of the no-multiplicative-noise BCRLB baseline is also sketched below).
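As referenced in problem 2 above, a toy end-to-end simulation makes the chain of error sources (quantization at each sensor, bit flips on the BSC, fusion at the FC) concrete. This is a hedged sketch under simplifying assumptions: a scalar unknown, equal rates across sensors, and a plain equal-weight linear fusion rule rather than the optimized allocation schemes developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, eps = 20, 4, 0.05          # sensors, bits per sensor, BSC crossover prob.
W = 4.0                          # assumed quantizer dynamic range: [-W, W]
theta = rng.normal()             # scalar Gaussian unknown, theta ~ N(0, 1)

# Sensor observations corrupted by additive Gaussian noise.
x = theta + 0.3 * rng.normal(size=K)

# Uniform L-bit quantization of the clipped observation.
levels = 2 ** L
q_idx = np.clip(np.round((x + W) / (2 * W) * (levels - 1)),
                0, levels - 1).astype(int)

# Transmit each index as L bits through a binary symmetric channel.
bits = (q_idx[:, None] >> np.arange(L)) & 1
flips = rng.uniform(size=bits.shape) < eps
rx_idx = ((bits ^ flips) << np.arange(L)).sum(axis=1)

# FC: dequantize and fuse with a simple equal-weight linear rule.
x_hat = rx_idx / (levels - 1) * 2 * W - W
theta_hat = x_hat.mean()
print(theta, theta_hat)
```

Sweeping L and eps in such a simulation shows the quantization/channel-error trade-off that the rate allocation problem formalizes.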
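For the baseline without multiplicative noise, the BCRLB for estimating theta ~ N(0, var_t) from K observations y_k = theta + n_k with n_k ~ N(0, var_n) takes the textbook closed form 1 / (1/var_t + K/var_n), and the MMSE (posterior mean) estimator attains it. A quick Monte Carlo check of that baseline follows; this is our illustration with arbitrary variances, not the dissertation's derivation for the multiplicative-noise case.

```python
import numpy as np

rng = np.random.default_rng(2)
K, var_t, var_n, trials = 10, 1.0, 0.5, 20000

# Closed-form BCRLB for the linear Gaussian model above.
bcrlb = 1.0 / (1.0 / var_t + K / var_n)

theta = rng.normal(scale=np.sqrt(var_t), size=trials)
y = theta[:, None] + rng.normal(scale=np.sqrt(var_n), size=(trials, K))

# MMSE (posterior mean) estimator for this model.
theta_hat = (y.sum(axis=1) / var_n) / (1.0 / var_t + K / var_n)
emp_mse = np.mean((theta_hat - theta) ** 2)
print(bcrlb, emp_mse)   # the two values should agree closely
```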
- Date Issued
- 2017
- Identifier
- CFE0006913, ucf:51698
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006913