Current Search: distributed simulation
- Title
- A HOLISTIC USABILITY FRAMEWORK FOR DISTRIBUTED SIMULATION SYSTEMS.
- Creator
- Dawson, Jeffrey, Rabelo, Luis, University of Central Florida
- Abstract / Description
- This dissertation develops a holistic usability framework for distributed simulation systems (DSSs). The framework draws on relevant research in human-computer interaction, computer science, technical writing, engineering, management, and psychology. The methodology consists of three steps: (1) framework development, (2) surveys of users to validate and refine the framework and to determine attribute weights, and (3) application of the framework to two real-world systems. The concept of a holistic usability framework for DSSs arose during a project to improve the usability of the Virtual Test Bed, a prototypical DSS, and the framework is partly a result of that project. In addition, DSSs at Ames Research Center were studied for additional insights. The framework has six dimensions: end user needs, end user interface(s), programming, installation, training, and documentation. The categories of participants in this study include managers, researchers, programmers, end users, trainers, and trainees. The first survey gathered qualitative and quantitative data to validate and refine the framework; attributes that failed the validation test were dropped. A second survey was used to obtain attribute weights. The refined framework was then used to evaluate two existing DSSs, measuring their holistic usability. Meeting the needs of the many types of users who interact with a system during design, development, and use is important to launching a successful system. Adequate attention to usability along the framework's several dimensions will not only help ensure system success but also increase productivity, lower life cycle costs, and make working with the system more pleasant.
- Date Issued
- 2006
- Identifier
- CFE0001256, ucf:46906
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001256
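The framework above scores a system on weighted attributes across six dimensions. As a rough illustration of how such a composite measure could be computed, here is a minimal Python sketch; the dimension scores and weights below are invented for the example and are not the survey-derived values from the dissertation.

```python
def holistic_usability(scores, weights):
    """Weighted aggregate usability score across framework dimensions.
    'scores' maps each dimension to a 0-10 rating and 'weights' holds
    survey-derived importance weights (all values here are invented)."""
    total_w = sum(weights.values())
    return sum(weights[d] * scores[d] for d in scores) / total_w

dimension_scores = {"end user needs": 7.5, "end user interface": 8.0,
                    "programming": 6.0, "installation": 9.0,
                    "training": 7.0, "documentation": 5.5}
survey_weights = {"end user needs": 0.25, "end user interface": 0.20,
                  "programming": 0.15, "installation": 0.10,
                  "training": 0.15, "documentation": 0.15}
print(round(holistic_usability(dimension_scores, survey_weights), 2))
```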
- Title
- Model-Based Systems Engineering Approach to Distributed and Hybrid Simulation Systems.
- Creator
- Pastrana, John, Rabelo, Luis, Lee, Gene, Elshennawy, Ahmad, Kincaid, John, University of Central Florida
- Abstract / Description
- INCOSE defines Model-Based Systems Engineering (MBSE) as "the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases." One very important development is the utilization of MBSE to develop distributed and hybrid (discrete-continuous) simulation modeling systems. MBSE can help to describe the systems to be modeled and help make the right decisions and partitions to tame complexity. The ability to embrace conceptual modeling and interoperability techniques during systems specification and design presents a great advantage in distributed and hybrid simulation systems development efforts. Our research is aimed at the definition of a methodological framework that uses MBSE languages, methods, and tools for the development of these simulation systems. A model-based composition approach is defined at the initial steps to identify distributed systems interoperability requirements and hybrid simulation systems characteristics. Guidelines are developed to adopt simulation interoperability standards and conceptual modeling techniques using MBSE methods and tools. Domain-specific system complexity and behavior can be captured with model-based approaches during the system architecture and functional design requirements definition. MBSE can allow simulation engineers to formally model different aspects of a problem, ranging from architectures to corresponding behavioral analysis, to functional decompositions and user requirements (Jobe, 2008).
- Date Issued
- 2014
- Identifier
- CFE0005395, ucf:50464
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005395
- Title
- RESOURCE-CONSTRAINT AND SCALABLE DATA DISTRIBUTION MANAGEMENT FOR HIGH LEVEL ARCHITECTURE.
- Creator
- Gupta, Pankaj, Guha, Ratan, University of Central Florida
- Abstract / Description
- In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications, where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than these algorithms because it avoids the quadratic computation step they involve. The simulation results show that the P-Pruning DDM algorithm uses run-time memory more efficiently and requires fewer multicast groups than the three other algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement and present a performance evaluation study of it in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes run-time memory more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time. The dynamic P-Pruning algorithm tracks changes among federate regions and rebuilds all affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of a communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines, describe the implementation details involved in integrating the P-Pruning algorithm with FDK, and report our experiences.
- Date Issued
- 2007
- Identifier
- CFE0001949, ucf:47447
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001949
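The abstract credits P-Pruning's speed to avoiding the quadratic all-pairs comparison that region-matching performs. The one-dimensional Python sketch below contrasts the brute-force pairwise overlap test with a sort-based endpoint sweep that never examines non-overlapping pairs; it is only a toy in the spirit of sort-based pruning, not the dissertation's multi-dimensional algorithm or its multicast-group construction.

```python
def overlaps_bruteforce(pubs, subs):
    """All (publisher, subscriber) extent pairs overlapping in one routing
    dimension, via the quadratic all-pairs test grid and region-matching
    schemes must beat."""
    return [(i, j) for i, (a, b) in enumerate(pubs)
                   for j, (c, d) in enumerate(subs)
                   if a <= d and c <= b]

def overlaps_sweep(pubs, subs):
    """Same answer from a sort-based sweep over interval endpoints, so
    non-overlapping pairs are never compared."""
    events = []  # (coordinate, is_end, side, index); starts sort before ends
    for i, (a, b) in enumerate(pubs):
        events += [(a, 0, "pub", i), (b, 1, "pub", i)]
    for j, (c, d) in enumerate(subs):
        events += [(c, 0, "sub", j), (d, 1, "sub", j)]
    active = {"pub": set(), "sub": set()}
    out = []
    for _, is_end, side, idx in sorted(events):
        if is_end:
            active[side].discard(idx)
        else:
            other = "sub" if side == "pub" else "pub"
            for k in active[other]:
                out.append((idx, k) if side == "pub" else (k, idx))
            active[side].add(idx)
    return out

pubs = [(0, 5), (10, 12)]
subs = [(4, 11), (20, 30)]
print(sorted(overlaps_bruteforce(pubs, subs)) == sorted(overlaps_sweep(pubs, subs)))  # True
```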
- Title
- A FRAMEWORK TO MODEL COMPLEX SYSTEMS VIA DISTRIBUTED SIMULATION: A CASE STUDY OF THE VIRTUAL TEST BED SIMULATION SYSTEM USING THE HIGH LEVEL ARCHITECTURE.
- Creator
- Park, Jaebok, Sepulveda, Jose, University of Central Florida
- Abstract / Description
- As the size, complexity, and functionality of the systems we need to model and simulate continue to increase, benefits such as interoperability and reusability enabled by distributed discrete-event simulation are becoming extremely important in many disciplines, not only military but also engineering disciplines such as distributed manufacturing, supply chain management, and enterprise engineering. In this dissertation we propose a distributed simulation framework for the modeling and simulation of complex systems. The framework is based on the interoperability of a simulation system enabled by distributed simulation and on gateways that allow Commercial Off-the-Shelf (COTS) simulation packages to interconnect to the distributed simulation engine. In the case study of modeling the Virtual Test Bed (VTB), the framework has been designed as a distributed simulation to facilitate the integrated execution of different simulations (shuttle process model, Monte Carlo model, Delay and Scrub Model), each of which addresses different mission components, as well as other non-simulation applications (Weather Expert System and Virtual Range). Although these models were developed independently and at various times, they have been seamlessly integrated and interact with one another through the Run-time Infrastructure (RTI) to simulate shuttle-launch-related processes. This study found that, with the framework, the defining properties of complex systems, interaction and emergence, are realized, and that software life cycle models (including the spiral model and prototyping) can be used as metaphors to manage the complexity of modeling and simulating the system. The system of systems (a complex system is intrinsically a "system of systems") continuously evolves to accomplish its goals; during this evolution, subsystems coordinate with one another and adapt to environmental factors such as policies, requirements, and objectives. In the case study we first demonstrate how legacy models developed in COTS simulation languages/packages and non-simulation tools can be integrated to address a complicated system of systems. We then describe techniques that can be used to display the state of remote federates in a local federate in High Level Architecture (HLA) based distributed simulation using COTS simulation packages.
- Date Issued
- 2005
- Identifier
- CFE0000534, ucf:46416
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000534
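The case study wires independently developed models together through gateways and the HLA Run-time Infrastructure. The sketch below is a hypothetical, heavily simplified stand-in for that pattern: a toy "RTI" routes published attribute updates to subscribing gateway federates and grants time advances. None of these class or method names come from the IEEE 1516 API; they only illustrate the publish/subscribe and time-management roles described above.

```python
class MiniRTI:
    """Hypothetical stand-in for an HLA Run-time Infrastructure: routes
    published attribute updates to subscribers and grants time advances."""
    def __init__(self):
        self.federates = []
    def join(self, fed):
        self.federates.append(fed)
    def publish(self, sender, attr, value, t):
        for fed in self.federates:
            if fed is not sender and attr in fed.subscriptions:
                fed.reflect(attr, value, t)
    def advance_all(self, t):
        for fed in self.federates:
            fed.grant(t)

class GatewayFederate:
    """Gateway wrapping a COTS simulation package (stubbed here) so it can
    interoperate through the RTI, as in the VTB case study."""
    def __init__(self, name, subscriptions):
        self.name, self.subscriptions = name, set(subscriptions)
    def reflect(self, attr, value, t):
        print(f"[{self.name}] t={t}: {attr} = {value}")  # forward into the wrapped tool
    def grant(self, t):
        pass  # the wrapped package would now simulate up to time t

rti = MiniRTI()
scrub = GatewayFederate("DelayScrubModel", subscriptions=["weather.state"])
weather = GatewayFederate("WeatherExpertSystem", subscriptions=[])
rti.join(scrub); rti.join(weather)
rti.publish(weather, "weather.state", "lightning-warning", t=42.0)
rti.advance_all(43.0)
```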
- Title
- DATA BANDWIDTH REDUCTION TECHNIQUES FOR DISTRIBUTED EMBEDDED SIMULATION USING CONCURRENT BEHAVIOR MODELS.
- Creator
- Bahr, Hubert, DeMara, Ronald, University of Central Florida
- Abstract / Description
- Maintaining coherence between the independent views of multiple participants at distributed locations is essential in an Embedded Simulation environment. Currently, the Distributed Interactive Simulation (DIS) protocol maintains coherence by broadcasting the entity state streams from each simulation station. In this dissertation, a novel alternative to DIS that replaces the transmitting sources with local sources is developed, validated, and assessed by analytical and experimental means. The proposed Concurrent Model approach reduces the communication burden to the transmission of only synchronization and model-update messages. Necessary and sufficient conditions for the correctness of Concurrent Models in a discrete event simulation environment are established by developing a Behavioral Congruence function B(E_L, E_R) and a Temporal Congruence function T(t, E_R). They indicate model discrepancies with respect to the simulation time t and the local and remote entity state streams E_L and E_R, respectively. Performance benefits were quantified in terms of the bandwidth reduction ratio BR = N/I, obtained by comparing the OneSAF Testbed Semi-Automated Forces (OTBSAF) simulator under DIS, requiring a total of N bits, with a testbed modified for the Concurrent Model approach, which required I bits. In the experiments conducted, a range of 100 ≤ BR ≤ 294 was obtained, representing two orders of magnitude reduction in simulation traffic. Investigation showed that the models rely heavily on the priority data structure of the discrete event simulation and that performance of the overall simulation can be enhanced by an additional 6% by improving queue management. A low run-time overhead, self-adapting storage policy called the Smart Priority Queue (SPQ) was developed and evaluated within the Concurrent Model. The proposed SPQ policies employ a low-complexity linear queue for near-head activities and a rapid-indexing, variable bin-width calendar queue for distant events. The SPQ configuration is determined by monitoring queue access behavior using cost-scoring factors and then applying heuristics to adjust the organization of the underlying data structures. Results indicate that optimizing storage to the spatial distribution of queue access can decrease HOLD operation cost by 25% to 250% relative to existing algorithms such as calendar queues. Taken together, these techniques provide an entity state generation mechanism capable of overcoming the challenges of Embedded Simulation in harsh mobile communications environments with restricted bandwidth, increased message latency, and extended message drop-outs.
- Date Issued
- 2004
- Identifier
- CFE0000198, ucf:46166
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000198
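The Smart Priority Queue pairs a low-complexity linear queue for near-head events with a variable bin-width calendar queue for distant ones. The toy Python pending-event set below echoes that two-tier idea, with a fixed horizon and bin width standing in for the SPQ's cost-scoring, self-adapting configuration.

```python
import bisect
from itertools import count

class HybridEventQueue:
    """Toy two-tier pending-event set: a small sorted near-term list plus
    coarse far-future bins. A simplification of the SPQ idea; the real
    policy adapts its configuration from queue-access cost scores."""

    def __init__(self, horizon=100.0, bin_width=25.0):
        self.near = []                 # sorted (time, seq, event)
        self.far = {}                  # bin index -> [(time, seq, event)]
        self.horizon, self.bin_width = horizon, bin_width
        self.now = 0.0
        self._seq = count()            # tie-breaker so events never compare

    def schedule(self, t, event):
        item = (t, next(self._seq), event)
        if t < self.now + self.horizon:
            bisect.insort(self.near, item)
        else:
            self.far.setdefault(int(t // self.bin_width), []).append(item)

    def pop(self):
        while not self.near:           # migrate the earliest far bin
            k = min(self.far)          # raises ValueError when queue is empty
            self.near = sorted(self.far.pop(k))
        t, _, event = self.near.pop(0)
        self.now = t
        return t, event

q = HybridEventQueue()
for t, ev in [(5.0, "arrive"), (500.0, "audit"), (1.0, "boot"), (120.0, "rekey")]:
    q.schedule(t, ev)
print([q.pop() for _ in range(4)])     # events come back in time order
```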
- Title
- IMPROVING PROJECT MANAGEMENT WITH SIMULATION AND COMPLETION DISTRIBUTION FUNCTIONS.
- Creator
- Cates, Grant, Mollaghasemi, Mansooreh, University of Central Florida
- Abstract / Description
- Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. Uncertainty has been identified as a contributing factor in late projects. This uncertainty resides in activity duration estimates, unplanned upsetting events, and the potential unavailability of critical resources. This research developed a comprehensive simulation-based methodology for conducting quantitative project completion-time risk assessments. The methodology enables project stakeholders to visualize uncertainty or risk, i.e., the likelihood that their project will complete late and the magnitude of the lateness, by providing them with a completion time distribution function for their projects. Discrete event simulation is used to determine a project's completion distribution function. The project simulation is populated with both deterministic and stochastic elements. Deterministic inputs include planned activities and resource requirements. Stochastic inputs include activity duration growth distributions, probabilities for unplanned upsetting events, and other dynamic constraints upon project activities; these are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Multiple replications of the simulation are run to create the completion distribution function. The methodology was demonstrated to be effective for the ongoing project to assemble the International Space Station, on which approximately $500 million per month is being spent and which is scheduled to complete by 2010. Project stakeholders participated in determining and managing completion distribution functions. The first result was improved awareness of project completion risk; the second was that mitigation options were analyzed to improve project completion performance and reduce total project cost.
- Date Issued
- 2004
- Identifier
- CFE0000209, ucf:46243
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000209
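The methodology runs many replications of a project network with stochastic duration growth and random upsetting events, then reads risk off the resulting completion distribution. A compact sketch under invented activity data:

```python
import random, statistics

ACTIVITIES = {  # name: (planned days, predecessors); illustrative only
    "design":    (30, []),
    "fabricate": (45, ["design"]),
    "software":  (60, ["design"]),
    "integrate": (20, ["fabricate", "software"]),
}

def one_replication():
    """One pass through the network: durations grow lognormally and may
    suffer an unplanned upsetting event; returns project completion time."""
    finish = {}
    for name, (dur, preds) in ACTIVITIES.items():  # listed in topological order
        start = max((finish[p] for p in preds), default=0.0)
        dur *= random.lognormvariate(0.0, 0.25)    # stochastic duration growth
        if random.random() < 0.05:                 # unplanned upsetting event
            dur += random.uniform(5, 30)
        finish[name] = start + dur
    return max(finish.values())

samples = sorted(one_replication() for _ in range(10_000))
print("median completion:", round(statistics.median(samples), 1), "days")
print("80th percentile:  ", round(samples[int(0.8 * len(samples))], 1), "days")
```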
- Title
- An Integrated Framework for Automated Data Collection and Processing for Discrete Event Simulation Models.
- Creator
- Rodriguez, Carlos, Kincaid, John, Karwowski, Waldemar, O'Neal, Thomas, Kaup, David, Mouloua, Mustapha, University of Central Florida
- Abstract / Description
- Discrete Event Simulation (DES) is a powerful modeling and analysis tool used in different disciplines. DES models require data in order to determine the parameters that drive the simulations. The literature on DES input data management indicates that the preparation of the necessary input data is often a highly manual process, which causes inefficiencies, significant time consumption, and a negative user experience. The focus of this research investigation is the manual data collection and processing (MDCAP) problem prevalent in DES projects. This research investigation presents an integrated framework to solve the MDCAP problem by classifying the data needed for DES projects into three generic classes. Such classification permits automating and streamlining the preparation of the data, allowing DES modelers to collect, update, visualize, fit, validate, tally, and test data in real time by performing intuitive actions. In addition to the proposed theoretical framework, this project introduces an innovative user interface that was programmed based on the ideas of the proposed framework. The interface is called DESI, which stands for Discrete Event Simulation Inputs. The proposed integrated framework to automate DES input data preparation was evaluated against benchmark measures presented in the literature in order to show its positive impact on DES input data management. This research investigation demonstrates that the proposed framework, instantiated by the DESI interface, addresses current gaps in the field, reduces the time devoted to input data management within DES projects, and advances the state of the art in DES input data management automation.
- Date Issued
- 2015
- Identifier
- CFE0005878, ucf:50861
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005878
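One of the data-preparation steps the framework automates is fitting and testing candidate input distributions against collected samples. The sketch below shows one conventional way to do that with SciPy, ranking candidates by the Kolmogorov-Smirnov statistic; it illustrates the general step, not the DESI interface itself.

```python
import numpy as np
from scipy import stats

def fit_best_distribution(samples, candidates=("expon", "gamma", "lognorm")):
    """Fit several candidate input distributions by maximum likelihood and
    rank them by the Kolmogorov-Smirnov statistic (smaller is better)."""
    best = None
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(samples)
        ks = stats.kstest(samples, name, args=params).statistic
        if best is None or ks < best[2]:
            best = (name, params, ks)
    return best

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=3.0, size=500)   # stand-in for collected service times
name, params, ks = fit_best_distribution(data)
print(name, "KS statistic:", round(ks, 4))
```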
- Title
- Network Partitioning in Distributed Agent-Based Models.
- Creator
- Petkova, Antoniya, Deo, Narsingh, Hughes, Charles, Bassiouni, Mostafa, Shaykhian, Gholam, University of Central Florida
- Abstract / Description
- Agent-Based Models (ABMs) are an emerging simulation paradigm for modeling complex systems comprised of autonomous, possibly heterogeneous, interacting agents. The utility of ABMs lies in their ability to represent such complex systems as self-organizing networks of agents. Modeling and understanding the behavior of complex systems usually occurs at large and representative scales, and obtaining and visualizing simulation results in real time is often critical. The real-time requirement necessitates the use of in-memory computing, as it is difficult and challenging to handle the latency and unpredictability of disk accesses. Combining this observation with the scale requirement emphasizes the need to use parallel and distributed computing platforms, such as MPI-enabled CPU clusters. Consequently, the agent population must be "partitioned" across different CPUs in a cluster. Further, the typically high volume of interactions among agents can quickly become a significant bottleneck for real-time or large-scale simulations. The problem is exacerbated if the underlying ABM network is dynamic and the inter-process communication evolves over the course of the simulation. Therefore, it is critical to develop topology-aware partitioning mechanisms to support such large simulations. In this dissertation, we demonstrate that distributed agent-based model simulations benefit from the use of graph partitioning algorithms that involve a local, neighborhood-based perspective. Such methods do not rely on global accesses to the network and thus are more scalable. In addition, we propose two partitioning schemes that consider the bottom-up, individual-centric nature of agent-based modeling. The first technique utilizes label-propagation community detection to partition the dynamic agent network of an ABM. We propose a latency-hiding, seamless integration of community detection into the dynamics of a distributed ABM. To achieve this integration, we exploit the similarity in the process flow patterns of a label-propagation community-detection algorithm and self-organizing ABMs. In the second partitioning scheme, we apply a combination of the Guided Local Search (GLS) and Fast Local Search (FLS) metaheuristics in the context of graph partitioning. The main driving principle of GLS is the dynamic modification of the objective function to escape local optima. The algorithm augments the objective of a local search, thereby transforming the landscape structure and escaping a local optimum. FLS is a local search heuristic algorithm aimed at reducing the search space of the main search algorithm. It breaks down the space into sub-neighborhoods such that inactive sub-neighborhoods are removed from the search process. The combination of GLS and FLS allowed us to design a graph partitioning algorithm that is both scalable and sensitive to the inherent modularity of real-world networks.
- Date Issued
- 2017
- Identifier
- CFE0006903, ucf:51706
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006903
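The first partitioning scheme builds on label-propagation community detection, which suits distributed ABMs because each node updates using only its neighborhood. A minimal single-process sketch of plain label propagation (not the dissertation's latency-hiding distributed integration):

```python
import random
from collections import Counter

def label_propagation(adj, rounds=10, seed=0):
    """Asynchronous label propagation: each node repeatedly adopts its
    neighbourhood's majority label; the converged labels induce a
    partition. adj: dict node -> list of neighbours."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}       # every node starts in its own community
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)             # randomized update order each round
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            best = min(l for l, c in counts.items() if c == top)  # deterministic tie-break
            if best != labels[v]:
                labels[v], changed = best, True
        if not changed:
            break
    return labels

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
print(label_propagation(adj))          # two communities: {0,1,2} and {3,4}
```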
- Title
- MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION.
- Creator
- Lacks, Daniel, Kocak, Taskin, University of Central Florida
- Abstract / Description
- Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits that apply to many different domains: it reduces the cost of creating prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, shortens the time needed to model physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure and to reduce the time a developer must spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection, and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure with which to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic loads than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize up to 45% power savings and up to 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions, including creating, joining, and resigning from a federation, time management, and event publication and subscription.
- Date Issued
- 2007
- Identifier
- CFE0001887, ucf:47399
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001887
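The 3DInterconnect simulator models a k-ary n-cube, i.e., an n-dimensional torus with k nodes per dimension. The small sketch below computes a node's neighbors in that topology; it shows only the topology being simulated, not the simulator's internals.

```python
def torus_neighbors(node, k, n):
    """Neighbours of a node in a k-ary n-cube. The node is an n-tuple of
    radix-k coordinates; each dimension wraps around modulo k. (For k = 2
    the two directions coincide, so neighbours repeat.)"""
    out = []
    for dim in range(n):
        for step in (-1, 1):
            coord = list(node)
            coord[dim] = (coord[dim] + step) % k
            out.append(tuple(coord))
    return out

print(torus_neighbors((0, 0, 0), k=4, n=3))  # six neighbours in a 4-ary 3-cube
```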
- Title
- AN INTERACTIVE DISTRIBUTED SIMULATION FRAMEWORK WITH APPLICATION TO WIRELESS NETWORKS AND INTRUSION DETECTION.
- Creator
- Kachirski, Oleg, Guha, Ratan, University of Central Florida
- Abstract / Description
- In this dissertation, we describe WINDS, the portable, open-source distributed simulation framework that we have developed for simulating wireless network infrastructures. The framework uses a modular architecture, and we apply it to studies of mobility pattern effects, routing, and intrusion detection mechanisms in simulations of large-scale wireless ad hoc, infrastructure, and totally mobile networks. Distributed simulations within the framework execute seamlessly and transparently to the user on a symmetric multiprocessor cluster computer or a network of computers with no modifications to the code or user objects. A graphical interface precisely depicts simulation object states and interactions throughout the simulation execution, giving the user full control over the simulation in real time. The network configuration is detected by the framework, and communication latency is taken into consideration when dynamically adjusting the simulation clock, allowing the simulation to run on a heterogeneous computing system. The simulation framework is easily extensible to multi-cluster systems and computing grids. An entire simulation system can be constructed in a short time, utilizing user-created and supplied simulation components, including mobile nodes, base stations, routing algorithms, traffic patterns, and other objects. These objects are automatically compiled and loaded by the simulation system and are available for dynamic injection into the simulation at runtime. Using our distributed simulation framework, we have studied modern intrusion detection systems (IDS) and assessed the applicability of existing intrusion detection techniques to wireless networks. We have developed a mobile agent-based IDS targeting mobile wireless networks and introduced load-balancing optimizations aimed at limited-resource systems to improve intrusion detection performance. The packet-based monitoring agents of our IDS employ a case-based reasoner engine that performs fast lookups of network packets in the existing SNORT-based intrusion rule set. Experiments were performed using the intrusion data from the MIT Lincoln Laboratories studies and executed on a cluster computer utilizing our distributed simulation system.
- Date Issued
- 2005
- Identifier
- CFE0000642, ucf:46545
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000642
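The monitoring agents look packets up against a Snort-derived rule set. As a toy illustration of per-packet rule matching, the sketch below runs a linear scan over hand-written stand-in rules; real Snort rule syntax and the dissertation's fast-lookup indexing are not reproduced here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    proto: str
    dst_port: int
    payload_contains: bytes
    message: str

RULES = [  # invented stand-ins, not real Snort rules
    Rule("tcp", 23, b"root", "possible telnet root login attempt"),
    Rule("udp", 53, b"\x00\x00\xfc", "DNS zone transfer query"),
]

def match(packet):
    """Linear rule lookup a monitoring agent might run per packet.
    packet: dict with 'proto', 'dst_port', and 'payload' keys."""
    return [r.message for r in RULES
            if r.proto == packet["proto"]
            and r.dst_port == packet["dst_port"]
            and r.payload_contains in packet["payload"]]

print(match({"proto": "tcp", "dst_port": 23, "payload": b"login: root"}))
```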
- Title
- On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
- Creator
- Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
- Abstract / Description
- We study the Distributed Estimation (DES) problem, where several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of observation noises in the sensors, quantization errors in the sensors, the available power and bandwidth of the network, the quality of communication channels between the sensors and the FC, and the choice of fusion rule in the FC. Taking all of these contributing factors into account and implementing a DES system that minimizes the MSE and satisfies all constraints is a challenging task. In order to probe different aspects of this task, we identify, formulate, and address the following three problems. (1) Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise, the communication channels between sensors and FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels, the unknown to be estimated is a Gaussian vector, and the sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: What is the best fusion rule in the FC? What are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? To answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE. (2) Consider an inhomogeneous WSN with an FC tasked with estimating a scalar Gaussian unknown, where the sensors are equipped with uniform multi-bit quantizers and the communication channels are modeled as Binary Symmetric Channels (BSC). In contrast to the first problem, the sensors experience independent multiplicative noises (in addition to additive noise). The natural questions in this scenario are: How does multiplicative noise affect DES system performance? How does it affect the resource allocation for sensors, with respect to the case where there is no multiplicative noise? We propose a linear fusion rule in the FC and derive the associated MSE in closed form. We propose several rate allocation schemes of differing complexity that minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on DES system performance and its dynamics. We also derive the Bayesian Cramer-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: What is the minimum required bandwidth of the network to satisfy a predetermined target MSE? (3) Within the framework of Bayesian DES of a Gaussian unknown with both additive and multiplicative Gaussian noises involved, we answer the following question: Can multiplicative noise improve DES performance in any scenario? The answer is yes, and we call this phenomenon the "enhancement mode" of multiplicative noise. By deriving different lower bounds on the MSE, such as the BCRLB, the Weiss-Weinstein Bound (WWB), the Hybrid CRLB (HCRLB), the Nayak Bound (NB), and the Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and unknown. We also compare the performance of well-known estimators with the derived bounds to ensure the practicability of the mentioned enhancement modes.
- Date Issued
- 2017
- Identifier
- CFE0006913, ucf:51698
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006913
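For a feel of the setup described above, the sketch below Monte-Carlo-estimates the MSE of a simple linear fusion rule (the sample mean of uniformly quantized observations) for a scalar Gaussian unknown. Channels are assumed ideal, so BSC errors, fading, multiplicative noise, and optimized power/rate allocation, all central to the dissertation, are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(x, rate_bits, lo=-4.0, hi=4.0):
    """Uniform quantizer with 2**rate_bits levels on [lo, hi]; values
    outside the range saturate to the boundary cells."""
    levels = 2 ** rate_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

def empirical_mse(n_sensors=20, rate_bits=3, obs_noise_std=0.5, trials=20_000):
    """MSE of sample-mean fusion of quantized sensor observations of a
    scalar unknown theta ~ N(0, 1); a toy version of the DES setup."""
    theta = rng.standard_normal(trials)
    obs = theta[:, None] + obs_noise_std * rng.standard_normal((trials, n_sensors))
    est = uniform_quantize(obs, rate_bits).mean(axis=1)   # linear fusion at the FC
    return np.mean((est - theta) ** 2)

for bits in (1, 2, 4, 8):   # more bits per sensor -> smaller quantization error
    print(bits, "bits/sensor -> MSE", round(empirical_mse(rate_bits=bits), 4))
```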
- Title
- The Identification and Segmentation of Astrocytoma Prior to Critical Mass, by means of a Volumetric/Subregion Regression Analysis of Normal and Neoplastic Brain Tissue.
- Creator
- Higgins, Lyn, Hughes, Charles, Morrow, Patricia Bockelman, Bagci, Ulas, Lisle, Curtis, University of Central Florida
- Abstract / Description
- As the underlying cause of Glioblastoma Multiforme (GBM) is presently unclear, this research implements a new approach to identifying and segmenting plausible instances of GBM prior to critical mass. Grade-IV Astrocytoma, or GBM, is an aggressive and malignant cancer arising from star-shaped glial cells, or astrocytes, which functionally assist in the support and protection of neurons within the central nervous system and spinal cord. Our motivation for researching the recognition of GBM is that, because the underlying cause of the mutation is presently unclear, GBM is detectable only through a combination of MRI and CT brain scans together with a resection biopsy. Since astrocytoma only becomes evident at critical mass, when the cellular structure of the neoplasm becomes visible within the image, this research seeks to achieve earlier identification and segmentation of the neoplasm by evaluating the malignant area via a volumetric voxel approach that removes noise artifacts and analyzes voxel differentials. To investigate neoplasm continuity, a differential approach has been implemented using a multi-polynomial/multi-domain regression algorithm, ultimately providing a graphical and mathematical analysis of the differentials within critical-mass and non-critical-mass images. Given these augmentations to MRI and CT image rectification, we theorize that our approach will improve astrocytoma recognition and segmentation and achieve greater accuracy in diagnostic evaluations of the malignant area.
- Date Issued
- 2018
- Identifier
- CFE0007336, ucf:52111
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007336
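As a loose reading of the multi-polynomial/multi-domain regression idea, the sketch below fits low-degree polynomials over subregions of a 1-D voxel-intensity profile; a segment whose residual departs from its neighbors would flag a candidate anomaly. The profile, segment count, and degree are all invented for illustration and do not reproduce the dissertation's algorithm.

```python
import numpy as np

def piecewise_poly_fit(profile, n_segments=4, degree=3):
    """Fit a low-degree polynomial to each subregion ('domain') of a 1-D
    intensity profile and report the per-segment RMS residual; a narrow
    bump the polynomial cannot follow shows up as a larger residual."""
    segments = np.array_split(np.asarray(profile, dtype=float), n_segments)
    results, offset = [], 0
    for seg in segments:
        x = np.arange(offset, offset + len(seg))
        coeffs = np.polyfit(x, seg, degree)
        residual = seg - np.polyval(coeffs, x)
        results.append((coeffs, float(np.sqrt(np.mean(residual ** 2)))))
        offset += len(seg)
    return results

# Smooth background with a subtle narrow bump: that segment's residual stands out.
x = np.linspace(0, 1, 200)
profile = 100 + 5 * x + 3 * np.exp(-((x - 0.7) ** 2) / 0.001)
for i, (_, rms) in enumerate(piecewise_poly_fit(profile)):
    print("segment", i, "RMS residual", round(rms, 3))
```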