Current Search: Zou, Changchun
- Title
- UTILIZING EDGE IN IOT AND VIDEO STREAMING APPLICATIONS TO REDUCE BOTTLENECKS IN INTERNET TRAFFIC.
- Creator
-
Akpinar, Kutalmis, Hua, Kien, Zou, Changchun, Turgut, Damla, Wang, Jun, University of Central Florida
- Abstract / Description
-
Internet traffic is surging due to the increasing demand for multimedia content. According to a recent study, an estimated 80% of Internet traffic will be video by 2022, and IoT devices on the Internet will outnumber humans two to one. While infrastructure standards for IoT are still nonexistent, enterprise products tend to encourage cloud-based solutions, causing an additional surge of data over the Internet. This study proposes solutions that bring video traffic and IoT computation back to the edges of the network, so that costly Internet infrastructure upgrades are unnecessary. An efficient way to prevent the IoT-driven surge is to push application-specific computation to the edge of the network, close to where the data is generated, so that large volumes of data can be eliminated before being delivered to the cloud. In this study, an event query language and processing environment is provided to process events from various devices. The query processing environment brings application developers, sensor infrastructure providers, and end users together. It uses boolean events as the streaming and processing units, which addresses device heterogeneity and pushes data-intensive tasks to the edge of the network. (A minimal sketch of edge-side boolean event processing follows this record.) The second focus of the study is Video-on-Demand (VoD) applications. A characteristic of VoD traffic is its high redundancy: due to the demand for popular content, the same video flows through an Internet Service Provider's network as overlapping but separate streams. In previous studies on redundancy elimination, overlapping streams are merged at the link level by receiving each packet only for the first stream and reusing it for subsequent duplicate streams. In this study, we significantly improve these techniques by introducing a merger-aware routing method. Our final focus is increasing the utilization of Content Delivery Network (CDN) servers at the edge of the network to reduce long-distance traffic. The proposed system uses Software-Defined Networking (SDN) to route adaptive video streaming clients to the best available CDN servers in terms of network availability. While performing this network assistance, the system does not reveal video request information to the network provider, thus enabling privacy protection for encrypted streams. Request routing is performed at the segment level for adaptive streaming, which makes it possible to re-route the client to the best available CDN without interruption if network conditions change during the stream.
- Date Issued
- 2019
- Identifier
- CFE0007882, ucf:52774
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007882
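The boolean-event design in the abstract above lends itself to a compact illustration. The sketch below is a hypothetical, minimal edge-side engine (class, event, and query names are invented, not from the dissertation): sensor readings are reduced to boolean events at the edge, and only matches against registered queries would be forwarded upstream.

```python
# Hypothetical sketch: boolean events as the streaming unit at an edge node.
# Raw sensor readings are reduced to True/False events locally; only composite
# query matches would ever leave the edge for the cloud.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BoolEvent:
    device_id: str
    name: str        # e.g., "motion", "authorized"
    value: bool
    timestamp: float

class EdgeQueryEngine:
    """Keeps the latest boolean state per event name and evaluates
    registered conjunctive queries against it."""
    def __init__(self) -> None:
        self.state: Dict[str, bool] = {}
        self.queries: Dict[str, Callable[[Dict[str, bool]], bool]] = {}

    def register(self, qid: str, predicate) -> None:
        self.queries[qid] = predicate

    def ingest(self, event: BoolEvent) -> List[str]:
        self.state[event.name] = event.value
        # Forward only the IDs of matching queries, never the raw stream.
        return [qid for qid, p in self.queries.items() if p(self.state)]

engine = EdgeQueryEngine()
engine.register("intrusion",
                lambda s: s.get("motion") is True and s.get("authorized") is False)
print(engine.ingest(BoolEvent("cam1", "motion", True, 0.0)))        # []
print(engine.ingest(BoolEvent("badge", "authorized", False, 1.0)))  # ['intrusion']
```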
- Title
- Trust-Based Rating Prediction and Malicious Profile Detection in Online Social Recommender Systems.
- Creator
-
Davoudi, Anahita, Chatterjee, Mainak, Hu, Haiyan, Zou, Changchun, Rahnavard, Nazanin, University of Central Florida
- Abstract / Description
-
Online social networks and recommender systems have become an effective channel for influencing millions of users by facilitating the exchange and spread of information. This dissertation addresses multiple challenges faced by online social recommender systems: i) finding the extent of information spread; ii) predicting the rating of a product; and iii) detecting malicious profiles. Most research in this area does not capture social interactions and relies on empirical or statistical approaches without considering temporal aspects. We capture the temporal spread of information using a probabilistic model and use non-linear differential equations to model the diffusion process. To predict the rating of a product, we propose a social trust model and use the matrix factorization method to estimate a user's taste by incorporating the user-item rating matrix. The effect of the tastes of a user's friends is captured using a trust model based on similarities between users and their centralities. Similarity is modeled using Vector Space Similarity and the Pearson Correlation Coefficient, whereas degree, eigenvector, Katz, and PageRank are used to model centrality. (A toy trust-weighted prediction sketch follows this record.) As the rating of a product has tremendous influence on its saleability, social recommender systems are vulnerable to profile injection attacks that bias users' opinions towards favorable or unfavorable recommendations for a product. We propose a classification approach for detecting attackers based on attributes that provide the likelihood that a user profile is that of an attacker. To evaluate the performance, we inject push and nuke attacks, and use precision and recall to identify the attackers. All proposed models have been validated using datasets from Facebook, Epinions, and Digg. The results show that the proposed models are able to better predict information spread and product ratings, and to identify malicious user profiles with high accuracy and low false positives.
- Date Issued
- 2018
- Identifier
- CFE0007168, ucf:52245
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007168
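As a rough illustration of the trust-weighted prediction idea in the abstract above, the sketch below combines Pearson similarity of rating histories with a given centrality score. It is a simplification under assumed inputs, not the dissertation's full matrix factorization model.

```python
# Minimal sketch: predict a user's rating for an item as a trust-weighted
# average of friends' ratings, where trust = Pearson similarity of rating
# histories x a precomputed centrality score. Data values are illustrative.
from math import sqrt

def pearson(r1: dict, r2: dict) -> float:
    common = set(r1) & set(r2)
    if len(common) < 2:
        return 0.0
    m1 = sum(r1[i] for i in common) / len(common)
    m2 = sum(r2[i] for i in common) / len(common)
    num = sum((r1[i] - m1) * (r2[i] - m2) for i in common)
    den = sqrt(sum((r1[i] - m1) ** 2 for i in common)) * \
          sqrt(sum((r2[i] - m2) ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item, ratings, friends, centrality):
    """ratings: user -> {item: rating}; centrality: user -> score in [0, 1]."""
    weight_sum, total = 0.0, 0.0
    for f in friends[user]:
        if item in ratings.get(f, {}):
            trust = max(0.0, pearson(ratings[user], ratings[f])) * centrality[f]
            weight_sum += trust
            total += trust * ratings[f][item]
    return total / weight_sum if weight_sum else None

ratings = {"u1": {"a": 4, "b": 2, "c": 5},
           "u2": {"a": 5, "b": 1, "c": 5, "d": 4},
           "u3": {"a": 2, "b": 5, "d": 1}}
print(predict("u1", "d", ratings, {"u1": ["u2", "u3"]}, {"u2": 0.9, "u3": 0.4}))
```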
- Title
- Scalable Network Design and Management with Decentralized Software-defined Networking.
- Creator
-
Atwal, Kuldip Singh, Bassiouni, Mostafa, Fu, Xinwen, Zou, Changchun, Deo, Narsingh, University of Central Florida
- Abstract / Description
-
Network softwarization is among the most significant innovations in computer networks of the last few decades. The lack of uniform and programmable interfaces for network management led to the design of the OpenFlow protocol for university campuses and enterprise networks. This breakthrough, coupled with other similar efforts, led to the emergence of two complementary but independent paradigms called software-defined networking (SDN) and network function virtualization (NFV). As of this writing, these paradigms are becoming the de facto norms of wired and wireless networks alike. This dissertation mainly addresses the scalability aspect of SDN for multiple network types. Although centralized control and the separation of control and data planes play a pivotal role in ease of network management, these concepts bring many challenges as well. Scalability is among the most crucial, due to the unprecedented growth of computer networks in the past few years. Therefore, we strive to grapple with this problem in diverse networking scenarios and propose novel solutions by harnessing capabilities provided by SDN and other related technologies. Specifically, we present techniques to deploy SDN at Internet scale and to extend the concepts of softwarization to mobile access networks and vehicular networks. Multiple optimizations are employed to mitigate latency and other overheads and thereby achieve performance gains. Additionally, by accounting for sparse connectivity and high mobility, the intrinsic constraints of centralization for wireless ad hoc networks are addressed in a systematic manner. State-of-the-art virtualization techniques are coupled with cloud computing methods to exploit the potential of softwarization in general and SDN in particular. Finally, by tapping into the capabilities of machine learning techniques, an SDN-based solution is proposed that inches closer towards the longstanding goal of self-driving networks. Extensive experiments performed on a large-scale testbed corroborate the effectiveness of our approaches.
- Date Issued
- 2019
- Identifier
- CFE0007600, ucf:52543
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007600
- Title
- THE IMPLICATIONS OF VIRTUAL ENVIRONMENTS IN DIGITAL FORENSIC INVESTIGATIONS.
- Creator
-
Patterson, Farrah, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
This research paper discusses the role of virtual environments in digital forensic investigations. With virtual environments becoming more prevalent as an analysis tool in digital forensic investigations, it is becoming more important for digital forensic investigators to understand the limitations and strengths of virtual machines. The study aims to expose limitations within commercial closed source virtual machines and open source virtual machines. It provides a brief overview of the history of digital forensic investigations and virtual environments, and concludes with an experiment involving four common open and closed source virtual machines, examining the effects of the virtual machines on the host machine as well as the performance of the virtual machines themselves. My findings show that while the open source tools provided more control and freedom to the operator, the closed source tools were more stable and consistent in their operation. The significance of these findings can be further researched by applying them to demonstrating the reliability of forensic techniques when virtual machines are presented as analysis tools in litigation.
- Date Issued
- 2011
- Identifier
- CFE0004152, ucf:49050
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004152
- Title
- HFS Plus File System Exposition and Forensics.
- Creator
-
Ware, Scott, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
The Macintosh Hierarchical File System Plus, HFS+, commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, international-friendly file names, future support for named forks, and booting on non-Mac OS operating systems through different partition schemes. The HFS+ file system is efficient, yet complex. It makes use of B-trees to implement key data structures for maintaining metadata about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers; the vast majority of information on the subject is relegated to books, papers, and online content that direct the reader to C code, libraries, and include files. For those who cannot interpret the complex C or Perl implementations, such resources are less than adequate for developing a basic understanding of the internals and how they work. The basic concepts presented in this research will facilitate a better understanding of the HFS+ file system and journal as changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command-line interface (CLI) tool called fileXray. This tool is a custom implementation of the HFS+ file system that can examine file system, metadata, and data level information that is not available in other tools. We also use Apple's command-line tool Terminal, the WinHex graphical user interface (GUI) editor, The Sleuth Kit command-line tools, and DiffFork 1.1.9 to document and illustrate the file system changes. The processes used to document the pristine and changed versions of the file system are kept very similar across experiments, so that the output files are identical with the exception of the actual change. Keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. (A simplified byte-level diff sketch follows this record.) The target device in this experiment is a two-gigabyte Universal Serial Bus (USB) thumb drive formatted with a Globally Unique Identifier (GUID) Partition Table. Where practical, HFS+ special files and data structures are manually parsed, documented, and illustrated.
- Date Issued
- 2012
- Identifier
- CFE0004341, ucf:49440
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004341
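The baseline-versus-changed comparison workflow described above can be approximated in a few lines. The sketch below is an illustrative stand-in for the DiffFork step: it reports the byte ranges at which two dumps of an HFS+ special file differ. The inline byte strings stand in for real catalog-file dumps.

```python
# Toy equivalent of diffing a pristine vs. changed dump of an HFS+ special
# file (e.g., the catalog file): report every byte range that differs, which
# points at the B-tree nodes touched by a file/folder operation.
def diff_ranges(a: bytes, b: bytes):
    ranges, start = [], None
    for i in range(max(len(a), len(b))):
        byte_a = a[i] if i < len(a) else None
        byte_b = b[i] if i < len(b) else None
        if byte_a != byte_b:
            if start is None:
                start = i
        elif start is not None:
            ranges.append((start, i - 1))
            start = None
    if start is not None:
        ranges.append((start, max(len(a), len(b)) - 1))
    return ranges

# Stand-ins for real dumps; in practice these would be read from disk images.
baseline = bytes.fromhex("0001020304050607")
changed  = bytes.fromhex("0001ff030405aabb")

for lo, hi in diff_ranges(baseline, changed):
    print(f"bytes 0x{lo:08x}-0x{hi:08x} differ ({hi - lo + 1} bytes)")
```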
- Title
- Providing Context to the Clues: Recovery and Reliability of Location Data from Android Devices.
- Creator
-
Bell, Connie, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
Mobile device data continues to increase in significance in both civil and criminal investigations. Location data is often of particular interest. To date, research has established that the devices are location aware, incorporate a variety of resources to obtain location information, and cache the information in various ways. However, a review of the existing research suggests varying degrees of reliability of any such recovered location data. In an effort to clarify the issue, this project offers case studies of multiple Android mobile devices utilized in controlled conditions with known settings and applications in documented locations. The study uses data recovered from test devices to corroborate previously identified accuracy trends noted in research involving live-tracked devices, and it further offers detailed analysis strategies for the recovery of location data from devices themselves. A methodology for reviewing device data for possible artifacts that may allow an examiner to evaluate location data reliability is also presented. This paper also addresses emerging trends in device security and cloud storage, which may have significant implications for future mobile device location data recovery and analysis. Discussion of recovered cloud data introduces a distinct and potentially significant resource for investigators, and the paper addresses the cloud resources' advantages and limitations.
- Date Issued
- 2015
- Identifier
- CFE0005924, ucf:50837
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005924
- Title
- SPS: an SMS-based Push Service for Energy Saving in Smartphone's Idle State.
- Creator
-
Dondyk, Erich, Zou, Changchun, Chatterjee, Mainak, Hua, Kien, University of Central Florida
- Abstract / Description
-
Despite all the advances in smartphone technology in recent years, smartphones remain limited by their battery life. Unlike other power-hungry components of the smartphone, the cellular data and Wi-Fi interfaces often continue to be used even while the phone is in the idle state, to accommodate unnecessary data traffic produced by some applications. In addition, bad reception has been proven to greatly increase the energy consumed by the radio, which happens quite often when smartphone users are inside buildings. In this paper, we present a Short-Message-Service-based Push Service (SPS) to eliminate this unnecessary power consumption while smartphones are idle, especially in bad reception areas. First, SPS disables a smartphone's data interfaces whenever the phone is idle. Second, to preserve the real-time notification functionality required by some apps, such as new email arrivals and social media updates, the phone receives a wakeup text message whenever a notification is needed; SPS then enables the phone's data interfaces to connect to the corresponding server and retrieve the notification data over the normal data network. Once the notification data has been retrieved, SPS disables the data interfaces again if the phone is still idle. (A control-loop sketch follows this record.) We have developed a complete prototype for Android smartphones. Our experiments show that SPS consumes less energy than the current approach; in areas with bad reception, the SPS prototype can double the battery life of a smartphone.
- Date Issued
- 2014
- Identifier
- CFE0005157, ucf:50718
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005157
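A minimal sketch of the SPS control loop as described in the abstract, assuming hypothetical platform hooks for radio control and notification fetching (the class and method names are invented, not taken from the prototype):

```python
# Sketch of the SPS idle-state logic: data interfaces stay off while the
# phone is idle; an incoming wakeup SMS briefly enables them to fetch the
# pending notification payload, then turns them off again.
class SPSController:
    def __init__(self, radio, notifier):
        self.radio = radio        # assumed object with enable_data()/disable_data()
        self.notifier = notifier  # assumed object with fetch_pending() -> list

    def on_idle(self):
        # Step 1: phone enters idle state -> cut cellular data and Wi-Fi.
        self.radio.disable_data()

    def on_wakeup_sms(self, still_idle=True):
        # Step 2: wakeup SMS arrives -> restore connectivity, pull the
        # notification data, then go dark again if the phone is still idle.
        self.radio.enable_data()
        payload = self.notifier.fetch_pending()
        if still_idle:
            self.radio.disable_data()
        return payload

class FakeRadio:
    def enable_data(self):  print("data on")
    def disable_data(self): print("data off")

class FakeNotifier:
    def fetch_pending(self): return ["new email"]

sps = SPSController(FakeRadio(), FakeNotifier())
sps.on_idle()
print(sps.on_wakeup_sms())   # data on / data off / ['new email']
```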
- Title
- High-Performance Composable Transactional Data Structures.
- Creator
-
Zhang, Deli, Dechev, Damian, Leavens, Gary, Zou, Changchun, Lin, Mingjie, University of Central Florida
- Abstract / Description
-
Exploiting the parallelism in multiprocessor systems is a major challenge in the post "power wall" era. Programming for multicore demands a change in the way we design and use fundamental data structures. Concurrent data structures allow scalable and thread-safe access to shared data: they provide operations that appear to take effect atomically when invoked individually. A main obstacle to the practical use of concurrent data structures is their inability to support composable operations, i.e., to execute multiple operations atomically in a transactional manner. The problem stems from the inability of concurrent data structures to ensure the atomicity of transactions composed from operations on one or more data structure instances. This greatly hinders software reuse because users can only invoke data structure operations in a limited number of ways. Existing solutions, such as software transactional memory (STM) and transactional boosting, manage transaction synchronization in an external layer separated from the data structure's own thread-level concurrency control. Although this reduces programming effort, it leads to significant overhead associated with additional synchronization and the need to roll back aborted transactions. In this dissertation, I address these practicality and efficiency concerns by designing, implementing, and evaluating high-performance transactional data structures that facilitate the development of future highly concurrent software systems. First, I present two methodologies for implementing high-performance transactional data structures based on existing concurrent data structures using either lock-based or lock-free synchronization. For lock-based data structures, the idea is to treat data accessed by multiple operations as resources. The challenge is for each thread to acquire exclusive access to desired resources while preventing deadlock and starvation. Existing locking strategies, like two-phase locking and resource hierarchy, suffer from performance degradation under heavy contention while lacking a desirable fairness guarantee. To overcome these issues, I introduce a scalable lock algorithm for shared-memory multiprocessors that addresses the resource allocation problem; it is the first multi-resource lock algorithm that guarantees the strongest first-in, first-out (FIFO) fairness. (A toy multi-resource lock follows this record.) For lock-free data structures, I present a methodology for transforming them into high-performance lock-free transactional data structures without revamping their original synchronization design; my approach leverages the semantic knowledge of the data structure to eliminate the overhead of false conflicts and rollbacks. Second, I apply the proposed methodologies and present a suite of novel transactional search data structures in the form of an open source library. This is interesting not only because of the fundamental importance of search data structures in computer science and their wide use in real-world programs, but also because it demonstrates the implementation issues that arise when using the methodologies I have developed. The library is not only a compilation of a large number of fundamental data structures for multiprocessor applications, but also a framework for enabling composable transactions and an infrastructure for continuous integration of new data structures. By taking such a top-down approach, I am able to identify and consider the interplay of data structure interface operations as a whole, which allows for scrutinizing their commutativity rules and hence opens up possibilities for design optimizations. Lastly, I evaluate the throughput of the proposed data structures using transactions with randomly generated operations on two different hardware systems. To ensure the strongest possible competition, I chose the best performing alternatives from state-of-the-art locking protocols and transactional memory systems in the literature. The results show that it is straightforward to build efficient transactional data structures when using my multi-resource lock as a drop-in replacement for transactionally boosted data structures. Furthermore, this work shows that it is possible to build efficient lock-free transactional data structures with all the perceived benefits of lock-freedom and with performance far better than generic transactional memory systems.
- Date Issued
- 2016
- Identifier
- CFE0006428, ucf:51453
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006428
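To make the multi-resource locking problem concrete, here is a toy multi-resource lock in the spirit described above; it is an illustrative sketch, not the dissertation's algorithm. Requests join a global FIFO queue, and a thread proceeds only when no earlier queued request shares any of its resources, which rules out deadlock and starvation while still letting disjoint requests run concurrently.

```python
import threading

class MultiResourceLock:
    """Toy FIFO multi-resource lock: the oldest waiter on each resource
    always proceeds first, so no request can be overtaken indefinitely."""
    def __init__(self):
        self._cond = threading.Condition()
        self._queue = []          # FIFO of (ticket, frozenset(resources))
        self._next_ticket = 0

    def acquire(self, resources):
        resources = frozenset(resources)
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            self._queue.append((ticket, resources))
            # Proceed once no earlier request shares any of our resources.
            self._cond.wait_for(lambda: all(
                t >= ticket or not (r & resources) for t, r in self._queue))
            return ticket

    def release(self, ticket):
        with self._cond:
            self._queue = [(t, r) for t, r in self._queue if t != ticket]
            self._cond.notify_all()

mrl = MultiResourceLock()

def worker(name, res):
    t = mrl.acquire(res)
    print(name, "holds", sorted(res))
    mrl.release(t)

threads = [threading.Thread(target=worker,
                            args=(f"T{i}", {"A", "B"} if i % 2 else {"B", "C"}))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```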
- Title
- Spatial and Temporal Modeling for Human Activity Recognition from Multimodal Sequential Data.
- Creator
-
Ye, Jun, Hua, Kien, Foroosh, Hassan, Zou, Changchun, Karwowski, Waldemar, University of Central Florida
- Abstract / Description
-
Human Activity Recognition (HAR) has been an intense research area for more than a decade. Different sensors, ranging from 2D and 3D cameras to accelerometers, gyroscopes, and magnetometers, have been employed to generate multimodal signals for detecting various human activities. With the advancement of sensing technology and the popularity of mobile devices, depth cameras and wearable devices such as Microsoft Kinect and smart wristbands open an unprecedented opportunity to solve the challenging HAR problem by learning expressive representations from multimodal signals that record huge numbers of daily activities spanning a rich set of categories. Although competitive performance has been reported, existing methods focus on the statistical or spatial representation of the human activity sequence, while its internal temporal dynamics are not sufficiently exploited. As a result, they often face the challenge of recognizing visually similar activities composed of the same dynamic patterns in different temporal order. In addition, many model-driven methods based on sophisticated features and carefully designed classifiers are computationally demanding and unable to scale to large datasets. In this dissertation, we propose to address these challenges from three different perspectives: 3D spatial relationship modeling, dynamic temporal quantization, and temporal order encoding. We propose a novel octree-based algorithm for computing the 3D spatial relationships between objects in a 3D point cloud captured by a Kinect sensor. A set of 26 3D spatial directions is defined to describe the spatial relationship of an object with respect to a reference object. These directions are implemented as spatial operators, such as "AboveSouthEast" and "BelowNorthWest," of an event query language used to query human activities in an indoor environment, for example, "A person walks in the hallway from north to south." The performance is quantitatively evaluated on a public RGBD object dataset and qualitatively investigated in a live video computing platform. To address the challenge of temporal modeling in human action recognition, we introduce dynamic temporal quantization, a clustering-like algorithm that quantizes human action sequences of varied lengths into fixed-size vectors. A two-step algorithm is proposed to jointly optimize the quantization of the original sequence: in the aggregation step, frames falling into the same segment are aggregated by max-pooling to produce the quantized representation of that segment; in the assignment step, the frame-segment assignment is updated according to dynamic time warping, while the temporal order of the entire sequence is preserved. (A bare-bones quantization sketch follows this record.) The proposed technique is evaluated on three public 3D human action datasets and achieves state-of-the-art performance. Finally, we propose a novel temporal order encoding approach that models the temporal dynamics of sequential data for human activity recognition. The algorithm encodes the temporal order of latent patterns extracted by subspace projection and generates a highly compact First-Take-All (FTA) feature vector representing the entire sequence. An optimization algorithm is further introduced to learn the projections so as to increase the discriminative power of the FTA feature. The compactness of the FTA feature makes it extremely efficient for human activity recognition via nearest neighbor search based on Hamming distance. Experimental results on two public human activity datasets demonstrate the advantages of the FTA feature over state-of-the-art methods in both accuracy and efficiency.
- Date Issued
- 2016
- Identifier
- CFE0006516, ucf:51367
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006516
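A bare-bones version of the aggregation step of dynamic temporal quantization described above: variable-length sequences of per-frame features are max-pooled into a fixed number of segments. The dissertation additionally refines the frame-to-segment assignment with dynamic time warping; this sketch assumes uniform assignment.

```python
# Quantize a (T, D) sequence of per-frame features into (n_segments, D) by
# max-pooling each segment, so sequences of different lengths become
# comparable fixed-size representations.
import numpy as np

def quantize(frames: np.ndarray, n_segments: int) -> np.ndarray:
    bounds = np.linspace(0, len(frames), n_segments + 1, dtype=int)
    return np.stack([frames[lo:hi].max(axis=0)
                     for lo, hi in zip(bounds[:-1], bounds[1:])])

seq_a = np.random.rand(47, 8)    # 47 frames, 8-D features
seq_b = np.random.rand(112, 8)   # different length, same quantized shape
print(quantize(seq_a, 10).shape, quantize(seq_b, 10).shape)  # (10, 8) (10, 8)
```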
- Title
- MongoDB Incidence Response.
- Creator
-
Morales, Cory, Lang, Sheau-Dong, Zou, Changchun, Guha, Ratan, University of Central Florida
- Abstract / Description
-
NoSQL (Not only SQL) databases have been gaining popularity over the last few years. Big companies such as Expedia, Shutterfly, MetLife, and Forbes use NoSQL databases to manage data on different projects. These databases can contain a variety of information, ranging from non-proprietary data to personally identifiable information like social security numbers, and they run the risk of cyber intrusion at all times. This paper gives a brief explanation of NoSQL and thoroughly explains a method of incident response with MongoDB, a NoSQL database. The method involves an automated process, built around a new self-built software tool, that analyzes MongoDB audit logs and generates an HTML page with indicators of possible intrusions and activities on the MongoDB instance. (A simplified log-scanning sketch follows this record.) When dealing with NoSQL databases there is much more to consider than with traditional RDBMSs, and since there is little out-of-the-box support, forensic tools can be very helpful.
- Date Issued
- 2016
- Identifier
- CFE0006538, ucf:51356
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006538
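A simplified version of the audit-log analysis described above: MongoDB audit logs are JSON documents (one per line) with fields such as atype, result, and remote. The sketch counts failed authentications per source IP and flags bursts; the sample records and threshold are illustrative.

```python
# Scan MongoDB-style audit records for repeated authentication failures and
# flag likely brute-force sources. A result of 0 means success; a non-zero
# result (e.g., 18 = AuthenticationFailed) means the attempt failed.
import json
from collections import Counter

# Inline stand-ins for lines read from auditLog.json in a real run.
sample_log = [
    '{"atype": "authenticate", "result": 18, "remote": {"ip": "10.0.0.9"}}',
    '{"atype": "authenticate", "result": 0,  "remote": {"ip": "10.0.0.7"}}',
] * 6   # repeat to simulate a burst of failures from 10.0.0.9

failed = Counter()
for line in sample_log:
    event = json.loads(line)
    if event.get("atype") == "authenticate" and event.get("result") != 0:
        failed[event.get("remote", {}).get("ip", "unknown")] += 1

for ip, count in failed.most_common():
    if count >= 5:   # illustrative brute-force threshold
        print(f"possible brute force from {ip}: {count} failed logins")
```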
- Title
- Placement of Mode and Wavelength Converters for Throughput Enhancement in Optical Networks.
- Creator
-
Abdulrahman, Ruaa, Bassiouni, Mostafa, Chatterjee, Mainak, Zou, Changchun, University of Central Florida
- Abstract / Description
-
The success of recent experiments transporting data using combined wavelength-division multiplexed (WDM) and mode-division multiplexed (MDM) transmission has generated optimism about attaining optical networks with unprecedented bandwidth capacity, exceeding the fundamental Shannon capacity limit attained by WDM alone. Optical mode converters and wavelength converters are devices that can be placed in future optical nodes (routers) to prevent or reduce the connection blocking rate and consequently increase network throughput. In this thesis, the specific problem of placing mode converters (MC) and mode-wavelength converters (MWC) in combined mode- and wavelength-division multiplexing (MWDM) networks is investigated. Four previously proposed wavelength converter placement heuristics are extended to handle the placement of MC and MWC in MWDM networks. A simple but effective placement method is then proposed, based on ranking the nodes with respect to the volume of received connection requests (see the sketch following this record). The results of extensive simulation tests evaluating the new method and comparing its performance with that of the other four heuristics are presented. The thesis provides extensive comparison results among the five converter placement methods using different network topologies and under different network loads. The results demonstrate the effectiveness of the proposed method in achieving lower blocking rates than the other, more complex converter placement heuristics.
- Date Issued
- 2014
- Identifier
- CFE0005118, ucf:50756
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005118
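The proposed placement method, as summarized above, reduces to a short ranking step. A minimal sketch, assuming per-node request counts are available from a traffic trace or a simulation warm-up:

```python
# Rank nodes by the volume of received connection requests and place the
# k available converters at the top-ranked nodes.
def place_converters(request_counts: dict, k: int) -> list:
    """request_counts: node -> number of received connection requests."""
    ranked = sorted(request_counts, key=request_counts.get, reverse=True)
    return ranked[:k]

counts = {"n1": 420, "n2": 97, "n3": 555, "n4": 210, "n5": 310}
print(place_converters(counts, 2))  # ['n3', 'n1']
```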
- Title
- Scene Understanding for Real Time Processing of Queries over Big Data Streaming Video.
- Creator
-
Aved, Alexander, Hua, Kien, Foroosh, Hassan, Zou, Changchun, Ni, Liqiang, University of Central Florida
- Abstract / Description
-
With heightened security concerns across the globe and the increasing need to monitor, preserve, and protect infrastructure and public spaces to ensure proper operation, quality assurance, and safety, numerous video cameras have been deployed. Accordingly, they also need to be monitored effectively and efficiently. However, relying on human operators to constantly monitor all the video streams is neither scalable nor cost effective. Humans can become subjective and fatigued, and even exhibit bias, and it is difficult to maintain high levels of vigilance when capturing, searching, and recognizing events that occur infrequently or in isolation. These limitations are addressed in the Live Video Database Management System (LVDBMS), a framework for managing and processing live motion imagery data. It enables rapid development of video surveillance software, much as traditional database applications are developed today. Video stream processing applications and ad hoc queries developed this way are able to "reuse" advanced image processing techniques that have already been developed, resulting in lower software development and maintenance costs. Furthermore, the LVDBMS can be intensively tested to ensure consistent quality across all associated video database applications. Its intrinsic privacy framework facilitates a formalized approach to the specification and enforcement of verifiable privacy policies; this is an important step towards enabling general privacy certification for video surveillance systems by leveraging a standardized privacy specification language. With the potential to impact many important fields, ranging from security and assembly line monitoring to wildlife studies and the environment, the broader impact of this work is clear. The privacy framework protects the general public from abusive use of surveillance technology, and success in addressing the "trust" issue will enable many new surveillance-related applications. Although this research focuses on video surveillance, the proposed framework has the potential to support many video-based analytical applications.
- Date Issued
- 2013
- Identifier
- CFE0004648, ucf:49900
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004648
- Title
- Improving fairness, throughput and blocking performance for long haul and short reach optical networks.
- Creator
-
Tariq, Sana, Bassiouni, Mostafa, Zou, Changchun, Turgut, Damla, Li, Guifang, University of Central Florida
- Abstract / Description
-
Innovations in optical communication are expected to transform the landscape of global communications, the Internet, and datacenter networks. This dissertation investigates several important issues in optical communication, including fairness, throughput, blocking probability, and differentiated quality of service (QoS). Novel algorithms and new approaches are presented to improve the performance of optical circuit switching (OCS) and optical burst switching (OBS) for long-haul and datacenter networks. Extensive simulation tests have been conducted to evaluate the effectiveness of the proposed algorithms over a number of network topologies, such as ring, mesh, and U.S. Long-Haul, high-performance computing (HPC) topologies such as 2D and 6D mesh torus, and modern datacenter topologies such as FatTree and BCube. Two new schemes are proposed for long-haul OBS networks to improve throughput and hop-count fairness. The idea is motivated by the observation that giving slightly higher priority to longer bursts over short bursts can significantly improve the throughput of OBS networks without adversely affecting hop-count fairness (a toy contention-resolution sketch follows this record). The results of extensive performance tests show that the proposed schemes improve the throughput of OBS networks and enhance hop-count fairness. Another contribution of this dissertation is the development of routing and wavelength assignment schemes for multimode fiber networks. Two additional schemes for long-haul multimode fiber networks are presented and evaluated: the first alleviates the fairness problem in OBS networks using wavelength-division multiplexing as well as mode-division multiplexing, while the second achieves higher throughput without sacrificing hop-count fairness. We also show the significant benefits of using both mode-division multiplexing and wavelength-division multiplexing in real-life short-distance optical networks, such as the optical circuit switching networks used in hybrid electronic-optical switching architectures for datacenters. We evaluate four mode and wavelength assignment heuristics, compare their throughput performance, and include preliminary results on the impact of the cascaded mode conversion constraint on network throughput. Datacenter and high-performance computing networks share a number of common performance goals, and a highly efficient adaptive mode-wavelength routing algorithm is presented for OBS networks to improve their throughput; the effectiveness of the proposed model is validated by extensive simulation results. To optimize bandwidth and maximize throughput of datacenters, an extension of TCP called Multipath TCP (MPTCP) is evaluated over an OBS network using dense-interconnect datacenter topologies. We propose a service differentiation scheme using MPTCP over OBS for datacenter traffic; evaluated over a mixed-workload datacenter traffic model, it is shown to provide tangible service differentiation between flows of different priority levels. Finally, an adaptive QoS differentiation architecture is proposed for software-defined optical datacenter networks using MPTCP over OBS, which prioritizes flows based on the current network state.
- Date Issued
- 2015
- Identifier
- CFE0005721, ucf:50146
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005721
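One plausible reading of the slightly-higher-priority-for-longer-bursts idea is sketched below; the scoring function and weight are invented for illustration, not taken from the dissertation.

```python
# When bursts contend for the same outgoing wavelength, let QoS class
# dominate but add a small log-length bonus, so longer bursts win ties and
# more payload survives contention without starving a higher QoS class.
from math import log2

def burst_priority(base: int, length_bytes: int, alpha: float = 0.1) -> float:
    return base + alpha * log2(length_bytes)   # alpha kept small on purpose

def resolve_contention(bursts):
    """bursts: list of (burst_id, base_priority, length_bytes)."""
    return max(bursts, key=lambda b: burst_priority(b[1], b[2]))

contending = [("short", 1, 1_500), ("long", 1, 64_000)]
print(resolve_contention(contending)[0])  # 'long' wins within the same class
```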
- Title
- Improving the performance of data-intensive computing on Cloud platforms.
- Creator
-
Dai, Wei, Bassiouni, Mostafa, Zou, Changchun, Wang, Jun, Lin, Mingjie, Bai, Yuanli, University of Central Florida
- Abstract / Description
-
Big Data, such as terabyte and petabyte datasets, is rapidly becoming the new norm for organizations across a wide range of industries. These widespread data-intensive computing needs have inspired innovations in parallel and distributed computing, which has been the effective way to tackle massive computing workloads for decades. One significant example is MapReduce, which is both a programming model for expressing distributed computations on huge datasets and an execution framework for data-intensive computing on commodity clusters. Since it was originally proposed by Google, MapReduce has become the most popular technology for data-intensive computing. While Google owns its proprietary implementation of MapReduce, an open source implementation called Hadoop has gained wide adoption in the rest of the world. The combination of Hadoop and Cloud platforms has made data-intensive computing much more accessible and affordable than ever before. This dissertation addresses the performance of data-intensive computing on Cloud platforms from three aspects: task assignment, replica placement, and straggler identification. Both task assignment and replica placement are closely related to load balancing, one of the key issues that can significantly affect the performance of parallel and distributed applications. While task assignment schemes strive to balance the data processing load among cluster nodes to minimize job completion time, replica placement policies aim to assign block replicas to cluster nodes according to their processing capabilities, to exploit data locality to the maximum extent (a small placement sketch follows this record). Straggler identification is also crucial for data-intensive computing, as the overall performance of parallel and distributed applications is often determined by the node with the lowest performance. The results of extensive evaluation tests confirm that the schemes and policies proposed in this dissertation improve the performance of data-intensive applications running on Cloud platforms.
- Date Issued
- 2017
- Identifier
- CFE0006731, ucf:51896
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006731
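As a small illustration of capability-aware replica placement in the spirit of the policies described above (the weighting scheme and capability numbers are invented, not the dissertation's policy):

```python
# Place block replicas so that nodes with higher processing capability
# receive proportionally more replicas, improving the odds that a task
# finds a local replica on a fast node.
import random

random.seed(0)   # reproducible demo output

def place_blocks(blocks, capabilities, replicas=3):
    """capabilities: node -> relative speed. Returns block -> [nodes]."""
    # Weighted pool: a node appears once per unit of capability.
    pool = [n for n, w in capabilities.items() for _ in range(w)]
    placement = {}
    for block in blocks:
        random.shuffle(pool)
        chosen = []
        for node in pool:            # pick distinct nodes, capability-weighted
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == replicas:
                break
        placement[block] = chosen
    return placement

caps = {"fast1": 4, "fast2": 4, "mid": 2, "slow": 1}
print(place_blocks(["blk_0", "blk_1"], caps))
```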
- Title
- Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures.
- Creator
-
Alghareb, Faris, DeMara, Ronald, Lin, Mingjie, Zou, Changchun, Jha, Sumit Kumar, Song, Zixia, University of Central Florida
- Abstract / Description
-
The revolution in chip manufacturing processes spanning five decades has proliferated high-performance, energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually magnified the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improving system reliability. Herein, we first propose optimized soft-error-resilient designs to improve the robustness of sub-micron computing systems. The proposed approaches deliver energy efficiency and tolerate double or multiple errors simultaneously, while incurring acceptable speed degradation compared to prior work (a conceptual redundancy-based voting sketch follows this record). Second, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems is investigated to highlight the approach that realizes favorable attributes, such as reduced critical-datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. In the last portion of this dissertation, we therefore design and evaluate soft-error-resilient NV-latching circuits that achieve low energy consumption, high computing performance, and superior soft-error tolerance, i.e., circuits concurrently able to tolerate Multiple Node Upsets (MNU), so as to potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives increase the energy efficiency and soft-error resilience of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable, energy-efficient CMOS logic and spintronic memory architectures with deeply-scaled devices operating at low voltages.
- Date Issued
- 2019
- Identifier
- CFE0007884, ucf:52765
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007884
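The abstract refers to redundancy-based SE-mitigation approaches; the classical member of that family is triple modular redundancy (TMR). The sketch below is a conceptual bit-level model of TMR majority voting, not a circuit from the dissertation:

```python
# Triple modular redundancy in miniature: three copies compute the result,
# and a bitwise majority vote masks a single upset copy.
def tmr_vote(a: int, b: int, c: int) -> int:
    # Each output bit is 1 iff at least two of the three inputs have a 1.
    return (a & b) | (a & c) | (b & c)

correct = 0b1011_0110
flipped = correct ^ 0b0000_1000   # a single-event upset flips one bit
assert tmr_vote(correct, correct, flipped) == correct
print(f"masked SEU: {tmr_vote(correct, correct, flipped):#010b}")
```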
- Title
- Advancing Practical Specification Techniques for Modern Software Systems.
- Creator
-
Singleton, John, Leavens, Gary, Jha, Sumit Kumar, Zou, Changchun, Hughes, Charles, Brennan, Joseph, University of Central Florida
- Abstract / Description
-
The pervasive nature of software (and its tendency to contain errors) has long been a concern of theoretical computer scientists. Many investigators have endeavored to produce theories, tools, and techniques for verifying the behavior of software systems. One of the most promising lines of research is formal specification, a subset of the larger field of formal methods. In formal specification, one composes a precise mathematical description of a software system and uses tools and techniques to ensure that the software that has been written conforms to this specification. Examples of such systems are Z notation, the Java Modeling Language, and many others. However, a fundamental problem plagues this line of research: the specifications themselves are often costly to produce and difficult to reuse. If the field of formal specification is to advance, we must develop sound techniques for reducing the cost of producing and reusing software specifications. The work presented in this dissertation lays out a path to producing sophisticated, automated tools for inferring specifications from large, complex code bases, tools for allowing engineers to share and reuse specifications, and specification languages for specifying information flow policies separately from program code. This dissertation introduces three main lines of research. First, I discuss a system that facilitates the authoring, sharing, and reuse of software specifications. Next, I discuss a technique that aims to reduce the cost of producing specifications by automatically inferring them. Finally, I discuss a specification language called Evidently, which aims to make information flow security policies easier to write, maintain, and enforce by untangling them from the code to which they are applied.
- Date Issued
- 2018
- Identifier
- CFE0007099, ucf:51953
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007099
- Title
- Masquerading Techniques in IEEE 802.11 Wireless Local Area Networks.
- Creator
-
Nakhila, Omar, Zou, Changchun, Turgut, Damla, Bassiouni, Mostafa, Chatterjee, Mainak, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
-
The airborne nature of wireless transmission offers a potential target for attackers to compromise IEEE 802.11 Wireless Local Area Networks (WLANs). In this dissertation, we explore current WLAN security threats and their corresponding defense solutions. We divide WLAN vulnerabilities into two aspects: client and administrator. The client-side investigation examines the Evil Twin Attack (ETA), while the administrator-side research targets Wi-Fi Protected Access II (WPA2). Three novel techniques are presented to detect ETA. The detection methods are based on (1) creating a secure connection to a remote server and detecting the change of the gateway's public IP address when switching from one Access Point (AP) to another (a rough sketch follows this record); (2) monitoring multiple Wi-Fi channels in random order, looking for specific data packets sent by the remote server; and (3) merging the previous solutions into one universal ETA detection method using Virtual Wireless Clients (VWCs). We also present a new vulnerability that allows an attacker to make the victim's smartphone consume data through the cellular network by starting a data download on the victim's phone without the victim's permission. On the administrator side, a new scheme is developed to intensify active dictionary attacks on WPA2, based on two novel ideas: first, the scheme connects multiple VWCs to the AP at the same time, each VWC with its own spoofed MAC address; second, each VWC can try many passphrases using a single wireless session. Furthermore, we present a new technique to avoid the bandwidth limitation imposed by Wi-Fi hotspots: the proposed method creates multiple VWCs to access the WLAN, and combining the individual bandwidth of each VWC increases the total bandwidth gained by the attacker. All proposed techniques have been implemented and evaluated in real-life scenarios.
- Date Issued
- 2018
- Identifier
- CFE0007063, ucf:51979
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007063
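A rough sketch of detection method (1) above, assuming a trusted echo service and a platform-specific AP-association hook (both are placeholders, not the dissertation's implementation):

```python
# Evil-twin heuristic: query a trusted echo service over TLS while associated
# with each AP advertising the same SSID; if the gateway's public IP changes
# between APs, a rogue AP with its own uplink is likely present.
import ssl
import urllib.request

ECHO_URL = "https://example.com/myip"   # hypothetical trusted echo service

def public_ip() -> str:
    ctx = ssl.create_default_context()  # TLS keeps the probe tamper-resistant
    with urllib.request.urlopen(ECHO_URL, context=ctx, timeout=5) as resp:
        return resp.read().decode().strip()

def detect_evil_twin(connect_to_ap) -> bool:
    """connect_to_ap: callable(i) that associates with the i-th AP seen for
    the SSID (platform-specific, stubbed here)."""
    ips = []
    for i in range(2):
        connect_to_ap(i)
        ips.append(public_ip())
    return ips[0] != ips[1]   # different gateways behind one SSID -> suspicious
```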
- Title
- Virtual Router Approach for Wireless Ad Hoc Networks.
- Creator
-
Ho, Ai, Hua, Kien, Guha, Ratan, Moshell, Jack, Zou, Changchun, Wang, Ching, University of Central Florida
- Abstract / Description
-
Wireless networks have become increasingly popular in recent years. There are two variations of mobile wireless networks: infrastructure mobile networks and infrastructureless mobile networks. The latter are also known as mobile ad hoc network (MANET). MANETs have no fixed routers. Instead, mobile nodes function as relay nodes or routers, which discover and maintain communication connections between source nodes and destination nodes for various data transmission sessions. In other words, an...
Show moreWireless networks have become increasingly popular in recent years. There are two variations of mobile wireless networks: infrastructure mobile networks and infrastructureless mobile networks. The latter are also known as mobile ad hoc network (MANET). MANETs have no fixed routers. Instead, mobile nodes function as relay nodes or routers, which discover and maintain communication connections between source nodes and destination nodes for various data transmission sessions. In other words, an MANET is a self-organizing multi-hop wireless network in which all nodes within a given geographical area participate in the routing and data forwarding process. Such networks are scalable and self-healing. They support mobile applications where an infrastructure is either not available (e.g., rescue operations and underground networks) or not desirable (e.g., harsh industrial environments).In many ad hoc networks such as vehicular networks, links among nodes change constantly and rapidly due to high node speed. Maintaining communication links of an established communication path that extends between source and destination nodes is a significant challenge in mobile ad hoc networks due to movement of the mobile nodes. In particular, such communication links are often broken under a high mobility environment. Communication links can also be broken by obstacles such as buildings in a street environment that block radio signal. In a street environment, obstacles and fast moving nodes result in a very short window of communication between nodes on different streets. Although a new communication route can be established when a break in the communication path occurs, repeatedly reestablishing new routes incurs delay and substantial overhead. To address this limitation, we introduce the Virtual Router abstraction in this dissertation. A virtual router is a dynamically-created logical router that is associated with a particular geographical area. Its routing functionality is provided by the physical nodes (i.e., mobile devices) currently within the geographical region served by the virtual router. These physical nodes take turns in forwarding data packets for the virtual router. In this environment, data packets are transmitted from a source node to a destination node over a series of virtual routers. Since virtual routers do not move, this scheme is much less susceptible to node mobility. There can be two virtual router approaches: Static Virtual Router (SVR) and Dynamic Virtual Router (DVR). In SVR, the virtual routers are predetermined and shared by all communication sessions over time. This scheme requires each mobile node to have a map of the virtual routers, and use a global positioning system (GPS) to determine if the node is within the geographical region of a given router. DVR is different from SVR with the following distinctions: (1) virtual routers are dynamically created for each communication sessions as needed, and deprecated after their use; (2) mobile nodes do not need to have a GPS; and (3) mobile nodes do not need to know whereabouts of the virtual routers.In this dissertation, we apply Virtual Router approach to address mobility challenges in routing data. We first propose a data routing protocol that uses SVR to overcome the extreme fast topology change in a street environment. We then propose a routing protocol that does not require node locations by adapting a DVR approach. 
We also explore how the Virtual Router approach can reduce the overhead of the initial route or location requests that many existing routing protocols use to find a destination. An initial request for a destination is expensive because all nodes must be reached to locate it. We propose two broadcast protocols: one for an open-terrain environment and one for a street environment. Both broadcast protocols apply SVR. We provide simulation results to demonstrate the effectiveness of the proposed protocols in handling high mobility; they show that the Virtual Router approach can achieve several times better performance than traditional routing and broadcast approaches based on physical routers (i.e., relay nodes).
- Date Issued
- 2011
- Identifier
- CFE0004119, ucf:49090
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004119
- Title
- Networking and security solutions for VANET initial deployment stage.
- Creator
-
Aslam, Baber, Zou, Changchun, Turgut, Damla, Bassiouni, Mostafa, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
-
Vehicular ad hoc networks (VANETs) are a special case of mobile networks in which vehicles equipped with computing/communicating devices (called "smart vehicles") are the mobile wireless nodes. However, the movement pattern of these nodes is no longer random, as in general mobile networks; rather, it is restricted to roads and streets. Vehicular networks have a hybrid architecture, combining infrastructure and infrastructure-less architectures. Direct vehicle-to-vehicle (V2V) communication is infrastructure-less, or ad hoc, in nature: vehicles traveling within communication range of each other form an ad hoc network. Vehicle-to-infrastructure (V2I) communication, on the other hand, has an infrastructure architecture in which vehicles connect to access points deployed along roads. These access points are known as roadside units (RSUs), and vehicles communicate with other vehicles and wired nodes through them. To provide various services to vehicles, RSUs are generally connected to each other and to the Internet. Direct RSU-to-RSU communication is also referred to as I2I communication.
The success of VANET depends on the existence of pervasive roadside infrastructure and a sufficient number of smart vehicles. Most VANET applications and services are based on one or both of these requirements. A fully matured VANET will have a pervasive roadside network and enough vehicle density to enable VANET applications. The initial deployment stage of VANET, however, will be characterized by a lack of pervasive roadside infrastructure and low market penetration of smart vehicles. It will be economically infeasible to initially install a pervasive and fully networked roadside infrastructure, which could result in the failure of applications and services that depend on V2I or I2I communications. Further, low market penetration means there are too few smart vehicles to enable V2V communication, which could result in the failure of services and applications that depend on V2V communications. The non-availability of pervasive connectivity to certification authorities, together with the dynamic location of each vehicle, will make it difficult and expensive to implement security solutions based on a central certificate management authority. Non-availability of pervasive connectivity will also affect the backend connectivity of vehicles to the Internet and the rest of the world. For economic reasons, the installation of roadside infrastructure will take a long time and will be incremental, resulting in a heterogeneous infrastructure with inconsistent capabilities. Similarly, smart vehicles will have varying degrees of capability. This will cause applications and services with very strict V2I or V2V communication requirements to fail. We have proposed several solutions to overcome the challenges described above that will be faced during the initial deployment stage of VANET.
Specifically, we have proposed: 1) a VANET architecture that can provide services with a limited number of heterogeneous roadside units and smart vehicles of varying capabilities; 2) a backend connectivity solution that connects smart vehicles to the Internet without requiring pervasive roadside infrastructure or a large number of smart vehicles; 3) a security architecture that does not depend on pervasive roadside infrastructure or a fully connected V2V network yet fulfills all the security requirements; and 4) optimization solutions for placing a limited number of RSUs within a given area to provide the best possible service to smart vehicles. The optimal placement solutions cover both urban and highway environments.
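The RSU placement problem in item 4 is a coverage-style optimization. As a hedged sketch of one standard baseline (not necessarily the dissertation's formulation), the greedy maximum-coverage heuristic below repeatedly places an RSU at the candidate site that covers the most still-uncovered demand points; the candidate sites, demand points, radio range, and budget are all illustrative assumptions.

```python
# Hedged sketch: greedy maximum-coverage placement of a limited number of RSUs.
# Candidate sites, demand points, radio range, and budget are illustrative
# assumptions, not the dissertation's actual problem data or formulation.
import math

def in_range(site, point, radius):
    """True if a demand point is within radio range of an RSU at `site`."""
    return math.dist(site, point) <= radius

def place_rsus(candidates, demand_points, budget, radius=300.0):
    """Greedily pick up to `budget` sites, each time choosing the site that
    covers the most not-yet-covered demand points (the classic greedy
    heuristic for maximum coverage, with a (1 - 1/e) approximation bound)."""
    uncovered = set(demand_points)
    remaining = list(candidates)
    chosen = []
    for _ in range(budget):
        best = max(remaining,
                   key=lambda s: sum(in_range(s, p, radius) for p in uncovered),
                   default=None)
        if best is None or not any(in_range(best, p, radius) for p in uncovered):
            break  # no remaining site covers anything new
        chosen.append(best)
        uncovered = {p for p in uncovered if not in_range(best, p, radius)}
        remaining.remove(best)
    return chosen, uncovered

sites = [(0, 0), (250, 0), (500, 0)]
demand = [(50, 10), (240, 30), (480, 5), (510, 40)]
print(place_rsus(sites, demand, budget=2))  # chosen sites, leftover uncovered points
```

The same skeleton adapts to highway scenarios by restricting candidate sites and demand points to positions along road segments.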
- Date Issued
- 2012
- Identifier
- CFE0004186, ucf:48993
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004186
- Title
- A Human-Centric Approach to Data Fusion in Post-Disaster Management: The Development of a Fuzzy Set Theory Based Model.
- Creator
-
Banisakher, Mubarak, McCauley, Pamela, Geiger, Christopher, Lee, Gene, Shi, Fuqian, Zou, Changchun, University of Central Florida
- Abstract / Description
-
It is critical to provide an efficient and accurate information system in the post-disaster phase so that individuals can access and obtain the necessary resources in a timely manner; however, current map-based post-disaster management systems present entire emergency resource lists without filtering them, which typically incurs a heavy computational cost. An effective post-disaster management system (PDMS) would also distribute emergency resources such as hospitals, storage, and transportation far more reasonably, to the greater benefit of individuals in the post-disaster period.
In this dissertation, a semi-supervised learning (SSL) based graph system was first constructed for the PDMS. The graph-based PDMS resource map was converted to a directed graph represented by an adjacency matrix, and decision information was then derived from the PDMS in two ways: a clustering operation and a graph-based semi-supervised optimization process. The PDMS was applied to emergency resource distribution in the post-disaster (response) phase, with a path optimization algorithm based on ant colony optimization (ACO) used to minimize cost; simulation results show the effectiveness of the proposed methodology. The analysis compared it with clustering-based algorithms under two improved ACO variants, the tour improvement algorithm (TIA) and the Min-Max Ant System (MMAS), and the results also show that the SSL-based graph is more effective for computing the optimal path in the PDMS.
This research further improved the map by combining the disaster map with the initial GIS-based map, which locates the target area while accounting for the influence of the disaster. First, both the initial map and the disaster map undergo a Gaussian transformation, and the histograms of all map images are acquired; all images are then put through a discrete wavelet transform (DWT), and a Gaussian fusion algorithm is applied to the DWT images. Second, the inverse DWT (iDWT) is applied to generate a new map for the post-disaster management system. Finally, simulations were carried out, and the results showed the effectiveness of the proposed method in comparison with other fusion algorithms, such as mean-mean fusion and max-UD fusion, under evaluation indices including entropy, spatial frequency (SF), and the image quality index (IQI). A fuzzy set model was also proposed to improve the representation capacity of nodes in this GIS-based PDMS.
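The fusion pipeline above (transform, DWT, subband fusion, iDWT) can be made concrete with a minimal sketch, assuming NumPy and PyWavelets. The simple weighted-average rule here is a stand-in for the dissertation's Gaussian fusion algorithm, whose exact form the abstract does not give.

```python
# Hedged sketch of the DWT -> fuse -> inverse-DWT map fusion pipeline.
# Assumes NumPy and PyWavelets; the weighted-average fusion rule below is a
# simple stand-in for the dissertation's Gaussian fusion algorithm.
import numpy as np
import pywt

def fuse_maps(initial_map, disaster_map, w=0.5, wavelet="haar"):
    """Fuse two equal-size grayscale map images in the wavelet domain."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(initial_map, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(disaster_map, wavelet)
    blend = lambda a, b: w * a + (1.0 - w) * b  # per-subband weighted average
    fused_coeffs = (blend(cA1, cA2),
                    (blend(cH1, cH2), blend(cV1, cV2), blend(cD1, cD2)))
    return pywt.idwt2(fused_coeffs, wavelet)  # back to the image domain

initial = np.random.rand(64, 64)   # stand-in for the initial GIS-based map
disaster = np.random.rand(64, 64)  # stand-in for the disaster map
fused = fuse_maps(initial, disaster)
print(fused.shape)  # (64, 64)
```

Fusing in the wavelet domain rather than pixel space lets the low-frequency approximation and high-frequency detail subbands be weighted independently, which is what makes a per-subband rule like the one above (or a Gaussian-weighted variant) meaningful.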
- Date Issued
- 2014
- Identifier
- CFE0005128, ucf:50702
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005128