Current Search: Guha, Ratan
- Title
- ENHANCING MESSAGE PRIVACY IN WIRED EQUIVALENT PRIVACY.
- Creator
- Purandare, Darshan, Guha, Ratan, University of Central Florida
- Abstract / Description
- The 802.11 standard defines the Wired Equivalent Privacy (WEP) protocol and the encapsulation of data frames. It is intended to provide data privacy at the level of a wired network. WEP has suffered attacks from hackers owing to certain security shortcomings in the protocol. Lately, many new protocols such as WiFi Protected Access (WPA), WPA2, Robust Secure Network (RSN), and 802.11i have come into being, yet their deployment is fairly limited. Despite its shortcomings, one cannot underestimate the importance of WEP: it remains the most widely used system, so we chose to address certain of its security issues and propose modifications to make it more secure. In this thesis we propose a modification to the existing WEP protocol to make it more secure. We achieve message privacy by ensuring that the encryption is not breached. The idea is to update the shared secret key frequently, based on factors such as network traffic and the number of transmitted frames. We also develop an Initialization Vector (IV) avoidance algorithm that eliminates the IV collision problem. The idea is to partition the IV bits among the wireless hosts in a predetermined manner unique to every node. We can thus use all 2^24 possible IVs without making them predictable for an attacker. Our proposed algorithm eliminates IV collisions, ensuring message privacy and further strengthening the security of the existing WEP. We show that frequent rekeying thwarts all kinds of cryptanalytic attacks on WEP.
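The IV-partitioning idea described in the abstract can be sketched as follows: the 24-bit IV space is split among N wireless hosts by assigning each host a private contiguous block, so no two hosts ever emit the same IV. The partition-by-block scheme and all names here are illustrative assumptions, not the thesis code.

```python
IV_BITS = 24

def iv_range(host_id: int, num_hosts: int) -> range:
    """Return the contiguous block of IVs assigned to one host."""
    total = 1 << IV_BITS                 # 2**24 possible IVs
    block = total // num_hosts           # IVs per host (remainder unused here)
    start = host_id * block
    return range(start, start + block)

def next_iv(host_id: int, num_hosts: int, counter: int) -> int:
    """Pick the counter-th IV from the host's private block (wraps around)."""
    r = iv_range(host_id, num_hosts)
    return r.start + (counter % len(r))

# Two different hosts can never collide:
a = {next_iv(0, 4, c) for c in range(1000)}
b = {next_iv(1, 4, c) for c in range(1000)}
assert a.isdisjoint(b)
```

Because each host draws only from its own block, IV collisions are eliminated by construction while the full 2^24 space remains usable across the swarm of hosts.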
- Date Issued
- 2005
- Identifier
- CFE0000479, ucf:46371
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000479
- Title
- EXTENDING DISTRIBUTED TEMPORAL PROTOCOL LOGIC TO A PROOF BASED FRAMEWORK FOR AUTHENTICATION PROTOCOLS.
- Creator
- Muhammad, Shahabuddin, Guha, Ratan, University of Central Florida
- Abstract / Description
- Running critical applications, such as e-commerce, in a distributed environment requires assurance of the identities of the participants communicating with each other. Providing such assurance in a distributed environment is a difficult task. The goal of a security protocol is to overcome the vulnerabilities of a distributed environment by providing a secure way to disseminate critical information into the network. However, designing a security protocol is itself an error-prone process. In addition to employing an authentication protocol, one also needs to make sure that the protocol successfully achieves its authentication goals. Distributed Temporal Protocol Logic (DTPL) provides a language for formalizing both local and global properties of distributed communicating processes, and it can be effectively applied to security protocol analysis as a model checker. Although a model checker can find flaws in a security protocol, it cannot provide a proof of the protocol's security properties. In this research, we extend the DTPL language and construct a set of axioms by transforming the unified framework of SVO logic into DTPL. This results in a deductive, proof-based framework for the verification of authentication protocols. The proposed framework represents authentication protocols and concisely proves their security properties. We formalize various features essential for achieving authentication, such as message freshness, key association, and source association, in our framework. Since analyzing security protocols depends greatly on associating a received message with its source, we separately analyze the source-association axioms, translate them into our framework, and extend the idea to public-key protocols. Developing a proof-based framework in temporal logic gives us another verification tool in addition to the existing model checker.
A security property of a protocol can be verified using our approach, or a design flaw can be identified using the model checker. In this way, we can analyze a security protocol from both perspectives while benefiting from the representation of distributed temporal protocol logic. A challenge-response strategy provides a higher level of abstraction for authentication protocols. Here, we also develop a set of formulae using the challenge-response strategy to analyze a protocol at an abstract level. This abstraction is adapted from the authentication tests of the graph-theoretic strand space method. First, we represent a protocol in the logic and then use the challenge-response strategy to develop authentication tests. These tests help us find possible attacks on authentication protocols by investigating the originators of received messages: identifying an unintended originator of a received message indicates a possible flaw in the protocol. We have applied our strategy to several well-known protocols and have successfully identified the attacks on them.
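The challenge-response idea behind the authentication tests above can be illustrated with a toy sketch: an initiator sends a fresh nonce, and only a party holding the shared key can transform it, so a correct response ties the message to its source. This is a minimal illustration under assumed names, not the DTPL/SVO formalism itself.

```python
import hmac, hashlib, os

KEY = b"shared-secret"          # assumed pre-established key

def challenge() -> bytes:
    return os.urandom(16)       # a fresh random nonce guarantees freshness

def respond(key: bytes, nonce: bytes) -> bytes:
    # Only a principal holding `key` can compute this transformation.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(key, nonce), response)

nonce = challenge()
assert verify(KEY, nonce, respond(KEY, nonce))          # honest responder passes
assert not verify(KEY, nonce, respond(b"wrong", nonce)) # intruder without key fails
```

An authentication test in this spirit asks: could anyone other than the intended principal have originated the received response? A "yes" answer flags a possible attack.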
- Date Issued
- 2007
- Identifier
- CFE0001799, ucf:47281
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001799
- Title
- DEVELOPING STRAND SPACE BASED MODELS AND PROVING THE CORRECTNESS OF THE IEEE 802.11I AUTHENTICATION PROTOCOL WITH RESTRICTED SECURITY OBJECTIVES.
- Creator
- Furqan, Zeeshan, Guha, Ratan, University of Central Florida
- Abstract / Description
- Security objectives enforce the security policy, which defines what is to be protected in a network environment; violation of these objectives induces security threats. We introduce an explicit notion of security objectives for a security protocol. This notion should precede the formal verification process: in its absence, a security protocol may be proven correct despite not being equipped to defend against all potential threats. In order to establish the correctness of security objectives, we present a formal model that provides a basis for the formal verification of security protocols. We also develop modal logic, proof-based, and multi-agent approaches using the strand space framework. In our modal logic approach, we present logical constructs to model a protocol's behavior in such a way that the participants can verify different security parameters by looking at their own runs of the protocol. In our proof-based model, we present a generic set of proofs to establish the correctness of a security protocol. We model the 802.11i protocol in our proof-based system and then formally verify the authentication property. The intruder in our model is imbued with powerful capabilities, and the repercussions of possible attacks are evaluated. Our analysis proves that the authentication of 802.11i is not compromised in the presented model. We further demonstrate how changes to our model yield a successful man-in-the-middle attack. Our multi-agent approach includes an explicit notion of multiple agents, which was missing in the strand space framework. A limitation of the strand space framework is the assumption that all the information available to a principal is either supplied initially or contained in messages received by that principal.
However, other important information may also be available to a principal in a security setting; for example, a principal may combine information from the different roles it plays in a protocol to launch a powerful attack. Our approach models the behavior of a distributed system as a multi-agent system. The presented model captures this combined information, a formal model of knowledge, and the beliefs of agents over time. After building this formal model, we present a formal proof of authentication of the 4-way handshake of the 802.11i protocol.
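The 4-way handshake analyzed above can be sketched in a highly simplified form: both sides derive a session key (PTK) from the shared PMK plus the exchanged nonces, and message integrity codes prove possession of that key. The key-derivation function below is an illustrative assumption, not the real IEEE 802.11i KDF.

```python
import hmac, hashlib, os

def derive_ptk(pmk: bytes, anonce: bytes, snonce: bytes, ap: bytes, sta: bytes) -> bytes:
    # Sort inputs so both sides compute the same value regardless of role.
    data = b"".join(sorted([anonce, snonce]) + sorted([ap, sta]))
    return hmac.new(pmk, data, hashlib.sha256).digest()

pmk = b"pairwise-master-key"                      # assumed shared in advance
anonce, snonce = os.urandom(32), os.urandom(32)   # nonces from messages 1 and 2
ap, sta = b"ap-mac", b"sta-mac"                   # hypothetical addresses

ptk_ap  = derive_ptk(pmk, anonce, snonce, ap, sta)
ptk_sta = derive_ptk(pmk, anonce, snonce, ap, sta)
assert ptk_ap == ptk_sta    # both ends agree on the session key

# An intruder without the PMK derives a different key, so its MICs fail:
assert derive_ptk(b"guess", anonce, snonce, ap, sta) != ptk_ap
```

The authentication proof in the dissertation establishes, in effect, that only principals holding the PMK can produce matching keys in this exchange, even against a powerful intruder.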
- Date Issued
- 2007
- Identifier
- CFE0001801, ucf:47380
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001801
- Title
- RESOURCE-CONSTRAINT AND SCALABLE DATA DISTRIBUTION MANAGEMENT FOR HIGH LEVEL ARCHITECTURE.
- Creator
- Gupta, Pankaj, Guha, Ratan, University of Central Florida
- Abstract / Description
- In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in the High Level Architecture. The High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to make multiple simulations interoperable and to facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA; it is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: the region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than these three algorithms, as it avoids the quadratic computation step they involve. The simulation results show that the P-Pruning DDM algorithm uses memory at run-time more efficiently and requires fewer multicast groups than the three algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement for it, and we present a performance evaluation study of this resource-efficient algorithm in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes memory at run-time more efficiently.
It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis, and we have extended the algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time: the dynamic P-Pruning algorithm tracks changes among the federates' regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines, describe the implementation details involved in integrating the P-Pruning algorithm with FDK, and report on our experiences.
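The quadratic step that P-Pruning avoids can be illustrated with a toy 1-D routing space: brute-force region matching compares every publisher region against every subscriber region, while a sort-based sweep prunes non-overlapping pairs early. The region format and the sweep-style pruning here are illustrative assumptions, not the dissertation's actual algorithm.

```python
def overlaps(a, b):
    """Overlap test for 1-D routing-space extents (lo, hi)."""
    return a[0] < b[1] and b[0] < a[1]

def brute_force_matches(pubs, subs):
    """O(n*m): the quadratic step used by naive region matching."""
    return {(i, j) for i, p in enumerate(pubs)
                   for j, s in enumerate(subs) if overlaps(p, s)}

def sweep_matches(pubs, subs):
    """Sort subscribers once, then scan only candidates per publisher."""
    order = sorted(range(len(subs)), key=lambda j: subs[j][0])
    out = set()
    for i, p in enumerate(pubs):
        for j in order:
            if subs[j][0] >= p[1]:      # all later subs start past p: prune
                break
            if overlaps(p, subs[j]):
                out.add((i, j))
    return out

pubs = [(0, 5), (10, 20)]
subs = [(3, 12), (18, 25), (30, 40)]
assert brute_force_matches(pubs, subs) == sweep_matches(pubs, subs)
```

Each matched (publisher, subscriber) pair would then be placed in a common multicast group, which is the output DDM hands to the communication layer.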
- Date Issued
- 2007
- Identifier
- CFE0001949, ucf:47447
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001949
- Title
- AN INTERACTIVE DISTRIBUTED SIMULATION FRAMEWORK WITH APPLICATION TO WIRELESS NETWORKS AND INTRUSION DETECTION.
- Creator
- Kachirski, Oleg, Guha, Ratan, University of Central Florida
- Abstract / Description
- In this dissertation, we describe WINDS, the portable, open-source distributed simulation framework that we have developed, targeting simulations of wireless network infrastructures. We present the framework's modular architecture and apply the framework to studies of mobility-pattern effects, routing, and intrusion detection mechanisms in simulations of large-scale wireless ad hoc, infrastructure, and fully mobile networks. Distributed simulations within the framework execute seamlessly and transparently to the user on a symmetric multiprocessor cluster computer or a network of computers, with no modifications to the code or user objects. A visual graphical interface precisely depicts simulation object states and interactions throughout the simulation execution, giving the user full control over the simulation in real time. The framework detects the network configuration and takes communication latency into consideration when dynamically adjusting the simulation clock, allowing the simulation to run on a heterogeneous computing system. The simulation framework is easily extensible to multi-cluster systems and computing grids. An entire simulation system can be constructed in a short time from user-created and supplied simulation components, including mobile nodes, base stations, routing algorithms, traffic patterns, and other objects. These objects are automatically compiled and loaded by the simulation system and are available for dynamic injection into the simulation at runtime. Using our distributed simulation framework, we have studied modern intrusion detection systems (IDS) and assessed the applicability of existing intrusion detection techniques to wireless networks. We have developed a mobile agent-based IDS targeting mobile wireless networks and introduced load-balancing optimizations, aimed at limited-resource systems, to improve intrusion detection performance.
The packet-based monitoring agents of our IDS employ a case-based reasoning engine that performs fast lookups of network packets against the existing SNORT-based intrusion rule set. Experiments were performed using intrusion data from the MIT Lincoln Laboratory studies and were executed on a cluster computer running our distributed simulation system.
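One plausible reading of the latency-aware clock adjustment described above is a conservative time-window computation: the coordinator advances the global simulation clock only as far as the slowest node's safe time, discounting each node's measured communication latency. The rule and all names below are assumptions for illustration, not WINDS internals.

```python
def safe_advance(local_times, latencies, lookahead=1.0):
    """Global clock = min over nodes of (local time - latency) + lookahead.

    A laggy node (high latency) holds the global clock back, so events it
    sends cannot arrive in another node's simulated past.
    """
    assert len(local_times) == len(latencies)
    return min(t - l for t, l in zip(local_times, latencies)) + lookahead

# Three nodes: a fast node is far ahead, but the slow, high-latency node
# constrains how far the global simulation clock may advance.
clock = safe_advance([100.0, 42.0, 57.0], [0.1, 2.0, 0.5])
assert clock == 41.0   # 42.0 - 2.0 + 1.0
```

This is the same conservative-synchronization intuition used by classic distributed discrete-event simulation: heterogeneous nodes stay causally consistent because no node outruns the guaranteed-safe window.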
- Date Issued
- 2005
- Identifier
- CFE0000642, ucf:46545
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000642
- Title
- REAL-TIME MONOCULAR VISION-BASED TRACKING FOR INTERACTIVE AUGMENTED REALITY.
- Creator
- Spencer, Lisa, Guha, Ratan, University of Central Florida
- Abstract / Description
- The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element of many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimal set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but it is an essential component of an augmented reality system: tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera and used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms.
Since the lighting in most environments where video monitoring is done is close to white (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outdoors), we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces that have been shown, using models of the physical properties of reflection, to be invariant to various lighting conditions, including view angle, light angle, light intensity, or light color. Our experiments show how well these derived quantities actually remain constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that are more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid-colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard: easy because of the single color and a shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid-colored objects that uses color and edge information and is fast enough for real-time operation. We also demonstrate a fast deinterlacing method to avoid "tearing" of fast-moving edges recorded by an interlaced camera, and optimization techniques that usually achieved a speedup of about 10x over an implementation that already used optimized image processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations. Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time, displaying the results as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection, and we present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data, leveraging the temporal coherence in video data to mitigate false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment, along with the two augmented reality applications, provides improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance.
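A standard example of the intensity-invariant color spaces discussed above is normalized rgb chromaticity, which divides out overall brightness: in the ideal reflection model, scaling the light intensity leaves the derived (r, g) values unchanged. This sketch uses that textbook space for illustration; it is not a claim about which space the dissertation ultimately selected.

```python
def chromaticity(R, G, B):
    """Normalized rgb: invariant to uniform intensity scaling."""
    s = R + G + B
    if s == 0:
        return (0.0, 0.0)       # black pixel: chromaticity undefined
    return (R / s, G / s)       # b = 1 - r - g is redundant

base   = chromaticity(120, 60, 30)
dimmed = chromaticity(60, 30, 15)   # same surface under half the intensity
assert all(abs(a - b) < 1e-9 for a, b in zip(base, dimmed))

other = chromaticity(30, 60, 120)   # a different color stays distinguishable
assert other != base
```

The experiment in the dissertation measures how well such derived quantities hold up with real cameras and materials, where sensor noise and non-ideal reflection break the perfect invariance shown here.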
- Date Issued
- 2006
- Identifier
- CFE0001075, ucf:46786
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001075
- Title
- ACCESS GAMES: A GAME THEORETIC FRAMEWORK FOR FAIR BANDWIDTH SHARING IN DISTRIBUTED SYSTEMS.
- Creator
- Rakshit, Sudipta, Guha, Ratan, University of Central Florida
- Abstract / Description
- In this dissertation, the central objective is to achieve fairness in bandwidth sharing among selfish users in a distributed system. Because of the inherently contention-based nature of distributed medium access and the selfishness of the users, the distributed medium access is modeled as a non-cooperative game, designated the Access Game. A p-CSMA-type medium access scenario is assumed for all users; therefore, in the Access Game each user has two actions to choose from, "transmit" and "wait". The outcome of the Access Game and the payoff to each user depend on the actions taken by all the users. Further, the utility function of each user is constructed as a function of both Quality of Service (QoS) and Battery Power (BP), and various scenarios involving the relative importance of QoS and BP are considered. It is observed that, in general, the Nash Equilibrium of the Access Game does not result in fairness. Therefore, a Constrained Nash Equilibrium is proposed as the solution. The advantage of a Constrained Nash Equilibrium is that it can be predicated on the fairness conditions, so the solution is guaranteed to result in fair sharing of bandwidth; its drawback is that it is not self-enforcing. Therefore, two mechanisms are proposed to design the Access Game in such a way that, in each case, the Nash Equilibrium of the Access Game satisfies fairness and maximizes throughput. Hence, with either of these mechanisms, the solution of the Access Game becomes self-enforcing.
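A toy two-user Access Game in normal form sketches the setup described above: each user chooses "transmit" or "wait", and simultaneous transmissions collide, wasting battery power for no QoS gain. The payoff numbers are illustrative assumptions, not values from the dissertation.

```python
from itertools import product

ACTIONS = ("transmit", "wait")
# PAYOFF[(a1, a2)] = (u1, u2): success = 1, collision = -1 (wasted BP), wait = 0
PAYOFF = {
    ("transmit", "transmit"): (-1, -1),
    ("transmit", "wait"):     ( 1,  0),
    ("wait",     "transmit"): ( 0,  1),
    ("wait",     "wait"):     ( 0,  0),
}

def is_nash(a1, a2):
    """Pure Nash Equilibrium: no player gains by deviating unilaterally."""
    u1, u2 = PAYOFF[(a1, a2)]
    no_dev1 = all(PAYOFF[(d, a2)][0] <= u1 for d in ACTIONS)
    no_dev2 = all(PAYOFF[(a1, d)][1] <= u2 for d in ACTIONS)
    return no_dev1 and no_dev2

nash = [p for p in product(ACTIONS, ACTIONS) if is_nash(*p)]
# Only the asymmetric outcomes are equilibria: one user monopolizes the
# channel while the other waits, which is exactly the unfairness that
# motivates the Constrained Nash Equilibrium.
assert nash == [("transmit", "wait"), ("wait", "transmit")]
```

Constraining the equilibrium to satisfy fairness conditions, or redesigning the payoffs via a mechanism, then yields outcomes that are both fair and self-enforcing.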
- Date Issued
- 2005
- Identifier
- CFE0000700, ucf:46603
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000700
- Title
- IMPLEMENTATION AND TESTING OF A BLACKBOX AND A WHITEBOX FUZZER FOR FILE COMPRESSION ROUTINES.
- Creator
- Tobkin, Toby, Guha, Ratan, University of Central Florida
- Abstract / Description
- Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: blackbox or whitebox. Blackbox fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated input, while whitebox fuzzers do. A tradeoff involved in the choice between blackbox and whitebox fuzzing is the rate at which inputs can be produced; since blackbox fuzzers need not do any "thinking" about the software under test to generate inputs, blackbox fuzzers can generate more inputs per unit time, all other factors being equal. The question of how blackbox and whitebox fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study with the intent of characterizing an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of blackbox and whitebox fuzzers. A blackbox fuzzer is implemented and then extended with a concolic execution program to make it whitebox. Both versions of the fuzzer are then used to run tests on some small programs and on parts of a file compression library.
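A minimal blackbox mutation fuzzer in the spirit described above: it knows nothing about the target's internals, just flips random bytes in a seed input and watches for crashes (here, uncaught exceptions). The toy "parser" target and all names are illustrative assumptions, not the thesis implementation.

```python
import random

def mutate(data, rng):
    """Flip a few random bytes of the seed input."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def toy_parser(data):
    """Target under test: crashes when a 'length' byte overruns the payload."""
    if data[:2] == b"HD":
        length = data[2]
        if length > len(data) - 3:
            raise IndexError("declared length exceeds payload")

def fuzz(seed, iterations=10_000):
    rng = random.Random(1234)       # fixed seed for repeatability
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            toy_parser(case)
        except Exception:
            crashes.append(case)
    return crashes

crashes = fuzz(b"HD\x04payload")
assert crashes    # blind mutation is enough to find the length-check bug
```

A whitebox extension would replace `mutate` with inputs derived from concolic execution of the target, trading generation speed for inputs that reach deeper branches.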
- Date Issued
- 2013
- Identifier
- CFH0004463, ucf:45110
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004463
- Title
- VCLUSTER: A PORTABLE VIRTUAL COMPUTING LIBRARY FOR CLUSTER COMPUTING.
- Creator
- Zhang, Hua, Guha, Ratan, University of Central Florida
- Abstract / Description
- Message passing has been the dominant parallel programming model in cluster computing, and libraries like the Message Passing Interface (MPI) and Parallel Virtual Machine (PVM) have proven their utility and efficiency through numerous applications in diverse areas. However, as clusters of Symmetric Multi-Processor (SMP) and heterogeneous machines become popular, conventional message passing models must be adapted to support this new kind of cluster efficiently. In addition, the Java programming language, with features like its object-oriented architecture, platform-independent bytecode, and native support for multithreading, is an attractive alternative language for cluster computing. This research presents a new parallel programming model and a library called VCluster that implements this model on top of a Java Virtual Machine (JVM). The programming model is based on virtual migrating threads, to support clusters of heterogeneous SMP machines efficiently. VCluster is implemented in 100% Java, utilizing the portability of Java to address the problems of heterogeneous machines. VCluster virtualizes computational and communication resources such as threads, computation states, and communication channels across multiple separate JVMs, which makes a mobile thread possible. Equipped with virtual migrating threads, it becomes feasible to balance the load of computing resources dynamically. Several large-scale parallel applications have been developed using VCluster to compare its performance and usage with other libraries. The results of the experiments show that VCluster makes it easier to develop multithreaded parallel applications than conventional libraries like MPI, while the performance of VCluster is comparable to that of MPICH, a widely used MPI library, combined with popular threading libraries like POSIX Threads and OpenMP.
In the next phase of our work, we implemented thread groups and thread migration to demonstrate the feasibility of dynamic load balancing in VCluster. We carried out experiments showing that the load can be dynamically balanced in VCluster, resulting in better performance. Thread groups also make it possible to implement collective communication functions between threads, which have proven useful in process-based libraries.
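The migrating-thread idea above rests on making a computation's state explicit and serializable, so it can be checkpointed on one host and resumed on another. VCluster itself is Java on the JVM; this Python sketch only illustrates the checkpoint/resume mechanism, with all names assumed.

```python
import pickle

def sum_squares_step(state):
    """One unit of work; `state` carries everything needed to continue."""
    state = dict(state)
    state["acc"] += state["i"] ** 2
    state["i"] += 1
    return state

def run(state, until):
    while state["i"] < until:
        state = sum_squares_step(state)
    return state

# Host A runs half the work, then "migrates" the thread by serializing state.
state = run({"i": 0, "acc": 0}, 50)
wire = pickle.dumps(state)            # bytes shipped to the destination host

# Host B deserializes and finishes; the result matches an unmigrated run.
resumed = run(pickle.loads(wire), 100)
assert resumed["acc"] == sum(k * k for k in range(100))
```

Because the state is self-contained, a load balancer is free to move the "thread" to whichever node is least loaded between steps, which is the dynamic load balancing the dissertation demonstrates.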
- Date Issued
- 2008
- Identifier
- CFE0002339, ucf:47809
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002339
- Title
- A FRAMEWORK FOR EFFICIENT DATA DISTRIBUTION IN PEER-TO-PEER NETWORKS.
- Creator
- Purandare, Darshan, Guha, Ratan, University of Central Florida
- Abstract / Description
- Peer-to-Peer (P2P) models are based on user altruism, wherein a user shares its content with other users in the pool while also having an interest in the content of the other nodes. Most P2P systems in their current form are not fair in terms of the content served by a peer versus the service it obtains from the swarm. Most systems suffer from the free-rider problem, where many high-uplink-capacity peers contribute much more than they should while many others get a free ride when downloading content. This leaves high-capacity nodes with very little or no motivation to contribute, and such resourceful nodes often exit the swarm or do not participate at all. The whole scenario is unfavorable and disappointing for P2P networks in general, where participation is a must and a very important feature: as the number of users in the swarm increases, the swarm becomes more robust and scalable. Other important issues in present-day P2P systems include below-optimal Quality of Service (QoS) in terms of download time, end-to-end latency, jitter rate, and uplink utilization, as well as excessive cross-ISP traffic and security and cheating threats. These problems in P2P networks serve as the motivation for the present work. To this end, we present an efficient data distribution framework for Peer-to-Peer (P2P) networks in the media streaming and file sharing domains. Experiments with our model, an alliance-based peering scheme for media streaming, show that such a scheme distributes data to the swarm members in a near-optimal way. Alliances are small groups of nodes that share data and other vital information in a symbiotic association. We show that alliance formation is a loosely coupled and effective way to organize the peers, and that our model maps to a small-world network; such networks form efficient overlay structures and are robust to network perturbations such as churn.
We present a comparative simulation-based study of our model against CoolStreaming/DONet (a popular model) and present a quantitative performance evaluation. Simulation results show that our model scales well under varying workloads and conditions, delivers near-optimal levels of QoS, reduces cross-ISP traffic considerably, and in most cases performs at par with or even better than CoolStreaming/DONet. In the next phase of our work, we focused on the BitTorrent P2P model, as it is the most widely used file sharing protocol. Many studies in academia and industry have shown that though BitTorrent scales very well, it is far from optimal in terms of fairness to end users, download time, and uplink utilization. Furthermore, random peering and data distribution in such a model lead to suboptimal performance. Lately, a new breed of BitTorrent clients like BitTyrant have demonstrated successful strategic attacks against BitTorrent. Strategic peers configure the BitTorrent client software such that, for very little or no contribution, they can obtain good download speeds. Such strategic nodes exploit the altruism in the swarm and consume resources at the expense of other honest nodes, creating an unfair swarm. More unfairness is generated in the swarm in the presence of nodes with heterogeneous bandwidth. We investigate and propose a new token-based anti-strategic policy that could be used in BitTorrent to minimize free riding by strategic clients. We also propose other policies against strategic attacks, including a smart tracker that denies repeated peer list requests from strategic clients, and blacklisting of misbehaving nodes that do not follow the protocol policies. These policies help to stop the strategic behavior of peers to a large extent and improve overall system performance. We also quantify and validate the benefits of using a bandwidth peer matching policy.
Our simulation results show that with the above proposed changes, uplink utilization and mean download time in a BitTorrent network improve considerably. The changes leave strategic clients with little or no incentive to behave greedily, which reduces free riding and creates a fairer swarm with very little computational overhead. Finally, we show that ours is a self-healing model in which user behavior changes from selfish to altruistic in the presence of the aforementioned policies.
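The token-based anti-strategic policy can be illustrated with a minimal credit scheme. This is a hypothetical sketch, not the thesis's actual accounting rules: the class name, the starting grant, and the one-token-per-block rates are all illustrative. The idea is simply that peers earn tokens by uploading and spend them to download, so a client that contributes nothing soon exhausts its balance.

```python
class TokenLedger:
    """Hypothetical token accounting for a BitTorrent-like swarm:
    peers earn tokens by uploading blocks and spend them to download.
    A peer with no balance (a free rider) is refused service."""

    def __init__(self, initial_tokens=4):
        self.balance = {}            # peer_id -> tokens
        self.initial = initial_tokens

    def join(self, peer):
        # A small starting grant lets new peers bootstrap.
        self.balance[peer] = self.initial

    def record_upload(self, peer, blocks=1):
        # Uploading earns one token per block served.
        self.balance[peer] += blocks

    def request_download(self, peer, blocks=1):
        # Downloading costs one token per block; deny free riders.
        if self.balance.get(peer, 0) < blocks:
            return False
        self.balance[peer] -= blocks
        return True

ledger = TokenLedger()
ledger.join("honest")
ledger.join("strategic")
ledger.record_upload("honest", 10)       # honest peer seeds 10 blocks
assert ledger.request_download("honest", 8)
# The strategic peer uploads nothing and runs out after its grant.
assert ledger.request_download("strategic", 4)
assert not ledger.request_download("strategic", 1)
```

Under such a scheme a strategic client's download rate is bounded by its upload contribution, which is exactly the incentive alignment the abstract describes.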
- Date Issued
- 2008
- Identifier
- CFE0002260, ucf:47864
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002260
- Title
- MODELING AND SIMULATION OF SOFT BODIES.
- Creator
-
Mesit, Jaruwan, Guha, Ratan, University of Central Florida
- Abstract / Description
-
As graphics and simulations become more realistic, techniques for approximating soft body objects, that is, non-solid objects such as liquids, gases, and cloth, are becoming increasingly common. The proposed generalized soft body method encompasses some specific cases of other existing models, enabling simulation of a variety of soft body materials by parameter adjustment. This research presents a general method of soft body modeling and simulation in which parameters for body control, surface deformation, volume control, and gravitation can be adjusted to simulate different types of soft bodies. In this method, the soft body mesh structure maintains the configuration among surface points while fluid modeling deforms the details of the surface. To maintain volume, an internal pressure is approximated by simulated molecules within the soft body. Free-fall motion of the soft body is generated by a gravitational field. Additionally, a constraint is specified based on the properties of the soft body being modeled. There are several standard methods to control soft body volume. This work illustrates the simplicity of the simulation by selecting a mass-spring system for the deformation of the connected points of a three-dimensional mesh, while an internal pressure force acts upon the surface triangles. To incorporate fluidity, smoothed particle hydrodynamics (SPH) is applied, where surface points are treated as free-moving particles interacting with neighboring surface points within an SPH radius. Because SPH is computationally expensive, it requires an efficient method to determine neighboring surface points. Collision detection between soft bodies and other rigid body objects also requires such fast neighbor detection. To determine the neighboring surface points, Axis Aligned Bounding Box (AABB), octree, and partitioning-and-hashing schemes have been investigated, and the results show that the partitioning-and-hashing scheme provides the best frame rate.
Thus a fast partitioning and hashing scheme is used in this research to reduce both computational time and memory requirements. The proposed soft body model aims to be applicable to several types of soft body applications, depending on the specific type of soft body deformation. The work presented in this dissertation details experiments with a variety of visually appealing fluid-like surfaces and organic materials animated at interactive speeds. The algorithm is also used to implement animated space-blob creatures in the Galactic Arms Race video game and a human lung simulation, demonstrating the effectiveness of the algorithm in both an actual video game engine and a medical application. The simulation results show that the general soft body model can be applied to several applications by adjusting the soft body parameters according to the desired appearance.
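A partitioning-and-hashing neighbor search of the kind the dissertation favors can be sketched as follows. This is a generic spatial-hash sketch, not the dissertation's exact scheme: points are bucketed into cubic cells whose side equals the SPH radius, so any neighbor within the radius must lie in one of the 27 cells surrounding a point's own cell.

```python
from collections import defaultdict

def build_hash(points, h):
    """Bucket each surface point into a cubic cell of side h
    (the SPH radius), keyed by integer cell coordinates."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // h), int(y // h), int(z // h))].append(i)
    return grid

def neighbors(points, grid, i, h):
    """Return indices of points within distance h of point i,
    scanning only the 27 cells around its own cell."""
    x, y, z = points[i]
    cx, cy, cz = int(x // h), int(y // h), int(z // h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                    if j == i:
                        continue
                    px, py, pz = points[j]
                    if (px - x) ** 2 + (py - y) ** 2 + (pz - z) ** 2 <= h * h:
                        result.append(j)
    return result

pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
grid = build_hash(pts, 1.0)
assert neighbors(pts, grid, 0, 1.0) == [1]   # the third point is too far away
```

Rebuilding the hash each frame is linear in the number of points, which is why this beats AABB lists and octrees on frame rate for dense, moving particle sets.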
- Date Issued
- 2010
- Identifier
- CFE0003477, ucf:48930
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003477
- Title
- DETECTING MALICIOUS SOFTWARE BY DYNAMIC EXECUTION.
- Creator
-
Dai, Jianyong, Guha, Ratan, University of Central Florida
- Abstract / Description
-
The traditional way to detect malicious software is based on signature matching. However, signature matching only detects known malicious software. In order to detect unknown malicious software, it is necessary to analyze the software for its impact on the system when the software is executed. In one approach, the software code can be statically analyzed for any malicious patterns. Another approach is to execute the program and determine the nature of the program dynamically. Since the execution of malicious code may have a negative impact on the system, the code must be executed in a controlled environment. For that purpose, we have developed a sandbox to protect the system. Potential malicious behavior is intercepted by hooking Win32 system calls. Using the developed sandbox, we detect unknown viruses using dynamic instruction sequence mining techniques. By collecting runtime instruction sequences in basic blocks, we extract instruction sequence patterns based on instruction associations. We build classification models with these patterns. By applying this classification model, we predict the nature of an unknown program. We compare our approach with several other approaches such as simple heuristics, N-gram, and static instruction sequences. We have also developed a method to identify a family of malicious software utilizing the system call trace. We construct a structural system call diagram from captured dynamic system call traces. We generate a smart system call signature using a profile hidden Markov model (PHMM) based on modularized system call blocks. The smart system call signature weakly identifies a family of malicious software.
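The instruction-sequence idea can be sketched in miniature. This is an illustrative simplification, not the thesis's actual mining algorithm: real features come from runtime basic-block traces and association-based pattern mining, while here toy mnemonic lists and plain sliding-window n-grams stand in for them.

```python
from collections import Counter

def ngrams(trace, n=2):
    """Sliding-window n-grams over a runtime instruction trace."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

def feature_vector(trace, vocabulary, n=2):
    """Binary feature vector: does each known n-gram pattern occur
    in this trace?  (A stand-in for the association-based pattern
    features described in the abstract.)"""
    grams = ngrams(trace, n)
    return [1 if g in grams else 0 for g in vocabulary]

# Toy traces: instruction mnemonics are invented for illustration.
benign = ["mov", "add", "mov", "ret"]
malicious = ["xor", "xor", "jmp", "xor", "xor"]
vocab = sorted(set(ngrams(benign)) | set(ngrams(malicious)))
assert feature_vector(malicious, vocab) != feature_vector(benign, vocab)
```

Vectors like these would then feed an ordinary classifier (the thesis builds classification models over the mined patterns), with prediction reduced to scoring an unknown program's feature vector.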
- Date Issued
- 2009
- Identifier
- CFE0002798, ucf:48141
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002798
- Title
- Experimenting with the finite element method in the calculation of radiosity form factors.
- Creator
-
Chesteen, Donna Marie, Guha, Ratan, Arts and Sciences
- Abstract / Description
-
University of Central Florida College of Arts and Sciences Thesis; Radiosity has been used to create some of the most photorealistic computer-generated images to date. The problem, however, is that radiosity algorithms are so computationally and memory expensive that few applications can employ them successfully. Form factor calculation is the most costly part of the process. This report describes an algorithm that uses the finite element method to reduce the time spent in the form factor calculation portion of the radiosity algorithm. This technique for form factor calculation significantly reduces the number of projections done at each iteration by using shape functions to determine the distribution of a form factor across the surface of a patch, and thus greatly reduces total run time.
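For context on what is being computed, the quantity at the heart of this cost is the form factor between patches. The sketch below shows only the standard point-to-small-patch approximation, not the thesis's FEM shape-function technique; geometry and areas are invented for the example.

```python
import math

def form_factor(p_i, n_i, p_j, n_j, area_j):
    """Point-to-small-patch approximation of the form factor from a
    point on patch i to a small patch j:
        F ~ (cos(theta_i) * cos(theta_j)) / (pi * r^2) * A_j
    where theta_i, theta_j are the angles between each (unit) normal
    and the line connecting the two points."""
    d = [b - a for a, b in zip(p_i, p_j)]
    r2 = sum(c * c for c in d)
    r = math.sqrt(r2)
    cos_i = sum(a * b for a, b in zip(n_i, d)) / r
    cos_j = -sum(a * b for a, b in zip(n_j, d)) / r
    if cos_i <= 0 or cos_j <= 0:
        return 0.0        # patches face away from each other
    return cos_i * cos_j * area_j / (math.pi * r2)

# Two unit-area patches one unit apart, directly facing each other.
f = form_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1), 1.0)
assert abs(f - 1 / math.pi) < 1e-9
```

Evaluating this (or a hemicube projection) for every patch pair is what makes form factors the bottleneck; the shape-function approach cited above amortizes the work by interpolating the form factor's distribution across each patch instead of reprojecting.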
- Date Issued
- 1995
- Identifier
- CFR0011926, ucf:53043
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFR0011926
- Title
- Enhancing Cognitive Algorithms for Optimal Performance of Adaptive Networks.
- Creator
-
Lugo-Cordero, Hector, Guha, Ratan, Wu, Annie, Stanley, Kenneth, University of Central Florida
- Abstract / Description
-
This research proposes to enhance some evolutionary algorithms in order to obtain optimal and adaptive network configurations. Due to the richness in technologies, low cost, and application usage, we consider Heterogeneous Wireless Mesh Networks. In particular, we evaluate the domains of network deployment, smart grids/homes, and intrusion detection systems. With an adaptive network as one of the goals, we consider a robust, noise-tolerant methodology that can quickly react to changes in the environment. Furthermore, the diversity of the performance objectives considered (e.g., power, coverage, anonymity, etc.) makes the objective function non-continuous and therefore non-differentiable. For these reasons, we enhance the Particle Swarm Optimization (PSO) algorithm with elements that aid in exploring for better configurations, obtaining both optimal and sub-optimal configurations. According to the results, the enhanced PSO promotes population diversity, leading to more unique optimal configurations for adapting to dynamic environments. The gradual complexification process produced simpler optimal solutions than those obtained via trial and error without the enhancements. Configurations obtained by the modified PSO are further tuned in real time upon environment changes. Such tuning occurs with a Fuzzy Logic Controller (FLC), which models human decision making by monitoring certain events in the algorithm. Examples of such events include the diversity and quality of solutions in the environment. The FLC is able to adapt the enhanced PSO to changes in the environment, causing more exploration or exploitation as needed. By adding a Probabilistic Neural Network (PNN) classifier, the enhanced PSO is again used as a filter to aid in intrusion detection classification. This approach reduces misclassifications by consulting neighbors for classification in the case of ambiguous samples.
The performance of ambiguous-vote filtering via PSO shows an improvement in classification, enabling the simple classifier to perform better than commonly used classifiers.
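A minimal PSO loop with a swarm-diversity measure, of the kind an FLC could monitor to trade off exploration against exploitation, can be sketched as follows. This is a textbook PSO on a toy objective, not the enhanced algorithm of the dissertation; all coefficients and bounds are illustrative.

```python
import random

def pso(objective, dim=2, swarm=12, iters=60, seed=1,
        w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization minimizing `objective`.
    Returns the best position found plus a per-iteration diversity
    history (mean distance of particles from the swarm centroid),
    the kind of signal a fuzzy controller could watch."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    diversity = []
    for _ in range(iters):
        centroid = [sum(p[d] for p in pos) / swarm for d in range(dim)]
        diversity.append(sum(
            sum((p[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5
            for p in pos) / swarm)
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest, diversity

sphere = lambda x: sum(c * c for c in x)
best, div = pso(sphere)
assert sphere(best) < 1.0          # converges toward the origin
assert div[-1] < div[0]            # swarm contracts as it converges
```

The falling diversity curve is exactly the event an FLC would react to: low diversity too early signals premature convergence and a need for more exploration.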
- Date Issued
- 2018
- Identifier
- CFE0007046, ucf:52003
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007046
- Title
- THE IMPLICATIONS OF VIRTUAL ENVIRONMENTS IN DIGITAL FORENSIC INVESTIGATIONS.
- Creator
-
Patterson, Farrah, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
This research paper discusses the role of virtual environments in digital forensic investigations. With virtual environments becoming more prevalent as an analysis tool in digital forensic investigations, it is becoming more important for digital forensic investigators to understand the limitations and strengths of virtual machines. The study aims to expose limitations within commercial closed source virtual machines and open source virtual machines. The study provides a brief overview of the history of digital forensic investigations and virtual environments, and concludes with an experiment on four common open and closed source virtual machines, examining the effects of the virtual machines on the host machine as well as the performance of the virtual machines themselves. My findings show that while the open source tools provided more control and freedom to the operator, the closed source tools were more stable and consistent in their operation. The significance of these findings can be further researched by applying them in the context of demonstrating the reliability of forensic techniques when virtual machines are presented as an analysis tool used in litigation.
- Date Issued
- 2011
- Identifier
- CFE0004152, ucf:49050
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004152
- Title
- HFS Plus File System Exposition and Forensics.
- Creator
-
Ware, Scott, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
The Macintosh Hierarchical File System Plus, HFS+, commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, implements internationally friendly file names, provides future support for named forks, and facilitates booting on non-Mac OS operating systems through different partition schemes. The HFS+ file system is efficient, yet complex. It makes use of B-trees to implement key data structures for maintaining metadata about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers. The vast majority of information on this subject is relegated to documentation in books, papers, and online content that directs the reader to C code, libraries, and include files. If one cannot interpret the complex C or Perl code implementations, the opportunity to understand the workflow within HFS+ is less than adequate to develop a basic understanding of the internals and how they work. The basic concepts learned from this research will facilitate a better understanding of the HFS+ file system and journal as changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command line interface (CLI) tool called fileXray. This tool is actually a custom implementation of the HFS+ file system that has the ability to examine file system, metadata, and data level information that is not available in other tools. We also use Apple's command line interface tool Terminal, the WinHex graphical user interface (GUI) editor, The Sleuth Kit command line tools, and DiffFork 1.1.9 to help document and illustrate the file system changes.
The processes used to document the pristine and changed versions of the file system in each experiment are very similar, such that the output files are identical with the exception of the actual change. Keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. The target device in this experiment is a two-gigabyte Universal Serial Bus (USB) thumb drive formatted with a Globally Unique Identifier (GUID) Partition Table. Where practical, HFS+ special files and data structures are manually parsed, documented, and illustrated.
- Date Issued
- 2012
- Identifier
- CFE0004341, ucf:49440
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004341
- Title
- Providing Context to the Clues: Recovery and Reliability of Location Data from Android Devices.
- Creator
-
Bell, Connie, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
Mobile device data continues to increase in significance in both civil and criminal investigations. Location data is often of particular interest. To date, research has established that the devices are location aware, incorporate a variety of resources to obtain location information, and cache the information in various ways. However, a review of the existing research suggests varying degrees of reliability of any such recovered location data. In an effort to clarify the issue, this project offers case studies of multiple Android mobile devices utilized in controlled conditions with known settings and applications in documented locations. The study uses data recovered from test devices to corroborate previously identified accuracy trends noted in research involving live-tracked devices, and it further offers detailed analysis strategies for the recovery of location data from devices themselves. A methodology for reviewing device data for possible artifacts that may allow an examiner to evaluate location data reliability is also presented. This paper also addresses emerging trends in device security and cloud storage, which may have significant implications for future mobile device location data recovery and analysis. Discussion of recovered cloud data introduces a distinct and potentially significant resource for investigators, and the paper addresses the cloud resources' advantages and limitations.
- Date Issued
- 2015
- Identifier
- CFE0005924, ucf:50837
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005924
- Title
- Modeling Crowd Mobility and Communication in Wireless Networks.
- Creator
-
Solmaz, Gurkan, Turgut, Damla, Bassiouni, Mostafa, Guha, Ratan, Goldiez, Brian, University of Central Florida
- Abstract / Description
-
This dissertation presents contributions to the fields of mobility modeling, wireless sensor networks (WSNs) with mobile sinks, and opportunistic communication in theme parks. The two main directions of our contributions are human mobility models and strategies for mobile sink positioning and communication in wireless networks. The first direction of the dissertation is related to human mobility modeling. Modeling the movement of human subjects is important for improving the performance of wireless networks with human participants and for validating such networks through simulations. Movements in areas such as theme parks follow specific patterns that are not taken into consideration by general purpose mobility models. We develop two types of mobility models of theme park visitors. The first model represents the typical movement of visitors as they visit various attractions and landmarks of the park. The second model represents the movement of visitors as they aim to evacuate the park after a natural or man-made disaster. The second direction focuses on the movement patterns of mobile sinks and their communication in responding to various events and incidents within the theme park. When an event occurs, the system needs to determine which mobile sink will respond to the event and along what trajectory. The overall objective is to optimize event coverage by minimizing the time needed for the chosen mobile sink to reach the incident area. We extend this work by considering the positioning problem of mobile sinks and the preservation of a connected topology. We propose a new variant of the p-center problem for optimal placement and communication of the mobile sinks. We provide a solution to this problem through collaborative event coverage of WSNs with mobile sinks. Finally, we develop a network model with opportunistic communication for tracking the evacuation of theme park visitors during disasters.
This model involves people with smartphones that store and carry messages. The mobile sinks are responsible for communicating with the smartphones and reaching out to the regions of the emergent events.
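The classical p-center problem underlying the sink-placement variant has a well-known greedy heuristic, Gonzalez's farthest-point algorithm, which is a 2-approximation of the optimal maximum client-to-sink distance. The sketch below shows that baseline heuristic only; the dissertation's variant adds connectivity constraints not modeled here, and the coordinates are invented.

```python
def greedy_p_center(points, p):
    """Gonzalez's farthest-point heuristic for p-center: repeatedly
    place the next sink at the point farthest from all sinks chosen
    so far, then report the resulting covering radius."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centers = [points[0]]
    while len(centers) < p:
        farthest = max(points, key=lambda q: min(d2(q, c) for c in centers))
        centers.append(farthest)
    radius2 = max(min(d2(q, c) for c in centers) for q in points)
    return centers, radius2 ** 0.5

# Two clusters of visitors; two sinks should cover each within radius 1.
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
centers, radius = greedy_p_center(pts, 2)
assert radius == 1.0
```

Minimizing this covering radius is the direct analogue of minimizing the worst-case time for a mobile sink to reach an incident.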
- Date Issued
- 2015
- Identifier
- CFE0006005, ucf:51024
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006005
- Title
- Performance Evaluation of TCP Multihoming for IPV6 Anycast Networks and Proxy Placement.
- Creator
-
Alsharfa, Raya, Bassiouni, Mostafa, Guha, Ratan, Lin, Mingjie, University of Central Florida
- Abstract / Description
-
In this thesis, the impact of multihomed clients and multihomed proxy servers on the performance of modern networks is investigated. The network model used in our investigation integrates three main components: the new one-to-any Anycast communication paradigm that facilitates server replication, the next generation Internet Protocol version 6 (IPv6) that offers a larger address space for packet switched networks, and the emerging multihoming trend of connecting devices and smart phones to more than one Internet service provider, thereby acquiring more than one IP address. The design of a previously proposed proxy IP Anycast service is modified to integrate user device multihoming and IPv6 routing. The impact of user device multihoming (single-homed, dual-homed, and triple-homed) on network performance is extensively analyzed using realistic network topologies and different traffic scenarios of client-server TCP flows. Network throughput, packet latency, and packet loss rate are the three performance metrics used in our analysis. Performance comparisons between the Anycast proxy service and the native IP Anycast protocol are presented. The number of Anycast proxy servers and their placement are studied. Five placement methods have been implemented and evaluated, including random placement, highest-traffic placement, highest number of active interfaces placement, K-DS placement, and a new hybrid placement method. The work presented in this thesis provides new insight into the performance of some new emerging communication paradigms and how to improve their design. Although the work has been limited to investigating Anycast proxy servers, the results can be beneficial and applicable to other types of overlay proxy services such as multicast proxies.
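Of the five placement methods, the highest-traffic heuristic is the simplest to illustrate: rank candidate nodes by observed traffic volume and place proxies at the top k. This is a generic sketch, not the thesis's implementation; the node names and traffic figures are invented.

```python
def highest_traffic_placement(traffic, k):
    """Pick the k nodes with the highest observed traffic volume as
    Anycast proxy sites (one of the placement heuristics compared
    in the abstract)."""
    ranked = sorted(traffic, key=traffic.get, reverse=True)
    return ranked[:k]

# Illustrative per-node traffic volumes (e.g., flows observed per hour).
traffic = {"A": 120, "B": 340, "C": 75, "D": 500, "E": 210}
assert highest_traffic_placement(traffic, 3) == ["D", "B", "E"]
```

Random placement ignores this signal entirely, which is why traffic-aware and dominating-set (K-DS) style methods tend to reduce client-to-proxy latency in comparisons like the one described.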
- Date Issued
- 2015
- Identifier
- CFE0005919, ucf:50825
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005919
- Title
- MongoDB Incidence Response.
- Creator
-
Morales, Cory, Lang, Sheau-Dong, Zou, Changchun, Guha, Ratan, University of Central Florida
- Abstract / Description
-
NoSQL (Not only SQL) databases have been gaining popularity over the last few years. Big companies such as Expedia, Shutterfly, MetLife, and Forbes use NoSQL databases to manage data on different projects. These databases can contain a variety of information ranging from nonproprietary data to personally identifiable information like social security numbers. Databases run the risk of cyber intrusion at all times. This paper gives a brief explanation of NoSQL and thoroughly explains a method of incident response with MongoDB, a NoSQL database. The method involves an automated process with a new self-built software tool that analyzes MongoDB audit logs and generates an HTML page with indicators showing possible intrusions and activities on the MongoDB instance. When dealing with NoSQL databases there is a lot more to consider than with traditional RDBMSs, and since there is not a lot of out-of-the-box support, forensic tools can be very helpful.
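The core of such a tool is a scan over MongoDB's audit log, which is written as one JSON document per line. The sketch below flags clients with repeated authentication failures; the field names (atype, result, remote) follow MongoDB's documented audit message format, but the threshold, the failure code shown, and the sample entries are illustrative rather than taken from the thesis's tool.

```python
import json
from collections import Counter

def failed_auth_report(log_lines, threshold=3):
    """Scan MongoDB audit-log entries (one JSON document per line)
    and count failed 'authenticate' events per client IP.  In the
    audit format, result 0 means success and a nonzero result is a
    failure.  Clients at or over `threshold` failures are flagged."""
    failures = Counter()
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("atype") == "authenticate" and entry.get("result", 0) != 0:
            failures[entry.get("remote", {}).get("ip", "unknown")] += 1
    return {ip: n for ip, n in failures.items() if n >= threshold}

# Illustrative audit entries: three failures from one client, one success.
log = [
    '{"atype": "authenticate", "result": 18, "remote": {"ip": "10.0.0.9"}}',
    '{"atype": "authenticate", "result": 18, "remote": {"ip": "10.0.0.9"}}',
    '{"atype": "authenticate", "result": 18, "remote": {"ip": "10.0.0.9"}}',
    '{"atype": "authenticate", "result": 0,  "remote": {"ip": "10.0.0.5"}}',
]
assert failed_auth_report(log) == {"10.0.0.9": 3}
```

A report like this dictionary is what a tool such as the one described would render into its HTML indicator page.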
- Date Issued
- 2016
- Identifier
- CFE0006538, ucf:51356
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006538