Current Search: Lang, Sheau-Dong
- Title
- NETWORK INTRUSION DETECTION: MONITORING, SIMULATION AND VISUALIZATION.
- Creator
-
Zhou, Mian, Lang, Sheau-Dong, University of Central Florida
- Abstract / Description
-
This dissertation presents our work on network intrusion detection and intrusion simulation. The work in intrusion detection consists of two different network anomaly-based approaches. The work in intrusion simulation introduces a model using explicit traffic generation for packet-level traffic simulation. The process of anomaly detection is to first build profiles of normal network activity and then mark any events or activities that deviate from the normal profiles as suspicious. Based on the different schemes for creating the normal activity profiles, we introduce two approaches to intrusion detection. The first is a frequency-based approach that creates a normal frequency profile from the periodic patterns present in the time series formed by the traffic. It targets attacks conducted by running pre-written scripts, which automate the process of attempting connections to various ports or sending packets with fabricated payloads. The second approach builds the normal profile from variations in the connection-based behavior of each single computer. The deviations produced by each individual computer are quantified by a weight-assignment scheme and used to build a weighted link graph representing the overall traffic abnormalities. The system functions as a distributed personal IDS that also provides centralized traffic analysis through graphical visualization. It provides finer control over the internal network by focusing on the connection-based behavior of each single computer. For network intrusion simulation, we explore an alternative method of network traffic simulation using explicit traffic generation. In particular, we build a model to replay the standard DARPA traffic data or traffic data captured from a real environment. The replayed traffic data is mixed with attacks, such as DoS and probe attacks, which create apparent abnormal traffic-flow patterns. With explicit traffic generation, every packet ever sent by the victim and attacker is formed in the simulation model and travels strictly following the time and path criteria extracted from the real scenario. Thus, the model provides a promising aid in the study of intrusion detection techniques. (See the sketch following this record.)
- Date Issued
- 2005
- Identifier
- CFE0000679, ucf:46484
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000679
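The frequency-based profiling idea in the record above lends itself to a compact illustration. Below is a minimal sketch, not the dissertation's implementation: it compares the magnitude spectrum of a per-bin connection-count series against a learned baseline, on the premise that scripted attacks inject strong periodic peaks. The Poisson traffic model, bin counts, and L1 distance score are illustrative assumptions.

```python
import numpy as np

def frequency_profile(counts: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum of a per-bin connection-count series."""
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    total = spectrum.sum()
    return spectrum / total if total > 0 else spectrum

def anomaly_score(baseline: np.ndarray, observed: np.ndarray) -> float:
    """L1 distance between spectra; scripted scans add strong periodic peaks."""
    n = min(len(baseline), len(observed))
    return float(np.abs(baseline[:n] - observed[:n]).sum())

# Toy demo: normal traffic is noisy; a scripted sweep probes every 2 bins.
rng = np.random.default_rng(0)
normal = rng.poisson(5, 256).astype(float)
attack = normal.copy()
attack[::2] += 20  # periodic bursts from an automated script

baseline = frequency_profile(normal)
print(anomaly_score(baseline, frequency_profile(rng.poisson(5, 256).astype(float))))
print(anomaly_score(baseline, frequency_profile(attack)))  # noticeably larger
```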
- Title
- ON THE APPLICATION OF LOCALITY TO NETWORK INTRUSION DETECTION: WORKING-SET ANALYSIS OF REAL AND SYNTHETIC NETWORK SERVER TRAFFIC.
- Creator
-
Lee, Robert, Lang, Sheau-Dong, University of Central Florida
- Abstract / Description
-
Keeping computer networks safe from attack requires ever-increasing vigilance. Our work on applying locality to network intrusion detection is presented in this dissertation. Network servers that allow connections from both the internal network and the Internet are vulnerable to attack from all sides. Analysis of the behavior of incoming connections for properties of locality can be used to create a normal profile for such network servers. Intrusions can then be detected due to their abnormal behavior. Data was collected from a typical network server both under normal conditions and under specific attacks. Experiments show that connections to the server do in fact exhibit locality, and attacks on the server can be detected through their violation of locality. Key to the detection of locality is a data structure called a working-set, which is a kind of cache of certain data related to network connections. Under real network conditions, we have demonstrated that the working-set behaves in a manner consistent with locality. Determining the reasons for this behavior is our next goal. A model that generates synthetic traffic based on actual network traffic allows us to study basic traffic characteristics. Simulation of working-set processing of the synthetic traffic shows that it behaves much like actual traffic. Attacks inserted into a replay of the synthetic traffic produce working-set responses similar to those produced in actual traffic. In the future, our model can be used to further the development of intrusion detection strategies. (See the sketch following this record.)
- Date Issued
- 2009
- Identifier
- CFE0002718, ucf:48171
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002718
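The working-set structure described in the record above is easy to picture in code. The sketch below is an illustrative reconstruction, not the dissertation's implementation: a fixed-capacity, LRU-evicted set of recent connection sources whose hit rate stays high under locality and collapses when a scan floods it with new addresses. The capacity and the synthetic inputs are assumptions.

```python
from collections import OrderedDict

class WorkingSet:
    """Fixed-size cache of recently seen connection sources (LRU eviction)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: OrderedDict = OrderedDict()
        self.hits = 0
        self.lookups = 0

    def observe(self, source: str) -> None:
        self.lookups += 1
        if source in self.entries:
            self.hits += 1
            self.entries.move_to_end(source)  # refresh recency on a hit
        else:
            self.entries[source] = None
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recent

    @property
    def hit_rate(self) -> float:
        return self.hits / self.lookups if self.lookups else 0.0

# Local clients repeat (locality -> high hit rate); a scan never repeats.
ws = WorkingSet(capacity=64)
for i in range(1000):
    ws.observe(f"10.0.0.{i % 20}")       # normal, repeating sources
print(f"normal hit rate: {ws.hit_rate:.2f}")

ws2 = WorkingSet(capacity=64)
for i in range(1000):
    ws2.observe(f"198.51.100.{i}")       # scan: all-new sources violate locality
print(f"scan hit rate: {ws2.hit_rate:.2f}")
```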
- Title
- THE IMPLICATIONS OF VIRTUAL ENVIRONMENTS IN DIGITAL FORENSIC INVESTIGATIONS.
- Creator
-
Patterson, Farrah, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
This research paper discusses the role of virtual environments in digital forensic investigations. With virtual environments becoming more prevalent as an analysis tool in digital forensic investigations, it is becoming more important for digital forensic investigators to understand the limitations and strengths of virtual machines. The study aims to expose limitations within commercial closed-source virtual machines and open-source virtual machines. The study provides a brief overview of the history of digital forensic investigations and virtual environments, and concludes with an experiment on four common open- and closed-source virtual machines, examining the effects of the virtual machines on the host machine as well as the performance of the virtual machines themselves. The findings show that while the open-source tools provided more control and freedom to the operator, the closed-source tools were more stable and consistent in their operation. The significance of these findings can be further researched by applying them toward demonstrating the reliability of forensic techniques when virtual machines are presented as analysis tools in litigation.
- Date Issued
- 2011
- Identifier
- CFE0004152, ucf:49050
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004152
- Title
- HFS Plus File System Exposition and Forensics.
- Creator
-
Ware, Scott, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
The Macintosh Hierarchical File System Plus (HFS+), commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, implements internationally friendly file names, adds future support for named forks, and facilitates booting on non-Mac OS operating systems through different partition schemes. The HFS+ file system is efficient, yet complex. It makes use of B-trees to implement key data structures for maintaining meta-data about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers. The vast majority of information on this subject is relegated to documentation in books, papers, and online content that direct the reader to C code, libraries, and include files. For those who cannot interpret the complex C or Perl implementations, the opportunity to understand the workflow within HFS+ is less than adequate for developing a basic understanding of the internals and how they work. The basic concepts learned from this research will facilitate a better understanding of the HFS+ file system and journal as changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command-line interface (CLI) tool called fileXray. This tool is actually a custom implementation of the HFS+ file system that can examine file-system, meta-data, and data-level information that is not available in other tools. We also use Apple's command-line tool Terminal, the WinHex graphical user interface (GUI) editor, The Sleuth Kit command-line tools, and DiffFork 1.1.9 to help document and illustrate the file system changes. The processes used to document the pristine and changed versions of the file system in each experiment are very similar, such that the output files are identical except for the actual change. Keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. The target device in this experiment is a two-gigabyte Universal Serial Bus (USB) thumb drive formatted with a Globally Unique Identifier (GUID) Partition Table. Where practical, HFS+ special files and data structures are manually parsed, documented, and illustrated. (See the sketch following this record.)
- Date Issued
- 2012
- Identifier
- CFE0004341, ucf:49440
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004341
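For readers who want to poke at HFS+ structures directly, a small sketch follows. It parses the leading fields of the HFS+ volume header, which Apple's Technote 1150 places at byte offset 1024 of the volume, with big-endian fields and the signature 'H+'. The image path is a placeholder, and this is a minimal illustration rather than a substitute for tools like fileXray.

```python
import struct

# Leading fields of the HFS+ volume header, 1024 bytes into the volume,
# big-endian, per Apple Technote 1150.
HEADER_OFFSET = 1024
HEADER = struct.Struct(">2sHI4s10I")

def read_volume_header(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        f.seek(HEADER_OFFSET)
        raw = f.read(HEADER.size)
    (signature, version, attributes, last_mounted, journal_info_block,
     create_date, modify_date, backup_date, checked_date,
     file_count, folder_count, block_size, total_blocks,
     free_blocks) = HEADER.unpack(raw)
    if signature != b"H+":
        raise ValueError(f"not an HFS+ volume (signature {signature!r})")
    return {
        "version": version,
        "attributes": attributes,
        "lastMountedVersion": last_mounted.decode("ascii", "replace"),
        "journalInfoBlock": journal_info_block,
        "fileCount": file_count,
        "folderCount": folder_count,
        "blockSize": block_size,
        "totalBlocks": total_blocks,
        "freeBlocks": free_blocks,
    }

# Usage (hypothetical image path): dump the header before and after creating
# a file to watch fileCount and freeBlocks change, mirroring the diff-based
# experiments described above.
# print(read_volume_header("/tmp/usb_hfsplus.dd"))
```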
- Title
- Providing Context to the Clues: Recovery and Reliability of Location Data from Android Devices.
- Creator
-
Bell, Connie, Lang, Sheau-Dong, Guha, Ratan, Zou, Changchun, University of Central Florida
- Abstract / Description
-
Mobile device data continues to increase in significance in both civil and criminal investigations. Location data is often of particular interest. To date, research has established that the devices are location aware, incorporate a variety of resources to obtain location information, and cache the information in various ways. However, a review of the existing research suggests varying degrees of reliability of any such recovered location data. In an effort to clarify the issue, this project offers case studies of multiple Android mobile devices utilized in controlled conditions with known settings and applications in documented locations. The study uses data recovered from test devices to corroborate previously identified accuracy trends noted in research involving live-tracked devices, and it further offers detailed analysis strategies for the recovery of location data from devices themselves. A methodology for reviewing device data for possible artifacts that may allow an examiner to evaluate location data reliability is also presented. This paper also addresses emerging trends in device security and cloud storage, which may have significant implications for future mobile device location data recovery and analysis. Discussion of recovered cloud data introduces a distinct and potentially significant resource for investigators, and the paper addresses the cloud resources' advantages and limitations. (See the sketch following this record.)
- Date Issued
- 2015
- Identifier
- CFE0005924, ucf:50837
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005924
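As a flavor of the artifact review described in the record above, here is a minimal sketch that walks a recovered SQLite database of location fixes and separates high- from low-accuracy points. The database path, table name, and columns are hypothetical; real Android location caches vary by OS version and app, which is exactly why the reliability review matters.

```python
import sqlite3

def load_fixes(db_path: str, max_accuracy_m: float = 50.0):
    """Pull fixes from a (hypothetical) recovered cache table and split them
    by reported horizontal accuracy in meters."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT timestamp, latitude, longitude, accuracy, source "
        "FROM location_fixes ORDER BY timestamp"
    ).fetchall()
    con.close()
    reliable, suspect = [], []
    for ts, lat, lon, acc, src in rows:
        bucket = reliable if acc is not None and acc <= max_accuracy_m else suspect
        bucket.append((ts, lat, lon, acc, src))
    return reliable, suspect

# An examiner would corroborate each 'reliable' fix against known device
# settings, installed apps, and any cloud-synced copies before treating it
# as probative, per the methodology above.
```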
- Title
- MongoDB Incidence Response.
- Creator
-
Morales, Cory, Lang, Sheau-Dong, Zou, Changchun, Guha, Ratan, University of Central Florida
- Abstract / Description
-
NoSQL (Not only SQL) databases have been gaining popularity over the last few years. Big companies such as Expedia, Shutterfly, MetLife, and Forbes use NoSQL databases to manage data on different projects. These databases can contain a variety of information ranging from nonproprietary data to personally identifiable information like social security numbers. Databases run the risk of cyber intrusion at all times. This paper gives a brief explanation of NoSQL and thoroughly explains a method of incident response with MongoDB, a NoSQL database provider. This method involves an automated process, built around a new self-built software tool, that analyzes MongoDB audit logs and generates an HTML page with indicators showing possible intrusions and activities on the MongoDB instance. When dealing with NoSQL databases there is a lot more to consider than with traditional RDBMSs, and since there is little out-of-the-box support, forensic tools can be very helpful. (See the sketch following this record.)
- Date Issued
- 2016
- Identifier
- CFE0006538, ucf:51356
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006538
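The audit-log-to-HTML pipeline is simple to outline. The sketch below is a minimal stand-in for the thesis's tool, not the tool itself: it reads a MongoDB audit log in JSON-lines format, counts event types, flags failed authentications (audit entries carry a numeric result field, nonzero on failure), and writes a tiny HTML summary. The log path and the choice of indicators are assumptions.

```python
import json
from collections import Counter

def analyze_audit_log(path: str):
    """Count event types and collect failed logins from a JSON-lines audit log."""
    by_type = Counter()
    failures = []
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            by_type[event.get("atype", "unknown")] += 1
            # A nonzero 'result' marks a failed action, e.g. a bad password
            # on an 'authenticate' event.
            if event.get("atype") == "authenticate" and event.get("result", 0) != 0:
                failures.append((event.get("ts"), event.get("remote")))
    return by_type, failures

def write_report(by_type: Counter, failures: list, out: str = "report.html") -> None:
    rows = "".join(f"<tr><td>{t}</td><td>{n}</td></tr>" for t, n in by_type.items())
    flags = "".join(f"<li>failed login at {ts} from {who}</li>" for ts, who in failures)
    with open(out, "w") as f:
        f.write(f"<html><body><table>{rows}</table><ul>{flags}</ul></body></html>")

# Usage with a hypothetical log path:
# by_type, failures = analyze_audit_log("/var/log/mongodb/auditLog.json")
# write_report(by_type, failures)
```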
- Title
- Spatial Models with Specific Error Structures.
- Creator
-
Adu, Nathaniel, Richardson, Gary, Mohapatra, Ram, Song, Zixia, Lang, Sheau-Dong, University of Central Florida
- Abstract / Description
-
The purpose of this dissertation is to study the first-order autoregressive model in the spatial context with specific error structures. We begin by supposing that the error structure has long memory in both the i and the j components. Whenever the model parameters alpha and beta equal one, the limiting distribution of the sequence of normalized Fourier coefficients of the spatial process is shown to be a function of a two-parameter fractional Brownian sheet. This result is used to find the limiting distribution of the periodogram ordinate of the spatial process under the null hypothesis that alpha equals one and beta equals one. We then give the limiting distribution of the normalized Fourier coefficients of the spatial process for both a moving-average and an autoregressive error structure. Two cases of autoregressive errors are considered: the first error model is autoregressive in one component, and the second is autoregressive in both components. We show that the normalizing factor needed to ensure convergence in distribution of the sequence of Fourier coefficients differs in each of these three cases: the moving-average case and the two autoregressive cases. Finally, a specific case of the functional central limit theorem in the spatial setting is stated and proved, with the assumptions placed on the autocovariance functions. We then discuss some specific examples and provide a test statistic based on the periodogram ordinate. (See the formulas following this record.)
- Date Issued
- 2019
- Identifier
- CFE0007772, ucf:52385
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007772
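For concreteness, one common form of the first-order spatial autoregressive model studied in this setting can be written as below. The exact specification is an illustrative assumption, since the dissertation considers several error structures (long-memory, moving-average, and autoregressive).

```latex
% First-order (doubly geometric) spatial autoregressive model on an
% n x m lattice:
\[
  X_{i,j} \;=\; \alpha\, X_{i-1,j} \;+\; \beta\, X_{i,j-1}
            \;-\; \alpha\beta\, X_{i-1,j-1} \;+\; \epsilon_{i,j},
  \qquad 1 \le i \le n,\; 1 \le j \le m.
\]
% The unit-root case \(\alpha = \beta = 1\) is the one for which the
% normalized Fourier coefficients converge to a functional of a
% two-parameter fractional Brownian sheet when the errors
% \(\epsilon_{i,j}\) have long memory in both indices.
```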
- Title
- The Response of American Police Agencies to Digital Evidence.
- Creator
-
Yesilyurt, Hamdi, Wan, Thomas, Potter, Roberto, Applegate, Brandon, Lang, Sheau-Dong, University of Central Florida
- Abstract / Description
-
Little is known about the variation in digital forensics practice in the United States as adopted by large local police agencies. This study investigated how environmental constraints, contextual factors, organizational complexity, and organizational control relate to the adoption of digital forensics practice. This study integrated 3 theoretical perspectives in organizational studies to guide the analysis of these relations: institutional theory, contingency theory, and adoption-of-innovation theory. Institutional theory was used to analyze the impact of environmental constraints on the adoption of innovation, and contingency theory was used to examine the impacts of organizational control on the adoption of innovation. Adoption-of-innovation theory was employed to describe the degree to which digital forensics practice has been adopted by large municipal police agencies having 100 or more sworn police officers. The data set was assembled primarily by using Law Enforcement Management and Administrative Statistics (LEMAS) 2003 and 1999. Dr. Edward Maguire's survey was used to obtain 1 variable. Joining the data sets to construct the sample resulted in 345 large local police agencies. The descriptive results on the degree of adoption of digital forensics practice indicate that 37.7% of large local police agencies have dedicated personnel to address digital evidence, 32.8% of police agencies address digital evidence but do not have dedicated personnel, and only 24.3% of police agencies have a specialized unit with full-time personnel to address digital evidence. About 5% of local police agencies do nothing to address digital evidence in any circumstance. These descriptive statistics indicate that digital evidence is a matter of concern for most large local police agencies and that they respond to varying degrees to digital evidence at the organizational level. Agencies that have not adopted digital forensics practice are in the minority. A structural equation model was used to test the hypothesized relations, enabling rigorous analysis of relations between latent constructs and several indicator variables. Environmental constraints have the largest impact on the adoption of innovation, exerting a positive influence. No statistically significant relation was found between organizational control and adoption of digital forensics practice. Contextual factors (task scope and personnel size) positively influence the adoption of digital forensics. Structural control factors, including administrative weight and formalization, have no significant influence on the adoption of innovation. The conclusions of the study are as follows. Police agencies adopt digital forensics practice primarily in response to environmental constraints: agencies exposed to higher environmental constraints are more frequently expected to adopt digital forensics practice. Because organizational control of police agencies is not significantly related to digital forensics practice adoption, police agencies do not take their organizational control extensively into consideration when they consider adopting digital forensics practice. The positive influence of task scope and size on digital forensics practice adoption was expected; the extent of task scope and the number of personnel indicate a higher capacity for police agencies to adopt digital forensics practice. Administrative weight and formalization do not influence the adoption of digital forensics practice. Therefore, structural control and coordination are not important for large local police agencies to adopt digital forensics practice. The results of the study indicate that the adoption of digital forensics practice is based primarily on environmental constraints. Therefore, more drastic impacts on digital forensics practice should be expected from local police agencies' environments than from internal organizational factors. Researchers investigating the influence of various factors on the adoption of digital forensics practice should further examine environmental variables. The unexpected results concerning the impact of administrative weight and formalization should be researched with broader considerations.
- Date Issued
- 2011
- Identifier
- CFE0004181, ucf:49081
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004181
- Title
- Lattice-Valued T-Filters and Induced Structures.
- Creator
-
Reid, Frederick, Richardson, Gary, Brennan, Joseph, Han, Deguang, Lang, Sheau-Dong, University of Central Florida
- Abstract / Description
-
A complete lattice is called a frame provided meets distribute over arbitrary joins. The implication operation in this context plays a central role. Intuitively, it measures the degree to which one element is less than or equal to another. In this setting, a category is defined by equipping each set with a T-convergence structure which is defined in terms of T-filters. This category is shown to be topological, strongly Cartesian closed, and extensional. It is well known that the category of topological spaces and continuous maps is neither Cartesian closed nor extensional. Subcategories of compact and of complete spaces are investigated. It is shown that each T-convergence space has a compactification with the extension property provided the frame is a Boolean algebra. T-Cauchy spaces are defined and sufficient conditions for the existence of a completion are given. T-uniform limit spaces are also defined and their completions are given in terms of the T-Cauchy spaces they induce. Categorical properties of these subcategories are also investigated. Further, for a fixed T-convergence space, under suitable conditions, it is shown that there exists an order-preserving bijection between the set of all strict, regular, Hausdorff compactifications and the set of all totally bounded T-Cauchy spaces which induce the fixed space. (See the formulas following this record.)
- Date Issued
- 2019
- Identifier
- CFE0007520, ucf:52586
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007520
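To ground the opening definition in the record above, the frame condition and the induced implication can be stated compactly; this is standard lattice-theoretic background rather than material specific to the dissertation.

```latex
% A complete lattice L is a frame when finite meets distribute over
% arbitrary joins:
\[
  a \wedge \bigvee_{s \in S} s \;=\; \bigvee_{s \in S} (a \wedge s)
  \qquad \text{for all } a \in L,\; S \subseteq L.
\]
% The implication measuring the degree to which a lies below b is then
\[
  a \rightarrow b \;=\; \bigvee \{\, c \in L : a \wedge c \le b \,\}.
\]
```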
- Title
- Resource Allocation and Pricing in Secondary Dynamic Spectrum Access Networks.
- Creator
-
Khairullah, Enas, Chatterjee, Mainak, Zou, Changchun, Lang, Sheau-Dong, Catbas, Necati, University of Central Florida
- Abstract / Description
-
The paradigm shift from static spectrum allocation to a dynamic one has opened many challenges that need to be addressed for the true vision of Dynamic Spectrum Access (DSA) to materialize. This dissertation proposes novel solutions for spectrum allocation, routing, and scheduling in DSA networks. First, we propose an auction-based spectrum allocation scheme in a multi-channel environment where secondary users (SUs) bid to buy channels from primary users (PUs) based on the signal-to-interference-and-noise ratio (SINR). The channels are allocated such that i) the SUs get their preferred channels, ii) channels are re-used, and iii) there is no interference. Then, we propose a double-auction-based spectrum allocation technique that considers multiple bids from SUs and the heterogeneity of channels. We use virtual grouping of conflict-free buyers to transform multi-unit bids into single-unit bids. For routing, we propose a market-based model where the PUs determine the optimal price based on the demand for bandwidth by the SUs. Routes are determined through a series of price evaluations between message senders and forwarders. We also consider auction-based routing for two cases, in which buyers can bid either for only one channel or for a combination of non-substitutable channels. For centralized DSA, we propose two scheduling algorithms: the first focuses on maximizing throughput and the second on fairness. We extend the scheduling algorithms to the multi-channel environment. Expected throughput for every channel is computed by modelling channel state transitions, which occur at the frame/slot boundaries, using a discrete-time Markov chain and calculating the state transition probabilities. All proposed algorithms are validated using simulation experiments with different network settings, and their performance is studied. (See the sketch following this record.)
- Date Issued
- 2017
- Identifier
- CFE0006890, ucf:51723
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006890
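The per-channel throughput estimate from the scheduling work above can be illustrated with a two-state (idle/busy) discrete-time Markov chain. The transition probabilities and the rate figure below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Two-state channel: state 0 = idle (SU may transmit), state 1 = PU busy.
# Transitions are evaluated at frame/slot boundaries, as in the record above.
P = np.array([[0.9, 0.1],    # idle -> idle / busy
              [0.4, 0.6]])   # busy -> idle / busy

# Stationary distribution: the eigenvector of P^T for eigenvalue 1,
# normalized so its components sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

rate_when_idle = 2.0  # Mbps an SU achieves on a free slot (assumed)
expected_throughput = pi[0] * rate_when_idle
print(f"P(idle) = {pi[0]:.3f}, expected throughput = {expected_throughput:.3f} Mbps")
```

A centralized scheduler would repeat this estimate per channel and rank channels by expected throughput (or trade it against fairness), which is the role the Markov model plays in the scheduling algorithms above.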
- Title
- IMPROVED INTERNET SECURITY PROTOCOLS USING CRYPTOGRAPHIC ONE-WAY HASH CHAINS.
- Creator
-
Alabrah, Amerah, Bassiouni, Mostafa, Zou, Changchun, Lang, Sheau-Dong, Bai, Yuanli, University of Central Florida
- Abstract / Description
-
In this dissertation, new approaches that utilize one-way cryptographic hash functions in designing improved network security protocols are investigated. The proposed approaches are designed to be scalable and easy to implement in modern technology. The first contribution explores session cookies, with emphasis on the threat of session hijacking attacks resulting from session cookie theft or sniffing. In the proposed scheme, these cookies are replaced by easily computed authentication credentials using Lamport's well-known one-time passwords. The basic idea in this scheme revolves around utilizing sparse caching units, where authentication credentials pertaining to cookies are stored and fetched once needed, thereby mitigating the computational overhead generally associated with one-way hash constructions. The second and third proposed schemes rely on dividing the one-way hash construction into a hierarchical two-tier construction, where each tier component is responsible for some aspect of authentication generated by using two different hash functions. By utilizing different cryptographic hash functions arranged in two tiers, the hierarchical two-tier protocol (our second contribution) gives significant performance improvement over previously proposed solutions for securing Internet cookies. By indexing authentication credentials by their position within the hash chain in a multi-dimensional chain, the third contribution achieves improved performance. In the fourth proposed scheme, an attempt is made to apply the one-way hash construction to achieve user and broadcast authentication in wireless sensor networks. Due to known energy and memory constraints, the one-way hash scheme is modified to mitigate computational overhead so it can be easily applied in this particular setting. The fifth scheme reaps the benefits of both the sparse cache-supported scheme and the hierarchical scheme; the resulting hybrid approach achieves efficient performance at the lowest caching cost possible. In the sixth proposal, an authentication scheme tailored for the multi-server single sign-on (SSO) environment is presented. The scheme utilizes the one-way hash construction in a Merkle hash tree and a hash calendar to avoid impersonation and session hijacking attacks. The scheme also explores the optimal configuration of the one-way hash chain in this particular environment. All the proposed protocols are validated by extensive experimental analyses. These analyses are obtained by running simulations depicting the many envisioned scenarios. Additionally, these simulations are supported by relevant analytical models derived from mathematical formulas that take the environment under investigation into consideration. (See the sketch following this record.)
- Date Issued
- 2014
- Identifier
- CFE0005453, ucf:50392
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005453
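The core primitive behind the first contribution above, a Lamport-style one-way hash chain with sparse checkpoint caching, is easy to sketch. Below is a minimal illustration, not the dissertation's protocol: credentials are released in reverse chain order, the verifier hashes forward to its last accepted value, and cached checkpoints bound the client's recomputation cost. The chain length and checkpoint spacing are assumptions.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def hash_forward(value: bytes, times: int) -> bytes:
    for _ in range(times):
        value = h(value)
    return value

N, STEP = 1000, 100
seed = b"secret-seed"

# Client: walk the chain once, caching every STEP-th value so producing any
# later credential costs at most STEP - 1 hashes instead of up to N.
cache = {}
value = seed
for i in range(N):
    value = h(value)           # value == chain[i] == h^(i+1)(seed)
    if i % STEP == 0:
        cache[i] = value
anchor = value                 # chain[N-1], published to the server

def credential(i: int) -> bytes:
    """Client: rebuild chain[i] from the nearest checkpoint at or below i."""
    j = (i // STEP) * STEP
    return hash_forward(cache[j], i - j)

# Server state: the last accepted chain value and its index.
last_accepted, last_index = anchor, N - 1

def verify(cred: bytes, i: int) -> bool:
    """Server: hash the credential forward and compare to the last accepted value."""
    global last_accepted, last_index
    if i >= last_index:
        return False           # indices must strictly decrease (no replay)
    if hash_forward(cred, last_index - i) != last_accepted:
        return False
    last_accepted, last_index = cred, i
    return True

print(verify(credential(N - 2), N - 2))  # True: fresh credential accepted
print(verify(credential(N - 2), N - 2))  # False: replayed credential rejected
```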