Current Search: compression
- Title
- TRANSFORM BASED AND SEARCH AWARE TEXT COMPRESSION SCHEMES AND COMPRESSED DOMAIN TEXT RETRIEVAL.
- Creator
-
Zhang, Nan, Mukherjee, Amar, University of Central Florida
- Abstract / Description
-
In recent times, we have witnessed an unprecedented growth of textual information via the Internet, digital libraries, and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize, and transport this data from one point to another over data communication links with limited bandwidth. We must also have means to speedily find the information we need from this huge mass of data. Sometimes a single site may contain large collections of data, such as a library database, requiring an efficient search mechanism even within local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which preprocesses the text by adding user-defined metadata such as DTDs or hyperlinks to enable more efficient and effective searching. This increases the file size considerably, underscoring the importance of applying text compression. For efficiency (in terms of both space and time), there is a need to keep the data in compressed form as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it conserve storage space for archival and online data, it also helps system performance by requiring fewer secondary storage (disk or CD-ROM) accesses and improves network bandwidth utilization by reducing transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats such as .zip, .gz, and .Z are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are also useful for special classes of images, such as medical images, fingerprint data, astronomical images, and databases containing mostly vital numerical data, tables, and text. Many lossy algorithms use lossless methods in their final encoding stage, underscoring the importance of lossless methods for both lossy and lossless compression applications. To effectively utilize the full potential of compression techniques in future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search compressed text without decompression, or with only partial decompression, independent of whether the search is done on the text itself or on an inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions: (1) Star family compression algorithms: We have proposed an approach to develop a reversible transformation that can be applied to a source text to improve existing algorithms' ability to compress it. We use a static dictionary to convert English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to the original text, so we achieve some compression at the preprocessing stage. We have developed a series of transforms which improve the performance. The star transform requires a static dictionary of a certain size. To avoid the considerable complexity of conversion, we employ a ternary tree data structure that efficiently converts the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract useful context information in linear time from BWT-transformed text. The auxiliary arrays obtained from the BWT inverse transform yield logarithmic search time. Approximate pattern matching can then build on the results of exact pattern matching to extract candidates, and a fast verification algorithm is applied to those candidates, which may be just small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT-compressed text. A typical BWT-based compression system has Move-to-Front and Huffman coding stages after the transformation. We propose a novel approach to replace the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to Move-to-Front makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) Modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratios and/or time complexity, LZW was the first studied for compressed pattern matching because of its simplicity and efficiency. Modifications to the LZW algorithm provide the extra advantages of fast random access and partial decoding, which are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that search can be performed at the expected level of granularity; for example, a user can choose to retrieve a single line, a paragraph, or a file that contains the keywords. More importantly, we show that parallel encoding and decoding are trivial with the modified LZW: both can be performed easily with multiple processors, and the encoding and decoding processes are independent of the number of processors.
- Date Issued
- 2005
- Identifier
- CFE0000438, ucf:46396
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000438
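Contribution (2) above modifies the Move-to-Front stage that follows the BWT; the authors' modification is not spelled out in the abstract, but a minimal sketch of the standard Move-to-Front transform it replaces may help fix ideas (Python, illustrative only, not the dissertation's code):

```python
def mtf_encode(data: bytes) -> list[int]:
    """Standard Move-to-Front: emit each symbol's current rank, then move it to the front."""
    alphabet = list(range(256))
    ranks = []
    for b in data:
        r = alphabet.index(b)               # current position of the symbol
        ranks.append(r)
        alphabet.insert(0, alphabet.pop(r))  # promote the symbol to the front
    return ranks

def mtf_decode(ranks: list[int]) -> bytes:
    alphabet = list(range(256))
    out = bytearray()
    for r in ranks:
        b = alphabet.pop(r)
        out.append(b)
        alphabet.insert(0, b)
    return bytes(out)

# BWT output clusters identical symbols, so MTF produces many small ranks,
# which the subsequent entropy coder (e.g., Huffman) compresses well.
assert mtf_decode(mtf_encode(b"bbbaaacc")) == b"bbbaaacc"
```

Because each rank depends on the list state left by all earlier symbols, plain MTF must be decoded sequentially from the start; that is exactly the obstacle to random access that the proposed modification targets.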
- Title
- TRANSFORM BASED AND SEARCH AWARE TEXT COMPRESSION SCHEMES AND COMPRESSED DOMAIN TEXT RETRIEVAL.
- Creator
-
Zhang, Nan, Mukherjee, Amar, University of Central Florida
- Abstract / Description
-
In recent times, we have witnessed an unprecedented growth of textual information via the Internet, digital libraries, and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize, and transport this data from one point to another over data communication links with limited bandwidth. We must also have means to speedily find the information we need from this huge mass of data. Sometimes a single site may contain large collections of data, such as a library database, requiring an efficient search mechanism even within local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which preprocesses the text by adding user-defined metadata such as DTDs or hyperlinks to enable more efficient and effective searching. This increases the file size considerably, underscoring the importance of applying text compression. For efficiency (in terms of both space and time), there is a need to keep the data in compressed form as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it conserve storage space for archival and online data, it also helps system performance by requiring fewer secondary storage (disk or CD-ROM) accesses and improves network bandwidth utilization by reducing transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats such as .zip, .gz, and .Z are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are also useful for special classes of images, such as medical images, fingerprint data, astronomical images, and databases containing mostly vital numerical data, tables, and text. Many lossy algorithms use lossless methods in their final encoding stage, underscoring the importance of lossless methods for both lossy and lossless compression applications. To effectively utilize the full potential of compression techniques in future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search compressed text without decompression, or with only partial decompression, independent of whether the search is done on the text itself or on an inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions: (1) Star family compression algorithms: We have proposed an approach to develop a reversible transformation that can be applied to a source text to improve existing algorithms' ability to compress it. We use a static dictionary to convert English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to the original text, so we achieve some compression at the preprocessing stage. We have developed a series of transforms which improve the performance. The star transform requires a static dictionary of a certain size. To avoid the considerable complexity of conversion, we employ a ternary tree data structure that efficiently converts the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract useful context information in linear time from BWT-transformed text. The auxiliary arrays obtained from the BWT inverse transform yield logarithmic search time. Approximate pattern matching can then build on the results of exact pattern matching to extract candidates, and a fast verification algorithm is applied to those candidates, which may be just small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT-compressed text. A typical BWT-based compression system has Move-to-Front and Huffman coding stages after the transformation. We propose a novel approach to replace the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to Move-to-Front makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) Modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratios and/or time complexity, LZW was the first studied for compressed pattern matching because of its simplicity and efficiency. Modifications to the LZW algorithm provide the extra advantages of fast random access and partial decoding, which are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that search can be performed at the expected level of granularity; for example, a user can choose to retrieve a single line, a paragraph, or a file that contains the keywords. More importantly, we show that parallel encoding and decoding are trivial with the modified LZW: both can be performed easily with multiple processors, and the encoding and decoding processes are independent of the number of processors.
- Date Issued
- 2005
- Identifier
- CFE0000488, ucf:46358
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000488
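Contribution (3) builds on LZW; the random-access and partial-decoding modifications are the dissertation's own and are not reproduced here, but a minimal baseline LZW codec shows the dictionary mechanics being modified (Python, illustrative only):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Baseline LZW: grow a dictionary of seen phrases, emit the code of each longest match."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb                                # extend the current match
        else:
            codes.append(dictionary[w])
            dictionary[wb] = len(dictionary)      # new phrase gets the next code
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

def lzw_decode(codes: list[int]) -> bytes:
    inverse = {i: bytes([i]) for i in range(256)}
    w = inverse[codes[0]]
    out = bytearray(w)
    for k in codes[1:]:
        entry = inverse[k] if k in inverse else w + w[:1]   # KwKwK special case
        out += entry
        inverse[len(inverse)] = w + entry[:1]
        w = entry
    return bytes(out)

assert lzw_decode(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")) == b"TOBEORNOTTOBEORTOBEORNOT"
```

Note that decoding rebuilds the dictionary as it goes, so plain LZW, like plain MTF, is inherently sequential; the dissertation's modification is what lifts that restriction.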
- Title
- COMPARISON OF SPARSE CODING AND JPEG CODING SCHEMES FOR BLURRED RETINAL IMAGES.
- Creator
-
Chandrasekaran, Balaji, Wei, Lei, University of Central Florida
- Abstract / Description
-
Overcomplete representations are currently one of the most heavily researched areas in signal processing due to their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system is geared toward sparse and overcomplete representations: the primary visual cortex has overcomplete responses in representing an input signal, which leads to the use of sparse neuronal activity for further processing. This work investigates sparse coding with an overcomplete basis set, believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. It first analyzes the Sparse Code Learning algorithm, in which a given image is represented by a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions; the algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part of the work analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image; an iterative inhibition algorithm based on competition between neighboring transform coefficients then selects a subset of Gabor functions so as to represent the given image with a sparse set of coefficients. This work applies the developed models to image compression and tests the achievable levels of compression. Research in this area so far shows that sparse coding algorithms are inefficient at representing sharp, high-frequency image features, so this work analyzes the performance of these algorithms only on natural images without sharp features and compares the compression results with current industry-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye, applies the developed algorithms to these images, and tests the compression results.
- Date Issued
- 2007
- Identifier
- CFE0001701, ucf:47328
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001701
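The inhibition-based coefficient selection studied above is specific to the thesis, but the underlying operation, greedily representing a signal with a few atoms from an overcomplete dictionary, can be sketched with plain matching pursuit (NumPy; the random dictionary here stands in for the learned or Gabor basis and is purely illustrative):

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedily pick unit-norm dictionary atoms (columns of D) to sparsely represent x."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual            # correlation of every atom with the residual
        k = np.argmax(np.abs(corr))      # most active atom wins the competition
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]    # inhibit: remove that atom's contribution
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))       # overcomplete: 256 atoms for 64-dim signals
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 100]      # signal built from two atoms
c, r = matching_pursuit(x, D, 10)
print(np.nonzero(np.abs(c) > 1e-3)[0], np.linalg.norm(r))  # dominant coefficients at atoms 3 and 100
```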
- Title
- DESIGN OF SEA WATER HEAT EXCHANGER FOR MINIATURE VAPOR COMPRESSION CYCLE.
- Creator
-
Hughes, James, Chow, Louis, University of Central Florida
- Abstract / Description
-
Recent advances in the development of miniature vapor compression cycle components have created unique opportunities for heating and cooling applications, specifically for human physiological requirements that arise in extreme environments. Diving in very cold water, between 1.7 and 5°C, requires active heating because passive thermal insulation has proven inadequate for long durations. To maintain diver mobility and cognitive performance, it is desirable to provide 250 to 300 W of heat from an untethered power source. The use of a miniature vapor compression cycle reduces the amount of power (batteries or fuel cell) that the diver must carry by 2.5 times relative to a standard resistive heater. This study develops the compact evaporator used to extract heat from the sea water in order to provide heat to the diver. Its performance is calculated by applying traditional single-phase and two-phase heat transfer correlations using numerical methods. Fabrication methods were investigated and a prototype was manufactured, and a test stand was developed to fully characterize the evaporator at various conditions. The evaporator was then evaluated at the conditions of interest. Test results suggest the applied correlations overpredict performance by up to 20%. The evaporator tested meets the performance specifications and design criteria and is ready for system integration.
- Date Issued
- 2009
- Identifier
- CFE0002917, ucf:48016
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002917
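The claimed 2.5x reduction in carried power follows directly from the heating coefficient of performance of a vapor compression cycle; a back-of-envelope check (the COP value is inferred from the stated ratio, not quoted from the thesis):

```python
# Resistive heating delivers 1 W of heat per 1 W of electrical input (COP = 1).
# A vapor compression heat pump with heating COP ~ 2.5 delivers 2.5 W per 1 W.
Q_heat = 300.0                   # W of heat required by the diver (from the abstract)
COP = 2.5                        # assumed heating coefficient of performance
P_resistive = Q_heat / 1.0       # 300 W of carried power
P_heat_pump = Q_heat / COP       # 120 W of carried power
print(P_resistive, P_heat_pump)
```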
- Title
- Compressible Turbulent Flame Speed of Highly Turbulent Standing Flames.
- Creator
-
Sosa, Jonathan, Ahmed, Kareem, Kassab, Alain, Kapat, Jayanta, University of Central Florida
- Abstract / Description
-
This work presents the first measurement of turbulent burning velocities of a highly turbulent compressible standing flame induced by shock-driven turbulence in a Turbulent Shock Tube. High-speed schlieren, chemiluminescence, PIV, and dynamic pressure measurements are made to quantify flame-turbulence interaction at high levels of turbulence and elevated temperatures and pressures. Distributions of turbulent velocities, vorticity, and turbulent strain are provided for regions ahead of and behind the standing flame. The turbulent flame speed is directly measured for the high-Mach standing turbulent flame. From measurements of the turbulent flame speed and turbulent Mach number, transition into a non-linear compressibility regime at turbulent Mach numbers above 0.4 is confirmed, and a possible mechanism for flame-generated turbulence and deflagration-to-detonation transition is established.
- Date Issued
- 2018
- Identifier
- CFE0007102, ucf:51955
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007102
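The 0.4 threshold above refers to the turbulent Mach number, the ratio of the turbulent velocity fluctuation to the local speed of sound; a quick illustrative calculation (all numbers assumed, not the paper's data):

```python
import math

# Turbulent Mach number: Ma_t = u' / a, with a = sqrt(gamma * R * T).
gamma, R = 1.4, 287.0           # air, J/(kg K)
T = 900.0                       # K, an assumed elevated post-shock temperature
a = math.sqrt(gamma * R * T)    # ~601 m/s local speed of sound
u_rms = 250.0                   # m/s, an assumed turbulent velocity fluctuation
print(u_rms / a)                # ~0.42, inside the reported nonlinear regime (> 0.4)
```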
- Title
- IMPULSE FORMULATIONS OF THE EULER EQUATIONS FOR INCOMPRESSIBLE AND COMPRESSIBLE FLUIDS.
- Creator
-
Pareja, Victor, Shivamoggi, Bhimsen, University of Central Florida
- Abstract / Description
-
The purpose of this paper is to consider the impulse formulations of the Euler equations for incompressible and compressible fluids. Different gauges are considered. In particular, the Kuz'min gauge provides an interesting case, as it allows the fluid impulse velocity to describe the evolution of material surface elements. This result affords interesting physical interpretations of the Kuz'min invariant. Some exact solutions in the impulse formulation are studied. Finally, generalizations to compressible fluids are considered as an extension of these results. The arrangement of the paper is as follows: in the first chapter we give a brief explanation of the importance of the study of fluid impulse. In chapters two and three we derive the Kuz'min, E & Liu, Maddocks & Pego, and zero gauges for the evolution equation of the impulse density, as well as their properties; the first three of these gauges are named after their authors. Chapter four studies two exact solutions in the impulse formulation. Physical interpretations are examined in chapter five. In chapter six, we begin the generalization to the compressible case for the Kuz'min gauge, based on Shivamoggi et al. (2007), and derive similar results for the remaining gauges. In chapter seven we examine physical interpretations for the compressible case.
- Date Issued
- 2007
- Identifier
- CFE0001907, ucf:47492
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001907
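For orientation, the impulse formulation referred to above conventionally splits the velocity into an impulse density and a gauge gradient; a standard statement for incompressible flow (textbook background on the Kuz'min gauge, not the paper's own derivation) is:

```latex
% Velocity split into an impulse density q and a gauge potential phi:
\mathbf{u} = \mathbf{q} + \nabla\phi , \qquad \nabla\cdot\mathbf{u} = 0 .
% In the Kuz'min gauge, q evolves under incompressible Euler flow as
\frac{\partial \mathbf{q}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\,\mathbf{q}
  + (\nabla\mathbf{u})^{\mathsf{T}}\,\mathbf{q} = \mathbf{0} .
```

In this gauge the impulse density transports like a cotangent vector, which is what lets it track material surface elements, consistent with the property quoted in the abstract; the choice of the scalar potential distinguishes the Kuz'min, E & Liu, Maddocks & Pego, and zero gauges.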
- Title
- Applications of Compressive Sensing To Surveillance Problems.
- Creator
-
Huff, Christopher, Mohapatra, Ram, Sun, Qiyu, Han, Deguang, University of Central Florida
- Abstract / Description
-
In many surveillance scenarios, one concern that arises is how to construct an imager capable of capturing the scene with high fidelity. This can be problematic for two reasons: first, the optics and electronics in the camera may have difficulty dealing with so much information; second, bandwidth constraints may make it difficult to transmit information from the imager to the user efficiently for reconstruction or realization. In this thesis, we discuss a mathematical framework capable of sidestepping these two issues. The framework is rooted in a technique commonly referred to as compressive sensing. We explore two of the seminal works in compressive sensing and present the key theorems and definitions from these two papers. We then survey three different surveillance scenarios and their respective compressive sensing solutions. The original contribution of this thesis is the development of a distributed compressive sensing model.
- Date Issued
- 2012
- Identifier
- CFE0004317, ucf:49473
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004317
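As a concrete illustration of the framework surveyed above, recovering a sparse signal x from few random linear measurements y = Phi x, here is a minimal iterative soft-thresholding (ISTA) sketch; the dimensions and regularization weight are arbitrary choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                     # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                               # m << n compressed measurements

# ISTA: minimize 0.5*||y - Phi z||^2 + lam*||z||_1
lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2           # Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(3000):
    z -= Phi.T @ (Phi @ z - y) / L                          # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print(np.linalg.norm(z - x) / np.linalg.norm(x))  # small relative error
```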
- Title
- COMPRESSED PATTERN MATCHING FOR TEXT AND IMAGES.
- Creator
-
Tao, Tao, Mukherjee, Amar, University of Central Florida
- Abstract / Description
-
The amount of information that we deal with today is being generated at an ever-increasing rate. On one hand, data compression is needed to efficiently store and organize the data and to transport it over limited-bandwidth networks. On the other hand, efficient information retrieval is needed to speedily find the relevant information in this huge mass of data using available resources. The compressed pattern matching problem can be stated as: given the compressed format of a text or an image and a pattern string or a pattern image, report the occurrence(s) of the pattern in the text or image with minimal (or no) decompression. The main advantages of compressed pattern matching over the naïve decompress-then-search approach are: first, reduced storage cost, since no decompression (or only minimal decompression) is needed, reducing both disk space and memory cost; second, less search time, since the compressed data is smaller than the original, so a search performed on it completes sooner. The challenge of efficient compressed pattern matching can be met from two inseparable directions. First, to effectively utilize the full potential of compression for information retrieval systems, there is a need to develop search-aware compression algorithms. Second, for data compressed with a particular technique, whether search-aware or not, we need efficient searching techniques. This means that techniques must be developed to search the compressed data with no or minimal decompression and without too much extra cost. Compressed pattern matching algorithms can be categorized as targeting either text compression or image compression. Although compressed pattern matching for text compression has been studied for a few years and many publications are available in the literature, there is still room to improve efficiency in terms of both compression and searching; none of the search engines available today make explicit use of compressed pattern matching. Compressed pattern matching for image compression, on the other hand, has been relatively unexplored. It is getting more attention, however, because lossless compression has become more important for the ever-increasing volume of medical images, satellite images, and aerospace photos that must be losslessly stored; developing efficient information retrieval techniques for losslessly compressed data is therefore a fundamental research challenge. In this dissertation, we have studied the compressed pattern matching problem for both text and images. We present a series of novel compressed pattern matching algorithms, divided into two major parts: the first addresses the popular LZW compression algorithm; the second addresses the current lossless image compression standard, JPEG-LS. Specifically, our contributions from the first major work are: 1. We have developed an "almost-optimal" compressed pattern matching algorithm that reports all pattern occurrences. An earlier "almost-optimal" algorithm reported in the literature is only capable of detecting the first occurrence of the pattern, and its practical performance is not clear. We have implemented our algorithm and provide extensive experimental results measuring its speed. We also developed a faster implementation for so-called "simple patterns", that is, patterns in which no symbol appears more than once; the algorithm takes advantage of this property and runs in optimal time. 2. We have developed a novel compressed pattern matching algorithm for multiple patterns using the Aho-Corasick algorithm. The algorithm takes O(mt+n+r) time with O(mt) extra space, where n is the size of the compressed file, m is the total size of all patterns, t is the size of the LZW trie, and r is the number of occurrences of the patterns. The algorithm is particularly efficient when applied to archival search if the archives are compressed with a common LZW trie. All the above algorithms have been implemented, and extensive experiments have been conducted to test their performance and compare them with the best existing algorithms. The experimental results show that our compressed pattern matching algorithm for multiple patterns is competitive among the best algorithms and is practically the fastest among all approaches when the number of patterns is not very large; it is therefore preferable for general string matching applications. LZW is one of the most efficient and popular compression algorithms in extensive use, and both of our algorithms require no modification to the compression algorithm itself; our work therefore has great economic and market potential. Our contributions from the second major work are: 1. We have developed a new global-context variation of the JPEG-LS compression algorithm and a corresponding compressed pattern matching algorithm. Compared to the original JPEG-LS, the global-context variation is search-aware and has faster encoding and decoding speeds. The searching algorithm based on this variation requires partial decompression of the compressed image; experimental results show that it improves search speed by about 30% compared to the decompress-then-search approach. To the best of our knowledge, this is the first two-dimensional compressed pattern matching work for the JPEG-LS standard. 2. We have developed a two-pass variation of the JPEG-LS algorithm and a corresponding compressed pattern matching algorithm. The two-pass variation achieves search-awareness through a common compression technique, the semi-static dictionary. It compresses as well as the original algorithm, though encoding takes slightly longer. The searching algorithm based on the two-pass variation requires no decompression at all and therefore works in the fully compressed domain. It runs in time O(nc+mc+nm+m^2) with extra space O(n+m+mc), where n is the number of columns of the image, m is the number of rows and columns of the pattern, nc is the compressed image size, and mc is the compressed pattern size. This is the first known two-dimensional algorithm that works in the fully compressed domain.
- Date Issued
- 2005
- Identifier
- CFE0000471, ucf:46366
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000471
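The multiple-pattern algorithm above couples the Aho-Corasick automaton with the LZW trie; that coupling is the dissertation's contribution and is not reproduced here, but a minimal plain-text Aho-Corasick sketch shows the automaton it starts from (Python, illustrative only):

```python
from collections import deque

def build_automaton(patterns):
    """Build the Aho-Corasick goto trie, failure links, and output sets."""
    trie, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in trie[s]:
                trie.append({}); fail.append(0); out.append(set())
                trie[s][ch] = len(trie) - 1
            s = trie[s][ch]
        out[s].add(p)
    q = deque(trie[0].values())          # depth-1 states fail to the root
    while q:
        r = q.popleft()
        for ch, s in trie[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in trie[f]:
                f = fail[f]
            fail[s] = trie[f].get(ch, 0) if trie[f].get(ch, 0) != s else 0
            out[s] |= out[fail[s]]       # inherit matches ending at the fallback state
    return trie, fail, out

def search(text, patterns):
    trie, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in trie[s]:
            s = fail[s]                  # follow failure links on a mismatch
        s = trie[s].get(ch, 0)
        for p in sorted(out[s]):
            hits.append((i - len(p) + 1, p))
    return hits

print(search("ushers", ["he", "she", "his", "hers"]))
# [(2, 'he'), (1, 'she'), (2, 'hers')]
```

The compressed-domain version processes LZW codes rather than characters, advancing the automaton by whole dictionary phrases; that is what yields the O(mt+n+r) bound quoted above.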
- Title
- A Localized Blended RBF Collocation Method for Effective Shock Capturing.
- Creator
-
Harris, Michael, Kassab, Alain, Moslehy, Faissal, Divo, Eduardo, Chopra, Manoj, University of Central Florida
- Abstract / Description
-
Solving partial differential equations (PDEs) can require numerical methods, especially for non-linear problems and complex geometry. Common numerical methods used today are the finite difference method (FDM), the finite element method (FEM), and the finite volume method (FVM). These methods require a mesh or grid before a solution is attempted; developing the mesh can require expensive preprocessing time, and the quality of the mesh can have major effects on the solution. In recent years, meshless methods have become a research interest due to the simplicity of using scattered data points. Many types of meshless methods exist, stemming from the spectral or pseudo-spectral methods, but the focus of this research is a meshless method based on radial basis function (RBF) interpolation. RBF interpolation is a class of meshless method that can be used to solve partial differential equations. Radial basis functions are attractive because they are capable of multivariate interpolation over scattered data, even for data with discontinuities, and RBF interpolation can achieve spectral accuracy and exponential convergence. For infinitely smooth radial basis functions such as the Hardy multiquadric and inverse multiquadric, the RBF depends on a shape parameter that must be chosen properly to obtain accurate approximations, and the optimum shape parameter can vary with the smoothness of the field. Typically, the shape parameter is chosen to be a large value, rendering the RBF flat and yielding a high-condition-number interpolation matrix. This strategy works well for smooth data and has been shown to produce excellent results for problems in heat transfer and incompressible fluid dynamics, but flat RBFs and high-condition-number matrices tend to fail in the presence of steep gradients and shocks. There, a small shape parameter, rendering the RBF steep and the condition number of the interpolation matrix small, should be used instead. This work demonstrates a method to capture steep gradients and shocks using a blended RBF approach. The method switches between flat and steep RBF interpolation depending on the smoothness of the data: flat (high-condition-number) RBF interpolation is used in smooth regions, maintaining high accuracy, while steep (low-condition-number) RBF interpolation provides stability near steep gradients and shocks. The method is demonstrated on several numerical experiments, including the 1-D advection equation, the 2-D advection equation, the Burgers equation, the 2-D inviscid compressible Euler equations, and the Navier-Stokes equations.
- Date Issued
- 2018
- Identifier
- CFE0007332, ucf:52108
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007332
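The blending rule itself is the thesis contribution, but the flat-versus-steep trade-off it exploits is easy to demonstrate with one-dimensional Hardy multiquadric interpolation of a shock-like profile (NumPy; node counts and shape parameters below are arbitrary choices):

```python
import numpy as np

def mq(r, c):
    """Hardy multiquadric basis; larger c makes the RBF flatter and the system worse conditioned."""
    return np.sqrt(r**2 + c**2)

def rbf_fit_eval(x_data, y_data, x_eval, c):
    A = mq(np.abs(x_data[:, None] - x_data[None, :]), c)   # interpolation matrix
    w = np.linalg.solve(A, y_data)                          # expansion weights
    B = mq(np.abs(x_eval[:, None] - x_data[None, :]), c)
    return B @ w, np.linalg.cond(A)

x = np.linspace(-1, 1, 41)
y = np.tanh(25 * x)                      # steep, shock-like profile
xe = np.linspace(-1, 1, 400)
for c in (2.0, 0.05):                    # flat versus steep shape parameter
    ye, cond = rbf_fit_eval(x, y, xe, c)
    err = np.max(np.abs(ye - np.tanh(25 * xe)))
    print(f"c={c}: cond(A)={cond:.2e}, max |err|={err:.3f}")
```

The flat case shows the enormous condition number the abstract mentions; a blended scheme would use it only where the field is smooth and fall back to the steep, well-conditioned case near the front.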
- Title
- Spatial and Temporal Compressive Sensing for Vibration-based Monitoring: Fundamental Studies with Beam Vibrations.
- Creator
-
Ganesan, Vaahini, Das, Tuhin, Kauffman, Jeffrey L., Raghavan, Seetha, University of Central Florida
- Abstract / Description
-
Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high data volume and require powering sensors for prolonged durations; adequate spatial resolution typically involves instrumenting structures with a large array of sensors. This research shows that applying Compressive Sensing (CS) can significantly reduce both the volume of data and the number of sensors in vibration monitoring applications. Random sampling and the inherent sparsity of vibration signals in the frequency domain enable this reduction. Additionally, by exploiting the sparsity of mode shapes, CS can enable efficient spatial reconstruction using fewer spatially distributed sensors than a traditional approach. CS can thereby reduce the cost and power requirements of sensing and streamline data storage and processing in monitoring applications; in well-instrumented structures, it can also sustain continuous monitoring in case of sensor or computational failures. The scope of this research was to establish CS as a viable method for SHM, with application to beam vibrations. Finite-element-based simulations demonstrated CS-based frequency recovery from the free vibration response of simply supported, fixed-fixed, and cantilever beams. Specifically, CS was used to detect shifts in natural frequencies due to structural change using considerably less data than required by traditional sampling. Experimental results using a cantilever beam provided further insight: the impulse response of the beam was used to recover natural frequencies of vibration with CS, and it was shown that CS could discern changes in natural frequencies under modified beam parameters. When the basis functions were modified to accommodate the effect of damping, the performance of CS-based recovery further improved. The effect of noise on CS-based frequency recovery was also studied; in addition to incorporating damping, formulating noise handling as part of the CS algorithm for beam vibrations enabled detecting frequency shifts from even fewer samples. In the spatial domain, CS was primarily developed for image processing applications, where the signals and basis functions are very different from those required for mechanical beam vibrations. This mandated a reformulation of the CS problem to handle the related challenges and enable reconstruction of the spatial beam response from very few sensor readings. Specifically, this research addresses CS-based reconstruction of the deflection shape of beams with fixed boundary conditions. The presence of a fixed end makes hyperbolic terms indispensable in the basis, which in turn causes numerical inconsistencies. Two approaches are discussed to mitigate this problem: the first restricts the hyperbolic terms in the basis to lower frequencies to ensure well-conditioning; the second, more systematic approach generates an augmented basis function that combines harmonic and hyperbolic terms, so that at higher frequencies the combined hyperbolic terms limit each other's magnitude and ensure boundedness. This research thus lays the foundation for formulating the CS problem for the field of mechanical vibrations. It presents fundamental studies and discusses open challenges in applying CS to this field that will pave the way for further research.
- Date Issued
- 2017
- Identifier
- CFE0007120, ucf:51954
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007120
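A minimal version of the temporal-CS idea above, randomly sampled vibration data that is sparse in the frequency domain, can be sketched with orthogonal matching pursuit over a dictionary of harmonics (NumPy; signal frequencies, sample counts, and the dictionary are invented for illustration and are not the dissertation's formulation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 512, 64
t = np.arange(n) / n
x = np.sin(2*np.pi*37*t) + 0.5*np.sin(2*np.pi*111*t)   # two modal frequencies
keep = np.sort(rng.choice(n, m, replace=False))         # random sampling instants
y = x[keep]                                             # compressed measurements

# Dictionary of candidate harmonics evaluated at the sampled instants.
freqs = np.arange(1, 200)
D = np.hstack([np.sin(2*np.pi*freqs*t[keep, None]),
               np.cos(2*np.pi*freqs*t[keep, None])])
D /= np.linalg.norm(D, axis=0)

# Orthogonal matching pursuit: pick atoms greedily, re-fit by least squares.
support, r = [], y.copy()
for _ in range(4):
    support.append(int(np.argmax(np.abs(D.T @ r))))
    w, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    r = y - D[:, support] @ w

recovered = freqs[np.array(support) % len(freqs)]
print(sorted(set(recovered)))   # the true modal frequencies 37 and 111 appear among the selections
```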
- Title
- ATTAINABLE COMPRESSIVE STRENGTH OF PERVIOUS CONCRETE PAVING SYSTEMS.
- Creator
-
Mulligan, Ann Marie, Chopra, Manoj, University of Central Florida
- Abstract / Description
-
The pervious concrete system and its corresponding strength are as important as its permeability characteristics. The strength of the system relies not only on the compressive strength of the pervious concrete but also on the strength of the soil beneath it for support. Previous studies indicate that pervious concrete has lower compressive strength capabilities than conventional concrete and will only support light traffic loadings. This thesis investigated prior studies on the compressive strength of pervious concrete as it relates to water-cement ratio, aggregate-cement ratio, aggregate size, and compaction, and compared those results with results obtained in laboratory experiments conducted on pervious concrete cylinders created for this purpose. The loadings and types of vehicles these systems can withstand are also examined, as well as the design of appropriate thickness levels for the pavement. Since voids are supposed to reduce the strength of concrete 1% for every 5% voids (Klieger, 2003), the goal is to find a balance between water, aggregate, and cement in order to increase strength and permeability, two characteristics which tend to counteract one another. This study also determines appropriate traffic loads and volumes under which pervious concrete is able to maintain its structural integrity. The end result of this research is a recommendation as to the water-cement ratio, aggregate-cement ratio, aggregate size, and compaction necessary to maximize compressive strength without detrimental effects on the permeability of the pervious concrete system. This research confirms that pervious concrete does in fact provide lower compressive strength than conventional concrete; compressive strengths in acceptable mixtures reached only 1700 psi. Extremely high permeability rates were achieved in almost all mixtures regardless of compressive strength. Analysis of traffic loadings reinforces the fact that pervious concrete cannot be subjected to large numbers of heavy vehicle loadings over time, although it could sustain low volumes of heavy loads if designed properly. Calculations of pavement thickness indicate that the required thickness depends on the compressive strength of the concrete, the quality of the subgrade beneath the pavement, and vehicle volumes and loadings.
- Date Issued
- 2005
- Identifier
- CFE0000634, ucf:46539
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000634
- Title
- Fast Compressed Automatic Target Recognition for a Compressive Infrared Imager.
- Creator
-
Millikan, Brian, Foroosh, Hassan, Rahnavard, Nazanin, Muise, Robert, Atia, George, Mahalanobis, Abhijit, Sun, Qiyu, University of Central Florida
- Abstract / Description
-
Many military systems utilize infrared sensors which allow an operator to see targets at night. Several of these are either mid-wave or long-wave high-resolution infrared sensors, which are expensive to manufacture. But compressive sensing, which has primarily been demonstrated in medical applications, can be used to minimize the number of measurements needed to represent a high-resolution image. Using these techniques, a relatively low-cost mid-wave infrared sensor can be realized which has a high effective resolution. In traditional military infrared sensing applications, such as targeting systems, automatic target recognition algorithms are employed to locate and identify targets of interest and reduce the burden on the operator. The resolution of the sensor can increase the accuracy and operational range of a targeting system. When using a compressive sensing infrared sensor, traditional decompression techniques can be applied to form a spatial-domain infrared image, but most are iterative and not ideal for real-time environments. A more efficient method is to adapt the target recognition algorithms to operate directly on the compressed samples. In this work, we present a target recognition algorithm which utilizes a compressed target detection method to identify potential target areas, followed by a specialized target recognition technique that operates directly on the same compressed samples. We demonstrate our method on the U.S. Army Night Vision and Electronic Sensors Directorate ATR Algorithm Development Image Database, which has been made available by the Sensing Information Analysis Center.
- Date Issued
- 2018
- Identifier
- CFE0007408, ucf:52739
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007408
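The key enabler for recognition without reconstruction is that random compressive measurements approximately preserve correlations; a toy demonstration of matching a template directly in the compressed domain (NumPy; dimensions and templates are invented and unrelated to the NVESD database mentioned above):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4096, 400                        # scene patch size, compressed dimension
target = rng.standard_normal(n)         # stand-in for a vectorized target template
scene = 0.8 * target + 0.3 * rng.standard_normal(n)   # patch containing the target
clutter = rng.standard_normal(n)                       # patch without it

Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # compressive measurement matrix

# Johnson-Lindenstrauss-style preservation: <Phi a, Phi b> ~ <a, b>, so templates
# can be correlated against compressed samples without forming an image first.
def score(patch):
    return (Phi @ patch) @ (Phi @ target) / n

print(score(scene), score(clutter), (scene @ target) / n)  # compressed scores track the true one
```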
- Title
- The Development of Soil Compressibility Prediction Models and Application to Site Settlement.
- Creator
-
Kirts, Scott, Nam, Boo Hyun, Chopra, Manoj, Sallam, Amr, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
The magnitude of the overall settlement depends on several variables, such as the Compression Index, Cc, and the Recompression Index, Cr, which are determined by a consolidation test; however, the test is time consuming and labor intensive. Correlations have been developed to approximate these compressibility indexes. In this study, a data-driven approach has been employed to estimate Cc and Cr. Support Vector Machines classification is used to determine the number of distinct models to be developed. The statistical models are built through a forward-selection stepwise regression procedure. Ten variables were used: the moisture content (w), initial void ratio (e0), dry unit weight (γdry), wet unit weight (γwet), automatic hammer SPT blow count (N), overburden stress (σ), fines content (-200), liquid limit (LL), plasticity index (PI), and specific gravity (Gs). The results confirm the need for separate models for three out of four soil types, these being Coarse Grained, Fine Grained, and Organic Peat, with the models for each classification having varying degrees of accuracy. The correlations were tested through a series of field tests, settlement analyses, and comparison to measured settlement taken in close proximity. The first analysis incorporates the developed correlations for Cr, and the second utilizes measured Cc and Cr for each soil layer. The results indicate that settlement predictions applying a rule of thumb equating Cc to Cr, accounting for elastic settlement, and using a conventional influence zone of settlement compare more favorably to measured settlement than predictions using the measured compressibility indexes. The accuracy of settlement predictions is contingent on a thorough field investigation.
- Date Issued
- 2018
- Identifier
- CFE0007208, ucf:52284
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007208
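As a sketch of the model-building procedure described above, forward stepwise selection over the ten candidate predictors, the following uses synthetic data purely to show the mechanics; the seeded relationship (a Terzaghi-style Cc = 0.009(LL - 10)) is a textbook correlation, not the dissertation's fitted model:

```python
import numpy as np

def sse(A, y):
    """Residual sum of squares of an ordinary least-squares fit with intercept."""
    A1 = np.hstack([np.ones((len(y), 1)), A])
    beta, *_ = np.linalg.lstsq(A1, y, rcond=None)
    r = y - A1 @ beta
    return float(r @ r)

def forward_stepwise(X, y, names, max_terms=3):
    """Greedy forward selection: repeatedly add the predictor that most reduces SSE."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(max_terms):
        best = min(remaining, key=lambda j: sse(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return [names[j] for j in chosen]

rng = np.random.default_rng(4)
names = ["w", "e0", "gamma_dry", "gamma_wet", "N", "sigma", "fines", "LL", "PI", "Gs"]
X = rng.uniform(0, 1, (120, len(names)))
X[:, 7] = rng.uniform(20, 80, 120)                 # give LL a plausible range
y = 0.009 * (X[:, 7] - 10) + 0.02 * rng.standard_normal(120)
print(forward_stepwise(X, y, names))               # LL enters the model first
```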
- Title
- Sparse signal recovery under sensing and physical hardware constraints.
- Creator
-
Mardaninajafabadi, Davood, Atia, George, Mikhael, Wasfy, Vosoughi, Azadeh, Rahnavard, Nazanin, Abouraddy, Ayman, University of Central Florida
- Abstract / Description
-
This dissertation focuses on information recovery under two general types of sensing constraints and hardware limitations that arise in practical data acquisition systems. We study the effects of these practical limitations in the context of signal recovery from interferometric measurements, such as for optical mode analysis. The first constraint stems from the limited number of degrees of freedom of an information-gathering system, which gives rise to highly constrained sensing structures. In contrast to prior work on compressive signal recovery, which relies for the most part on introducing additional hardware components to emulate randomization, we establish performance guarantees for successful signal recovery from a reduced number of measurements even with the constrained interferometer structure, obviating the need for non-native components. We also propose control policies to guide the collection of informative measurements given prior knowledge about the constrained sensing structure, and we devise a sequential implementation with a stopping rule, shown to reduce the sample complexity for a target reconstruction performance. The second limitation considered is due to physical hardware constraints, such as the finite spatial resolution of the components used and their finite aperture size. Such limitations introduce non-linearities in the underlying measurement model. We first develop a more accurate measurement model with structured noise representing a known non-linear function of the input signal, obtained by leveraging side information about the sampling structure. We then devise iterative denoising algorithms, shown to enhance the quality of sparse recovery in the presence of physical constraints by iteratively estimating the non-linear term and eliminating it from the measurements. We also develop a class of clipping-cognizant reconstruction algorithms for modal reconstruction from interferometric measurements that compensate for clipping effects due to the finite aperture size of the components used, and we show that they yield significant gains over schemes oblivious to such effects.
- Date Issued
- 2019
- Identifier
- CFE0007675, ucf:52467
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007675
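The iterative denoising idea above can be caricatured in a few lines: treat the measurements as y = Ax + e(x), with e a known signal-dependent term, and alternate sparse estimation with subtracting the re-estimated term (NumPy; the quadratic form of e and all dimensions are invented for illustration and do not come from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 128, 64
x = np.zeros(n)
x[rng.choice(n, 6, replace=False)] = rng.standard_normal(6)
A = rng.standard_normal((m, n)) / np.sqrt(m)

def e(z):
    # Assumed structured nonlinearity, known up to the unknown signal z.
    return 0.05 * (A @ z) ** 2

y = A @ x + e(x)                 # measurements corrupted by the signal-dependent term

def ista(b, iters=2000, lam=0.005):
    """Sparse recovery of z from b ~ A z via iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2
    z = np.zeros(n)
    for _ in range(iters):
        z -= A.T @ (A @ z - b) / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return z

xh = ista(y)                     # ignore the nonlinearity at first
for _ in range(5):
    xh = ista(y - e(xh))         # re-estimate the nonlinear term and remove it
print(np.linalg.norm(xh - x) / np.linalg.norm(x))
```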
- Title
- Dynamic Behavior and Performance of Different Types of Multi-Effect Desalination Plants.
- Creator
-
Abdelkareem, Mohamed, Chow, Louis, Mansy, Hansen, Das, Tuhin, Duranceau, Steven, University of Central Florida
- Abstract / Description
-
Water and energy are two of the most vital resources for socio-economic development and the sustenance of humanity on earth. Desalination of seawater has been practiced for some decades and is a well-established means of water supply. However, the process consumes large amounts of energy, and the global energy supply faces its own challenges. In this research, multi-effect desalination (MED) was selected for its lower cost, lower operating temperature, and efficiency in terms of primary energy and electricity consumption compared to other thermal desalination systems. The motivation for this research is to address the thermo-economics and dynamic behavior of different MED feed configurations with and without vapor compression (VC). A new formulation of the steady-state models was developed to simulate different MED systems. Adding a thermal vapor compressor (TVC) or a mechanical vapor compression (MVC) unit to the MED system is also studied to show the advantage of this type of integration. For MED-TVC systems, results indicate that the parallel cross feed (PCF) configuration has better performance characteristics than the other configurations. A similar study of MED-MVC systems indicates that the PCF and forward feed (FF) configurations require less work to achieve equal distillate production. Reducing the steam temperature supplied by the MVC unit increases second-law efficiency and decreases the specific power consumption (SPC) and total water price. Because an MED plant may be exposed to fluctuations (disturbances) in input parameters during operation, there is also a need to analyze its transient behavior. In the current study, the dynamic model is developed by solving the basic conservation equations of mass, energy, and salt. In the case of a heat source disturbance, MED plants operating in the backward feed (BF) configuration may be forced to shut down due to flooding in the first effect. For all applied disturbances, the change in brine level is the slowest compared to the changes in vapor temperature and in brine and vapor flow rates. For MED-TVC, it is recommended to limit any reduction in seawater cooling flow rate to under 12% of the steady-state value to avoid dryout in the evaporators; a reduction in the motive steam flow rate of more than 20%, or in the cooling seawater temperature of more than 35%, of the steady-state values may lead to flooding in the evaporators and plant shutdown. Simultaneous combinations of two different disturbances with opposing effects have only a modest effect on plant operation, and they can be used to control and mitigate the flooding or drying effects caused by the disturbances. For MED-MVC, a reduction in compressor work could lead to plant shutdown, while a reduction in seawater temperature leads to reduced plant production and increased SPC.
- Date Issued
- 2019
- Identifier
- CFE0007423, ucf:52735
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007423
- Title
- MECHANICAL PROPERTIES OF THE SKELETON OF ACROPORA CERVICORNIS.
- Creator
-
Masa, Bridget, Orlovskaya, Nina, University of Central Florida
- Abstract / Description
-
This research explores the instantaneous mechanical behavior of the skeleton of the critically endangered staghorn coral Acropora cervicornis. Both bleached and sanded skeletons were used in this experiment. Raman spectroscopy showed no significant change in the Raman shift among the three branches tested; the shifts were nearly identical to the Raman shifts of calcium carbonate. Vickers hardness testing found that sample 1 Bleached had an average hardness of 3.44 GPa with a standard deviation of 0.12 GPa. The sanded sample had a similar value of 3.54 GPa with a standard deviation of 0.13 GPa. Samples from 2 Bleached had a significantly lower hardness of only 2.68 GPa with a standard deviation of 0.37 GPa. The axial compressive stress test determined that the average strength was 18.98 MPa for the bleached samples and 29.16 MPa for the sanded samples. This information can be used to assist in the restoration of this species.
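For context, the hardness figures above follow from the standard Vickers indentation relation, which converts the applied load and the measured indent diagonal into a pressure; this is the generic test formula, not a result specific to this thesis:

```latex
% Vickers hardness from indentation load $F$ (N) and mean indent
% diagonal $d$ (m); with SI units the result is in Pa (divide by
% $10^9$ for GPa). The 1.8544 factor is $2\sin(136^\circ/2)$ from
% the pyramidal indenter geometry.
HV = \frac{1.8544\,F}{d^{2}}
```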
- Date Issued
- 2018
- Identifier
- CFH2000396, ucf:45852
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000396
- Title
- Failure Analysis of Impact-Damaged Metallic Poles Repaired With Fiber Reinforced Polymer Composites.
- Creator
-
Slade, Robert, Mackie, Kevin, Yun, Hae-Bum, Gou, Jihua, University of Central Florida
- Abstract / Description
-
Metallic utility poles, light poles, and mast arms are intermittently damaged by vehicle collision. In many cases the vehicular impact does not cause immediate failure of the structure, but it induces localized damage that may result in failure under extreme service loadings or can promote degradation and corrosion within the damaged region. Replacement of these poles is costly and often involves prolonged lane closures, service interruption, and temporary loss of functionality. Therefore, an in situ repair of these structures is required. This thesis examines the failure modes of damaged metallic poles reinforced with externally bonded fiber reinforced polymer (FRP) composites. Several FRP repair systems were selected for comparison, and a set of medium- and full-scale tests was conducted to identify the critical failure modes. The material properties of each component of the repair were experimentally determined and then combined into a numerical model capable of predicting global response. Four possible failure modes are discussed: yielding of the unreinforced substrate, tensile rupture of the FRP, compressive buckling of the FRP, and debonding of the FRP from the substrate. It was found that simple linear, bilinear, and trilinear stress-strain relationships accurately describe the response of the composite and substrate components, whereas a more complex bond-slip relationship is required to characterize debonding. These constitutive properties were then incorporated into MSC.Marc, a versatile nonlinear finite element program. The output of the FEM analysis showed good agreement with the results of the experimental bond-slip tests.
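The abstract names the constitutive laws but not their parameters. A minimal sketch of the kind of bilinear (elastic plus linear hardening) stress-strain relationship it refers to, with illustrative steel-like values rather than the thesis's measured ones, could look like:

```python
def bilinear_stress(strain, E=200e9, yield_stress=350e6, hardening=0.02):
    """Bilinear stress-strain law: elastic slope E up to the yield
    point, then a reduced post-yield slope of (hardening * E).
    All parameter values are illustrative, not from the thesis."""
    yield_strain = yield_stress / E
    s = abs(strain)
    if s <= yield_strain:
        stress = E * s                                   # elastic branch
    else:
        stress = yield_stress + hardening * E * (s - yield_strain)  # hardening branch
    return stress if strain >= 0 else -stress

# Example: stress carried by a steel-like substrate at 0.5% strain.
print(bilinear_stress(0.005) / 1e6, "MPa")   # ~363 MPa
```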
- Date Issued
- 2012
- Identifier
- CFE0004262, ucf:49514
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004262
- Title
- LABELED SAMPLING CONSENSUS: A NOVEL ALGORITHM FOR ROBUSTLY FITTING MULTIPLE STRUCTURES USING COMPRESSED SAMPLING.
- Creator
-
Messina, Carl, Foroosh, Hassan, University of Central Florida
- Abstract / Description
-
The ability to robustly fit structures in datasets that contain outliers is a very important task in image processing, pattern recognition, and computer vision. Random Sampling Consensus, or RANSAC, is a very popular method for this task due to its ability to handle over 50% outliers. The problem with RANSAC is that it is only capable of finding a single structure. Therefore, if a dataset contains multiple structures, they must be found sequentially by finding the best fit, removing the points, and repeating the process. However, removing incorrect points from the dataset could prove disastrous. This thesis offers a novel approach to sampling consensus that extends its ability to discover multiple structures in a single iteration through the dataset. The process introduced is an unsupervised method, requiring no prior knowledge of the distribution of the input data. It uniquely assigns labels to different instances of similar structures; the algorithm is thus called Labeled Sampling Consensus, or L-SAC. These unique instances tend to cluster around one another, allowing the individual structures to be extracted using simple clustering techniques. Since divisions rather than modes are analyzed, only a single instance of a structure need be recovered. This ability of L-SAC enables a novel sampling procedure that "compresses" the number of samples required compared to traditional sampling schemes while ensuring all structures have been found. L-SAC is a flexible framework that can be applied to many problem domains.
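The abstract does not detail L-SAC itself, but it builds on the classical single-structure RANSAC loop. Below is a minimal sketch of that baseline for 2D line fitting (all parameters illustrative); L-SAC's contribution is to label hypotheses so that several such structures can survive a single pass instead of being found one at a time:

```python
import numpy as np

def ransac_line(points, n_iters=500, inlier_tol=0.05, rng=None):
    """Classical RANSAC for a single 2D line: repeatedly fit a line to a
    random minimal sample (2 points) and keep the hypothesis with the
    largest consensus set."""
    rng = rng or np.random.default_rng()
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        d = q - p
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p and q.
        normal = np.array([-d[1], d[0]]) / norm
        dist = np.abs((points - p) @ normal)
        inliers = np.flatnonzero(dist < inlier_tol)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (p, q)
    return best_model, best_inliers

# Toy dataset: 100 points on a line plus 60 gross outliers.
rng = np.random.default_rng(1)
xs = rng.uniform(0, 1, 100)
line_pts = np.c_[xs, 0.7 * xs + 0.1]
noise = rng.uniform(0, 1, (60, 2))
model, inliers = ransac_line(np.vstack([line_pts, noise]), rng=rng)
print(len(inliers))   # roughly 100 inliers recovered despite ~38% outliers
```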
- Date Issued
- 2011
- Identifier
- CFE0003893, ucf:48727
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003893
- Title
- Compressive Sensing and Recovery of Structured Sparse Signals.
- Creator
-
Shahrasbi, Behzad, Rahnavard, Nazanin, Vosoughi, Azadeh, Wei, Lei, Atia, George, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
In recent years, numerous disciplines including telecommunications, medical imaging, computational biology, and neuroscience have benefited from increasing applications of high-dimensional datasets. This calls for efficient ways of capturing and processing data. Compressive sensing (CS), which was introduced as an efficient sampling (data capturing) method, addresses this need. It is well known that signals belonging to an ambient high-dimensional space often have much smaller dimensionality in an appropriate domain. CS taps into this principle and dramatically reduces the number of samples that must be captured to avoid any distortion in the information content of the data. This reduction in the required number of samples enables many new applications that were previously infeasible using classical sampling techniques. Most CS-based approaches take advantage of the inherent low dimensionality in many datasets. They try to determine a sparse representation of the data in an appropriately chosen basis, using only a few significant elements. These approaches make no extra assumptions regarding possible relationships among the significant elements of that basis. In this dissertation, different ways of incorporating knowledge about such relationships are integrated into the data sampling and processing schemes. We first consider the recovery of temporally correlated sparse signals and show that, by using the time correlation model, the recovery performance can be significantly improved. Next, we modify the sampling process of sparse signals to incorporate the signal structure in a more efficient way. In the image processing application, we show that exploiting the structure information in both signal sampling and signal recovery improves the efficiency of the algorithm. In addition, we show that region-of-interest information can be included in the CS sampling and recovery steps to provide much better quality for the region-of-interest area compared to the rest of the image or video. In spectrum sensing applications, CS can dramatically improve the sensing efficiency by facilitating coordination among spectrum sensors. A cluster-based spectrum sensing scheme with coordination among spectrum sensors is proposed for geographically dispersed cognitive radio networks. Further, CS has been exploited in this problem for simultaneous sensing and localization. Having access to this information dramatically facilitates the implementation of advanced communication technologies as required by 5G communication networks.
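The structured-recovery algorithms summarized above are not spelled out in the abstract. For orientation, a plain CS baseline, random Gaussian measurements recovered with greedy orthogonal matching pursuit (the kind of structure-agnostic scheme the dissertation improves on), can be sketched as follows; the dimensions and the choice of OMP are assumptions for illustration:

```python
import numpy as np

def omp(Phi, y, k):
    """Recover a k-sparse signal from measurements y = Phi @ x via
    orthogonal matching pursuit: greedily grow the support, then
    least-squares-fit the coefficients on that support."""
    m, n = Phi.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                                 # ambient dim, measurements, sparsity
Phi = rng.standard_normal((m, n)) / np.sqrt(m)       # random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x                                          # 80 samples instead of 256
print(np.linalg.norm(x - omp(Phi, y, k)))            # near-zero reconstruction error
```

Structured approaches like those in the dissertation add side information (temporal correlation, spatial structure, regions of interest) to both the sensing matrix design and the recovery step, rather than treating the support as arbitrary as this baseline does.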
- Date Issued
- 2015
- Identifier
- CFE0006392, ucf:51509
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006392
- Title
- MAC LAYER AND ROUTING PROTOCOLS FOR WIRELESS AD HOC NETWORKS WITH ASYMMETRIC LINKS AND PERFORMANCE EVALUATION STUDIES.
- Creator
-
Wang, Guoqiang, Marinescu, Dan, University of Central Florida
- Abstract / Description
-
In a heterogeneous mobile ad hoc network (MANET), assorted devices with different computation and communication capabilities co-exist. In this thesis, we consider the case when the nodes of a MANET have various degrees of mobility and range, and the communication links are asymmetric. Many routing protocols for ad hoc networks routinely assume that all communication links are symmetric, i.e., that if node A can hear node B, then node B can also hear node A. Most current MAC layer protocols are unable to exploit the asymmetric links present in a network, leading to inefficient overall bandwidth utilization or, in the worst case, to a lack of connectivity. To exploit asymmetric links, the protocols must deal with the asymmetry of the path from a source node to a destination node, which affects the delivery of the original packets, the paths taken by acknowledgments, or both. Furthermore, the problem of hidden nodes requires a more careful analysis in the case of asymmetric links. MAC layer and routing protocols for ad hoc networks with asymmetric links require a rigorous performance analysis. Analytical models are usually unable to provide even approximate answers to questions about end-to-end delay, packet loss ratio, throughput, etc. Traditional simulation techniques for large-scale wireless networks require vast amounts of storage and computing cycles rarely available on single computing systems. In our search for an effective way to study the performance of wireless networks, we investigate time-parallel simulation. Time-parallel simulation has received significant attention in the past; its advantages, as well as its theoretical and practical limitations, have been extensively researched for many applications where the complexity of the models severely limits the applicability of analytical studies and makes traditional simulation techniques unfeasible. Our goal is to study the behavior of large systems consisting of possibly thousands of nodes over extended periods of time and to obtain results efficiently, and time-parallel simulation enables us to achieve this objective. We conclude that MAC layer and routing protocols capable of using asymmetric links are more complex than traditional ones, but can improve connectivity and provide better performance. We are confident that the approximate results for various performance metrics of wireless networks obtained using time-parallel simulation are sufficiently accurate and provide the necessary insight into the inner workings of the protocols.
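To make the symmetry assumption concrete (this toy example is illustrative, not from the thesis): once links are modeled as directed edges, a forward data path can exist even when no hop offers the reverse link that hop-by-hop acknowledgments expect:

```python
# Hypothetical three-node topology with one-way links:
# A can reach B, B can reach C, C can reach only A.
links = {
    "A": {"B"},
    "B": {"C"},
    "C": {"A"},
}

def is_symmetric(u, v):
    """A link is symmetric only when u can hear v AND v can hear u."""
    return v in links.get(u, set()) and u in links.get(v, set())

for u, v in [("A", "B"), ("B", "C")]:
    print(u, "<->", v, "symmetric?", is_symmetric(u, v))   # both False

# Every hop on the directed data path A -> B -> C is asymmetric: a MAC
# or routing protocol that requires link-layer ACKs on the reverse hop
# would declare these links broken, even though data can be delivered
# and ACKs could still return over the longer directed path C -> A -> B.
```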
- Date Issued
- 2007
- Identifier
- CFE0001736, ucf:47302
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001736