Current Search: Patterning
- Title
- Infrared Tapered Slot Antennas Coupled to Tunnel Diodes.
- Creator
-
Florence, Louis, Boreman, Glenn, Likamwa, Patrick, Schoenfeld, Winston, Lail, Brian, University of Central Florida
- Abstract / Description
-
Tapered slot antennas (TSAs) have seen considerable application in the millimeter-wave portion of the spectrum. Desirable characteristics of TSAs include symmetric E- and H-plane antenna patterns, and broad non-resonant bandwidths. We investigate extension of TSA operation toward higher frequencies in the thermal infrared (IR), using a metal-oxide-metal diode as the detector. Several different infrared TSA design forms are fabricated using electron-beam lithography and specially developed thin-film processes. The angular antenna patterns of TSA-coupled diodes are measured at 10.6 micrometer wavelength in both E- and H-planes, and are compared to results of finite-element electromagnetic modeling using Ansoft HFSS. Parameter studies are carried out, correlating the geometric and material properties of several TSA design forms to numerical-model results and to measurements. A significant increase in antenna gain is noted for a dielectric-overcoat design. The traveling-wave behavior of the IR TSA structure is investigated using scattering near-field microscopy. The measured near-field data is compared to HFSS results. Suggestions for future research are included.
- Date Issued
- 2012
- Identifier
- CFE0004376, ucf:49395
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004376
- Title
- SELF DESIGNING PATTERN RECOGNITION SYSTEM EMPLOYING MULTISTAGE CLASSIFICATION.
- Creator
-
ABDELWAHAB, MANAL MAHMOUD, Mikhael, Wasfy, University of Central Florida
- Abstract / Description
-
Recently, pattern recognition/classification has received considerable attention in diverse engineering fields such as biomedical imaging, speaker identification, fingerprint recognition, etc. In most of these applications, it is desirable to maintain the classification accuracy in the presence of corrupted and/or incomplete data. The quality of a given classification technique is measured by the computational complexity, execution time of algorithms, and the number of patterns that can be classified correctly despite any distortion. Some classification techniques that are introduced in the literature are described in Chapter one. In this dissertation, a pattern recognition approach that can be designed to have evolutionary learning by developing the features and selecting the criteria that are best suited for the recognition problem under consideration is proposed. Chapter two presents some of the features used in developing the set of criteria employed by the system to recognize different types of signals. It also presents some of the preprocessing techniques used by the system. The system operates in two modes, namely, the learning (training) mode and the running mode. In the learning mode, the original and preprocessed signals are projected into different transform domains. The technique automatically tests many criteria over the range of parameters for each criterion. A large number of criteria are developed from the features extracted from these domains. The optimum set of criteria, satisfying specific conditions, is selected. This set of criteria is employed by the system to recognize the original or noisy signals in the running mode. The modes of operation and the classification structures employed by the system are described in detail in Chapter three. The proposed pattern recognition system is capable of recognizing an enormously large number of patterns by virtue of the fact that it analyzes the signal in different domains and explores the distinguishing characteristics in each of these domains. In other words, this approach uses available information and extracts more characteristics from the signals, for classification purposes, by projecting the signal into different domains. Some experimental results are given in Chapter four showing the effect of using mathematical transforms in conjunction with preprocessing techniques on the classification accuracy. A comparison between some of the classification approaches, in terms of classification rate in the case of distortion, is also given. A sample of experimental implementations is presented in Chapters five and six to illustrate the performance of the proposed pattern recognition system. Preliminary results given confirm the superior performance of the proposed technique relative to the single-transform neural network and multi-input neural network approaches for image classification in the presence of additive noise.
- Date Issued
- 2004
- Identifier
- CFE0000020, ucf:46077
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000020
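The transform-domain, criterion-selection recognizer summarized in the abstract above can be illustrated with a minimal sketch. Everything below is assumed for illustration: synthetic sinusoid classes stand in for the signals, FFT magnitude and DCT stand in for the transform domains, and simple midpoint-threshold tests with a top-15 cut stand in for the criteria development and selection; none of it reproduces the dissertation's actual system.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)

def domains(sig):
    """Project a signal into two transform domains (FFT magnitude, DCT)."""
    return np.concatenate([np.abs(np.fft.rfft(sig)), dct(sig, norm="ortho")])

# Two synthetic signal classes: low- vs. high-frequency sinusoids plus noise.
t = np.linspace(0, 1, 128, endpoint=False)
def make(freq, n):
    return [np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
            for _ in range(n)]

train = [(s, 0) for s in make(3, 30)] + [(s, 1) for s in make(12, 30)]
test = [(s, 0) for s in make(3, 10)] + [(s, 1) for s in make(12, 10)]
X = np.array([domains(s) for s, _ in train])
y = np.array([c for _, c in train])

# Each candidate criterion thresholds one transform coefficient at the midpoint
# between the two class means; the most accurate criteria form the selected set.
mid = (X[y == 0].mean(0) + X[y == 1].mean(0)) / 2.0
polarity = np.sign(X[y == 1].mean(0) - X[y == 0].mean(0))
votes_train = (polarity * (X - mid) > 0).astype(int)
accuracy = (votes_train == y[:, None]).mean(0)
selected = np.argsort(accuracy)[-15:]          # "optimum" criteria set (here: top 15)

correct = 0
for sig, label in test:                        # running mode: majority vote
    f = domains(sig)
    votes = (polarity[selected] * (f[selected] - mid[selected]) > 0).astype(int)
    correct += int(int(votes.mean() >= 0.5) == label)
print(f"majority-vote accuracy over the selected criteria: {correct / len(test):.2f}")
```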
- Title
- DETECTING CURVED OBJECTS AGAINST CLUTTERED BACKGROUNDS.
- Creator
-
Prokaj, Jan, Lobo, Niels, University of Central Florida
- Abstract / Description
-
Detecting curved objects against cluttered backgrounds is a hard problem in computer vision. We present new low-level and mid-level features to function in these environments. The low-level features are fast to compute, because they employ an integral image approach, which makes them especially useful in real-time applications. The mid-level features are built from low-level features, and are optimized for curved object detection. The usefulness of these features is tested by designing an object detection algorithm using these features. Object detection is accomplished by transforming the mid-level features into weak classifiers, which then produce a strong classifier using AdaBoost. The resulting strong classifier is then tested on the problem of detecting heads with shoulders. On a database of over 500 images of people, cropped to contain head and shoulders, and with a diverse set of backgrounds, the detection rate is 90% while the false positive rate on a database of 500 negative images is less than 2%.
- Date Issued
- 2008
- Identifier
- CFE0002102, ucf:47535
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002102
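The weak-to-strong classifier construction named in the abstract above (threshold weak classifiers combined by AdaBoost) is sketched below on synthetic feature vectors. The data, the 30 boosting rounds, and the quantile threshold grid are illustrative assumptions; the thesis's actual weak classifiers come from its integral-image curve features.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 400, 8
X = rng.standard_normal((n, d))
y = np.where(X[:, 0] + 0.7 * X[:, 3] + 0.3 * rng.standard_normal(n) > 0, 1, -1)

def best_stump(X, y, w):
    """Weak learner: the (feature, threshold, polarity) with least weighted error."""
    best = (None, None, None, np.inf)
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.05, 0.95, 19)):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

w = np.full(n, 1.0 / n)
ensemble = []                       # list of (alpha, feature, threshold, polarity)
for _ in range(30):                 # 30 boosting rounds (arbitrary)
    j, thr, pol, err = best_stump(X, y, w)
    err = max(err, 1e-12)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
    w *= np.exp(-alpha * y * pred)  # up-weight the misclassified examples
    w /= w.sum()
    ensemble.append((alpha, j, thr, pol))

def strong(X):
    """Strong classifier: sign of the alpha-weighted vote of all stumps."""
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1) for a, j, t, p in ensemble)
    return np.where(score > 0, 1, -1)

print(f"training accuracy of the boosted strong classifier: {(strong(X) == y).mean():.2f}")
```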
- Title
- EXPERIMENTAL AND NUMERICAL INVESTIGATIONS OF MICRODROPLET EVAPORATION WITH A FORCED PINNED CONTACT LINE.
- Creator
-
Gleason, Kevin, Putnam, Shawn, University of Central Florida
- Abstract / Description
-
Experimental and numerical investigations of water microdroplet evaporation on heated, laser-patterned polymer substrates are reported. The study is focused on (1) validating numerical models with experimental data, (2) identifying how changes in the contact line influence evaporative heat transfer, and (3) determining methods of controlling contact line dynamics during evaporation. Droplets are formed using a bottom-up methodology, where a computer-controlled syringe pump supplies water to a ~200 um diameter fluid channel within the heated substrate. This methodology facilitates precise control of the droplet's growth rate, size, and inlet temperature. In addition to this microchannel supply line, the substrate surfaces are laser patterned with a moat-like trench around the fluid-channel outlet, adding additional control of the droplet's contact line motion, area, and contact angle. In comparison to evaporation on non-patterned substrate surfaces, this method increases the contact line pinning time by ~60% of the droplet's lifetime. The evaporation rates are compared to the predictions of a commonly reported model based on a solution of the Laplace equation, providing the local evaporation flux along the droplet's liquid-vapor interface. The model consistently overpredicts the evaporation rate, which is presumably due to the model's constant saturated vapor concentration along the droplet's liquid-vapor interface. As a result, a modified version of the model is implemented to account for variations in temperature along the liquid-vapor interface. A vapor concentration distribution is then imposed using this temperature distribution, increasing the accuracy of predicting the evaporation rate by ~7.7% and ~9.9% for heated polymer substrates at Ts = 50°C and 65°C, respectively.
- Date Issued
- 2014
- Identifier
- CFH0004566, ucf:45212
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004566
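As a rough companion to the modeling discussion in the abstract above, the sketch below evaluates a commonly cited diffusion-limited evaporation-rate expression for a pinned sessile droplet and shows how the prediction shifts when the saturation vapor concentration is evaluated at a cooler interface temperature rather than at the substrate temperature. The Antoine-type vapor-pressure fit, the 40% humidity, 60° contact angle, and the assumed 3 °C of interface cooling are textbook or placeholder values, not the dissertation's calibrated model.

```python
import math

def p_sat_water(T_c):
    """Saturation vapour pressure of water [Pa], Antoine-type fit (~1-100 C)."""
    p_mmHg = 10 ** (8.07131 - 1730.63 / (233.426 + T_c))
    return p_mmHg * 133.322

def c_sat_water(T_c):
    """Saturation vapour concentration [kg/m^3] from the ideal-gas law."""
    M, Rgas = 0.018015, 8.314
    return p_sat_water(T_c) * M / (Rgas * (T_c + 273.15))

def evap_rate(Rc, theta_deg, T_interface_c, RH=0.40, D=2.6e-5):
    """dm/dt [kg/s] ~ pi*Rc*D*(1-RH)*c_sat*(1.3 + 0.27*theta^2), theta < 90 deg."""
    theta = math.radians(theta_deg)
    f = 1.3 + 0.27 * theta ** 2           # commonly used angular-factor fit
    return math.pi * Rc * D * (1 - RH) * c_sat_water(T_interface_c) * f

Rc, theta = 100e-6, 60.0                  # ~200 um diameter pinned contact line
for T_s in (50.0, 65.0):
    cool = T_s - 3.0                      # assumed evaporative interface cooling
    print(f"Ts = {T_s:.0f} C: "
          f"isothermal {evap_rate(Rc, theta, T_s)*1e9:.2f} ug/s, "
          f"cooled interface {evap_rate(Rc, theta, cool)*1e9:.2f} ug/s")
```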
- Title
- COMPRESSED PATTERN MATCHING FOR TEXT AND IMAGES.
- Creator
-
Tao, Tao, Mukherjee, Amar, University of Central Florida
- Abstract / Description
-
The amount of information that we are dealing with today is being generated at an ever-increasing rate. On one hand, data compression is needed to efficiently store and organize the data and to transport the data over limited-bandwidth networks. On the other hand, efficient information retrieval is needed to speedily find the relevant information from this huge mass of data using available resources. The compressed pattern matching problem can be stated as: given the compressed format of a text or an image and a pattern string or a pattern image, report the occurrence(s) of the pattern in the text or image with minimal (or no) decompression. The main advantages of compressed pattern matching versus the naïve decompress-then-search approach are: First, reduced storage cost. Since there is no need to decompress the data, or only minimal decompression is required, the disk space and memory cost are reduced. Second, less search time. Since the size of the compressed data is smaller than that of the original data, a search performed on the compressed data will result in a shorter search time. The challenge of efficient compressed pattern matching can be met from two inseparable aspects: First, to effectively utilize the full potential of compression for information retrieval systems, there is a need to develop search-aware compression algorithms. Second, for data that is compressed using a particular compression technique, regardless of whether the compression is search-aware or not, we need to develop efficient searching techniques. This means that techniques must be developed to search the compressed data with no or minimal decompression and without too much extra cost. Compressed pattern matching algorithms can be categorized as either for text compression or for image compression. Although compressed pattern matching for text compression has been studied for a few years and many publications are available in the literature, there is still room to improve the efficiency in terms of both compression and searching. None of the search engines available today make explicit use of compressed pattern matching. Compressed pattern matching for image compression, on the other hand, has been relatively unexplored. However, it is getting more attention because lossless compression has become more important for the ever-increasing large amount of medical images, satellite images and aerospace photos, which require the data to be losslessly stored. Developing efficient information retrieval techniques for losslessly compressed data is therefore a fundamental research challenge. In this dissertation, we have studied the compressed pattern matching problem for both text and images. We present a series of novel compressed pattern matching algorithms, which are divided into two major parts. The first major work is done for the popular LZW compression algorithm. The second major work is done for the current lossless image compression standard JPEG-LS. Specifically, our contributions from the first major work are: 1. We have developed an "almost-optimal" compressed pattern matching algorithm that reports all pattern occurrences. An earlier "almost-optimal" algorithm reported in the literature is only capable of detecting the first occurrence of the pattern, and the practical performance of that algorithm is not clear. We have implemented our algorithm and provide extensive experimental results measuring its speed. We also developed a faster implementation for so-called "simple patterns", patterns in which no symbol appears more than once; the algorithm takes advantage of this property and runs in optimal time. 2. We have developed a novel compressed pattern matching algorithm for multiple patterns using the Aho-Corasick algorithm. The algorithm takes O(mt+n+r) time with O(mt) extra space, where n is the size of the compressed file, m is the total size of all patterns, t is the size of the LZW trie and r is the number of occurrences of the patterns. The algorithm is particularly efficient when applied to archival search if the archives are compressed with a common LZW trie. All the above algorithms have been implemented and extensive experiments have been conducted to test the performance of our algorithms and to compare with the best existing algorithms. The experimental results show that our compressed pattern matching algorithm for multiple patterns is competitive among the best algorithms and is practically the fastest among all approaches when the number of patterns is not very large. Therefore, our algorithm is preferable for general string matching applications. LZW is one of the most efficient and popular compression algorithms in extensive use, and both of our algorithms require no modification of the compression algorithm. Our work, therefore, has great economic and market potential. Our contributions from the second major work are: 1. We have developed a new global-context variation of the JPEG-LS compression algorithm and the corresponding compressed pattern matching algorithm. Compared to the original JPEG-LS, the global-context variation is search-aware and has faster encoding and decoding speeds. The searching algorithm based on the global-context variation requires partial decompression of the compressed image. The experimental results show that it improves the search speed by about 30% compared to the decompress-then-search approach. To the best of our knowledge, this is the first two-dimensional compressed pattern matching work for the JPEG-LS standard. 2. We have developed a two-pass variation of the JPEG-LS algorithm and the corresponding compressed pattern matching algorithm. The two-pass variation achieves search-awareness through a common compression technique called a semi-static dictionary. Compared to the original algorithm, the new algorithm compresses equally well, but the encoding takes slightly longer. The searching algorithm based on the two-pass variation requires no decompression at all and therefore works in the fully compressed domain. It runs in time O(nc+mc+nm+m^2) with extra space O(n+m+mc), where n is the number of columns of the image, m is the number of rows and columns of the pattern, nc is the compressed image size and mc is the compressed pattern size. The algorithm is the first known two-dimensional algorithm that works in the fully compressed domain.
- Date Issued
- 2005
- Identifier
- CFE0000471, ucf:46366
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000471
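The multi-pattern LZW search described in the abstract above can be approximated by a much simpler sketch: rebuild the LZW dictionary code by code, expand each phrase once, and stream its characters through an Aho-Corasick automaton, carrying the automaton state across phrase boundaries. This is a stand-in for the dissertation's almost-optimal trie-based algorithms, not a reimplementation; the example text and patterns are arbitrary.

```python
from collections import deque

def lzw_compress(text):
    """Textbook LZW over byte-sized symbols; returns a list of integer codes."""
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in text:
        if w + ch in table:
            w += ch
        else:
            out.append(table[w])
            table[w + ch] = len(table)
            w = ch
    if w:
        out.append(table[w])
    return out

class AhoCorasick:
    """Multi-pattern string automaton (goto / fail / output construction)."""
    def __init__(self, patterns):
        self.goto, self.fail, self.out = [{}], [0], [set()]
        for p in patterns:                      # build the keyword trie
            s = 0
            for ch in p:
                if ch not in self.goto[s]:
                    self.goto.append({}); self.fail.append(0); self.out.append(set())
                    self.goto[s][ch] = len(self.goto) - 1
                s = self.goto[s][ch]
            self.out[s].add(p)
        q = deque(self.goto[0].values())        # depth-1 nodes fail to the root
        while q:                                # BFS sets the remaining fail links
            r = q.popleft()
            for ch, s in self.goto[r].items():
                q.append(s)
                f = self.fail[r]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[s] = self.goto[f].get(ch, 0)
                self.out[s] |= self.out[self.fail[s]]

    def step(self, state, ch):
        while state and ch not in self.goto[state]:
            state = self.fail[state]
        return self.goto[state].get(ch, 0)

def search_lzw(codes, patterns):
    """Report (start, pattern) matches while expanding each LZW phrase once."""
    ac = AhoCorasick(patterns)
    phrases = {i: chr(i) for i in range(256)}   # mirror of the encoder dictionary
    state, pos, hits, prev = 0, 0, [], None
    for code in codes:
        entry = phrases[code] if code in phrases else prev + prev[0]
        if prev is not None:
            phrases[len(phrases)] = prev + entry[0]
        for ch in entry:                        # stream characters, keep automaton state
            state = ac.step(state, ch)
            pos += 1
            for p in ac.out[state]:
                hits.append((pos - len(p), p))
        prev = entry
    return hits

text = "abracadabra " * 3 + "pattern matching in the compressed domain"
print(search_lzw(lzw_compress(text), ["abra", "pattern", "domain"]))
```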
- Title
- TRANSFORM BASED AND SEARCH AWARE TEXT COMPRESSION SCHEMES AND COMPRESSED DOMAIN TEXT RETRIEVAL.
- Creator
-
Zhang, Nan, Mukherjee, Amar, University of Central Florida
- Abstract / Description
-
In recent times, we have witnessed an unprecedented growth of textual information via the Internet, digital libraries and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize and transport this data from one point to the other on data communications links with limited bandwidth. We must also have means to speedily find the information we need from this huge mass of data. Sometimes, a single site may also contain large collections of data such as a library database, thereby requiring an efficient search mechanism even to search within the local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which preprocesses the text by adding user-defined metadata such as DTDs or hyperlinks to enable searching with better efficiency and effectiveness. This increases the file size considerably, underscoring the importance of applying text compression. On account of efficiency (in terms of both space and time), there is a need to keep the data in compressed form for as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it help conserve the storage space for archival and online data, it also helps system performance by requiring fewer secondary storage (disk or CD-ROM) accesses and improves network transmission bandwidth utilization by reducing the transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats like .zip, .gz, and .Z files are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are useful in special classes of images such as medical imaging, fingerprint data, astronomical images and databases containing mostly vital numerical data, tables and text information. Many lossy algorithms use lossless methods at the final stage of encoding, underscoring the importance of lossless methods for both lossy and lossless compression applications. In order to effectively utilize the full potential of compression techniques for future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search the compressed text without decompression or with only partial decompression, independent of whether the search is done on the text or on some inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions: (1) Star family compression algorithms: We have proposed an approach to develop a reversible transformation that can be applied to a source text and that improves the ability of existing algorithms to compress. We use a static dictionary to convert the English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to the original text. Thus we achieve some compression at the preprocessing stage. We have a series of transforms which improve the performance. The Star transform requires a static dictionary of a certain size. To avoid the considerable complexity of conversion, we employ a ternary tree data structure that efficiently converts the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract the useful context information in linear time from the BWT transformed text. The auxiliary arrays obtained from the BWT inverse transform bring logarithmic search time. Meanwhile, approximate pattern matching can be performed based on the results of exact pattern matching to extract possible candidates, and a fast verifying algorithm can then be applied to those candidates, which may be just small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT compressed text. A typical compression system based on BWT has Move-to-Front and Huffman coding stages after the transformation. We propose a novel approach to replace the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to the Move-to-Front makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) Modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratio and/or time complexity, LZW is the first one studied for compressed pattern matching because of its simplicity and efficiency. Modifications to the LZW algorithm provide the extra advantages of fast random access and partial decoding that are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that the text search can be performed at the expected level of granularity. For example, the user can choose to retrieve a single line, a paragraph, or a file that contains the keywords. More importantly, we show that parallel encoding and decoding algorithms are trivial with the modified LZW. Both encoding and decoding can be performed with multiple processors easily, and the encoding and decoding processes are independent with respect to the number of processors.
- Date Issued
- 2005
- Identifier
- CFE0000438, ucf:46396
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000438
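A compact way to see compressed-domain searching in the BWT setting described above is the textbook FM-index backward search sketched below (count plus locate through the suffix array). It is a standard stand-in rather than the dissertation's auxiliary-array binary-search method or its k-mismatch extensions; the example text is arbitrary.

```python
def bwt_index(text):
    """Build a small BWT index: suffix array, C table, occurrence counts."""
    text += "\0"                                   # unique sentinel, smallest symbol
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)
    alphabet = sorted(set(bwt))
    C, total = {}, 0                               # C[c]: #chars strictly smaller than c
    for c in alphabet:
        C[c] = total
        total += bwt.count(c)
    occ = {c: [0] * (len(bwt) + 1) for c in alphabet}   # occ[c][i]: #c in bwt[:i]
    for i, ch in enumerate(bwt):
        for c in alphabet:
            occ[c][i + 1] = occ[c][i] + (ch == c)
    return sa, C, occ, len(bwt)

def backward_search(pattern, index):
    """Return the starting positions of the pattern, processing it right to left."""
    sa, C, occ, n = index
    lo, hi = 0, n                                  # current suffix-array interval
    for ch in reversed(pattern):
        if ch not in C:
            return []
        lo = C[ch] + occ[ch][lo]
        hi = C[ch] + occ[ch][hi]
        if lo >= hi:
            return []
    return sorted(sa[i] for i in range(lo, hi))

text = "compressed pattern matching in compressed text"
idx = bwt_index(text)
print(backward_search("compressed", idx))          # -> [0, 31]
print(backward_search("pattern", idx))             # -> [11]
```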
- Title
- TRANSFORM BASED AND SEARCH AWARE TEXT COMPRESSION SCHEMES AND COMPRESSED DOMAIN TEXT RETRIEVAL.
- Creator
-
Zhang, Nan, Mukherjee, Amar, University of Central Florida
- Abstract / Description
-
In recent times, we have witnessed an unprecedented growth of textual information via the Internet, digital libraries and archival text in many applications. While a good fraction of this information is of transient interest, useful information of archival value will continue to accumulate. We need ways to manage, organize and transport this data from one point to the other on data communications links with limited bandwidth. We must also have means to speedily find the information we need from this huge mass of data. Sometimes, a single site may also contain large collections of data such as a library database, thereby requiring an efficient search mechanism even to search within the local data. To facilitate information retrieval, an emerging ad hoc standard for uncompressed text is XML, which preprocesses the text by adding user-defined metadata such as DTDs or hyperlinks to enable searching with better efficiency and effectiveness. This increases the file size considerably, underscoring the importance of applying text compression. On account of efficiency (in terms of both space and time), there is a need to keep the data in compressed form for as long as possible. Text compression is concerned with techniques for representing digital text data in alternate representations that take less space. Not only does it help conserve the storage space for archival and online data, it also helps system performance by requiring fewer secondary storage (disk or CD-ROM) accesses and improves network transmission bandwidth utilization by reducing the transmission time. Unlike static images or video, there is no international standard for text compression, although compressed formats like .zip, .gz, and .Z files are increasingly being used. In general, data compression methods are classified as lossless or lossy. Lossless compression allows the original data to be recovered exactly. Although used primarily for text data, lossless compression algorithms are useful in special classes of images such as medical imaging, fingerprint data, astronomical images and databases containing mostly vital numerical data, tables and text information. Many lossy algorithms use lossless methods at the final stage of encoding, underscoring the importance of lossless methods for both lossy and lossless compression applications. In order to effectively utilize the full potential of compression techniques for future retrieval systems, we need efficient information retrieval in the compressed domain. This means that techniques must be developed to search the compressed text without decompression or with only partial decompression, independent of whether the search is done on the text or on some inversion table corresponding to a set of keywords for the text. In this dissertation, we make the following contributions: (1) Star family compression algorithms: We have proposed an approach to develop a reversible transformation that can be applied to a source text and that improves the ability of existing algorithms to compress. We use a static dictionary to convert the English words into predefined symbol sequences. These transformed sequences create additional context information that is superior to the original text. Thus we achieve some compression at the preprocessing stage. We have a series of transforms which improve the performance. The Star transform requires a static dictionary of a certain size. To avoid the considerable complexity of conversion, we employ a ternary tree data structure that efficiently converts the words in the text to the words in the star dictionary in linear time. (2) Exact and approximate pattern matching in Burrows-Wheeler transformed (BWT) files: We propose a method to extract the useful context information in linear time from the BWT transformed text. The auxiliary arrays obtained from the BWT inverse transform bring logarithmic search time. Meanwhile, approximate pattern matching can be performed based on the results of exact pattern matching to extract possible candidates, and a fast verifying algorithm can then be applied to those candidates, which may be just small parts of the original text. We present algorithms for both k-mismatch and k-approximate pattern matching in BWT compressed text. A typical compression system based on BWT has Move-to-Front and Huffman coding stages after the transformation. We propose a novel approach to replace the Move-to-Front stage in order to extend compressed-domain search capability all the way to the entropy coding stage. A modification to the Move-to-Front makes it possible to randomly access any part of the compressed text without referring to the part before the access point. (3) Modified LZW algorithm that allows random access and partial decoding for compressed text retrieval: Although many compression algorithms provide good compression ratio and/or time complexity, LZW is the first one studied for compressed pattern matching because of its simplicity and efficiency. Modifications to the LZW algorithm provide the extra advantages of fast random access and partial decoding that are especially useful for text retrieval systems. Based on this algorithm, we can provide a dynamic hierarchical semantic structure for the text, so that the text search can be performed at the expected level of granularity. For example, the user can choose to retrieve a single line, a paragraph, or a file that contains the keywords. More importantly, we show that parallel encoding and decoding algorithms are trivial with the modified LZW. Both encoding and decoding can be performed with multiple processors easily, and the encoding and decoding processes are independent with respect to the number of processors.
- Date Issued
- 2005
- Identifier
- CFE0000488, ucf:46358
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000488
- Title
- EFFICIENT ALGORITHMS FOR CORRELATION PATTERN RECOGNITION.
- Creator
-
Ragothaman, Pradeep, Mikhael, Wasfy, University of Central Florida
- Abstract / Description
-
The mathematical operation of correlation is a very simple concept, yet it has a very rich history of application in a variety of engineering fields. It is essentially nothing but a technique to measure if, and to what degree, two signals match each other. Since this is a very basic and universal task in a wide variety of fields such as signal processing, communications, and computer vision, it has been an important tool. The field of pattern recognition often deals with the task of analyzing signals, or useful information from signals, and classifying them into classes. Very often, these classes are predetermined, and examples (templates) are available for comparison. This task naturally lends itself to the application of correlation as a tool to accomplish this goal. Thus the field of Correlation Pattern Recognition has developed over the past few decades as an important area of research. From the signal processing point of view, correlation is nothing but a filtering operation. Thus there has been a great deal of work in using concepts from filter theory to develop Correlation Filters for pattern recognition. While considerable work has been done to develop linear correlation filters over the years, especially in the field of Automatic Target Recognition, a lot of attention has recently been paid to the development of Quadratic Correlation Filters (QCFs). QCFs offer the advantages of linear filters while optimizing a bank of these simultaneously to offer much improved performance. This dissertation develops efficient QCFs that offer significant savings in storage requirements and computational complexity over existing designs. Firstly, an adaptive algorithm is presented that is able to modify the QCF coefficients as new data is observed. Secondly, a transform domain implementation of the QCF is presented that has the benefits of lower computational complexity and storage requirements while retaining excellent recognition accuracy. Finally, a two-dimensional QCF is presented that holds the potential to further save on storage and computations. The techniques are developed based on the recently proposed Rayleigh Quotient Quadratic Correlation Filter (RQQCF), and simulation results are provided on synthetic and real datasets.
- Date Issued
- 2007
- Identifier
- CFE0001974, ucf:47429
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001974
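A schematic of the quadratic-correlation-filter idea referenced in the abstract above: here the filter bank is taken as the dominant eigenvectors of the difference of class correlation matrices, and the quadratic output is a signed sum of squared linear-filter responses. This is only an illustrative formulation on synthetic chips, not the dissertation's RQQCF design or its adaptive, transform-domain, or two-dimensional variants.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 64, 500                                   # vectorised 8x8 image chips

# Synthetic "target" chips share a structured component; "clutter" is noise.
template = rng.standard_normal(d)
targets = template + 0.7 * rng.standard_normal((n, d))
clutter = 1.2 * rng.standard_normal((n, d))

Rt = targets.T @ targets / n                     # class correlation matrices
Rc = clutter.T @ clutter / n
evals, evecs = np.linalg.eigh(Rt - Rc)           # symmetric -> real eigensystem

k = 6                                            # small filter bank (arbitrary size)
pos = evecs[:, np.argsort(evals)[-k:]]           # directions favouring targets
neg = evecs[:, np.argsort(evals)[:k]]            # directions favouring clutter

def qcf_score(x):
    """Quadratic output: reward target-like structure, penalise clutter-like."""
    return np.sum((x @ pos) ** 2, axis=-1) - np.sum((x @ neg) ** 2, axis=-1)

test_t = template + 0.7 * rng.standard_normal((200, d))
test_c = 1.2 * rng.standard_normal((200, d))
thresh = np.median(np.concatenate([qcf_score(test_t), qcf_score(test_c)]))
print("target detection rate:", float((qcf_score(test_t) > thresh).mean()))
print("clutter false-alarm rate:", float((qcf_score(test_c) > thresh).mean()))
```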
- Title
- SPEAKER IDENTIFICATION BASED ON DISCRIMINATIVE VECTOR QUANTIZATION AND DATA FUSION.
- Creator
-
Zhou, Guangyu, Mikhael, Wasfy, University of Central Florida
- Abstract / Description
-
Speaker Identification (SI) approaches based on discriminative Vector Quantization (VQ) and data fusion techniques are presented in this dissertation. The SI approaches based on Discriminative VQ (DVQ) proposed in this dissertation are the DVQ for SI (DVQSI), the DVQSI with Unique speech feature vector space segmentation for each speaker pair (DVQSI-U), and the Adaptive DVQSI (ADVQSI) methods. The difference of the probability distributions of the speech feature vector sets from various speakers (or speaker groups) is called the interspeaker variation between speakers (or speaker groups). The interspeaker variation is the measure of template differences between speakers (or speaker groups). All DVQ based techniques presented in this contribution take advantage of the interspeaker variation, which is not exploited in previously proposed techniques that employ traditional VQ for SI (VQSI). All DVQ based techniques have two modes, the training mode and the testing mode. In the training mode, the speech feature vector space is first divided into a number of subspaces based on the interspeaker variations. Then, a discriminative weight is calculated for each subspace of each speaker or speaker pair in the SI group based on the interspeaker variation. The subspaces with higher interspeaker variations are assigned larger discriminative weights and thus play more important roles in SI than the ones with lower interspeaker variations. In the testing mode, discriminatively weighted average VQ distortions instead of equally weighted average VQ distortions are used to make the SI decision. The DVQ based techniques lead to higher SI accuracies than VQSI. The DVQSI and DVQSI-U techniques consider the interspeaker variation for each speaker pair in the SI group. In DVQSI, the speech feature vector space segmentations for all the speaker pairs are exactly the same. However, each speaker pair of DVQSI-U is treated individually in the speech feature vector space segmentation. In both DVQSI and DVQSI-U, the discriminative weights for each speaker pair are calculated by trial and error. The SI accuracies of DVQSI-U are higher than those of DVQSI at the price of a much higher computational burden. ADVQSI explores the interspeaker variation between each speaker and all speakers in the SI group. In contrast with DVQSI and DVQSI-U, in ADVQSI the feature vector space segmentation is for each speaker instead of each speaker pair, based on the interspeaker variation between each speaker and all the speakers in the SI group. Also, adaptive techniques are used in the discriminative weights computation for each speaker in ADVQSI. The SI accuracies employing ADVQSI and DVQSI-U are comparable. However, the computational complexity of ADVQSI is much less than that of DVQSI-U. Also, a novel algorithm to convert the raw distortion outputs of template-based SI classifiers into compatible probability measures is proposed in this dissertation. After this conversion, data fusion techniques at the measurement level can be applied to SI. In the proposed technique, stochastic models of the distortion outputs are estimated. Then, the a posteriori probabilities of the unknown utterance belonging to each speaker are calculated. Compatible probability measures are assigned based on the a posteriori probabilities. The proposed technique leads to better SI performance at the measurement level than existing approaches.
- Date Issued
- 2005
- Identifier
- CFE0000720, ucf:46621
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000720
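The discriminative-VQ decision rule outlined in the abstract above is sketched below for two synthetic "speakers": per-speaker codebooks, a partition of the pooled feature space into subspaces, subspace weights driven by interspeaker variation, and a weighted-distortion decision. The Gaussian stand-in features, the k-means partition, and the simple weighting rule are assumptions; the dissertation tunes its weights by search or adaptively.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
def speaker_features(shift, n=600, d=12):
    """Crude stand-in for MFCC frames: two Gaussian modes per speaker."""
    a = rng.standard_normal((n // 2, d)) + shift
    b = rng.standard_normal((n // 2, d)) - shift
    return np.vstack([a, b])

train = {"spk_A": speaker_features(0.8), "spk_B": speaker_features(0.3)}

# 1) per-speaker VQ codebooks
codebooks = {s: KMeans(n_clusters=16, n_init=10, random_state=0).fit(x).cluster_centers_
             for s, x in train.items()}

# 2) partition of the pooled feature space into subspaces (regions)
pooled = np.vstack(list(train.values()))
regions = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pooled)

def distortion(frames, codebook):
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1)                          # nearest-codeword distance per frame

# 3) region weights from interspeaker variation (difference of codebook fits)
weights = np.zeros(regions.n_clusters)
labels = regions.predict(pooled)
for r in range(regions.n_clusters):
    frames = pooled[labels == r]
    fits = [distortion(frames, cb).mean() for cb in codebooks.values()]
    weights[r] = abs(fits[0] - fits[1])
weights /= weights.sum()

def identify(frames):
    """Testing mode: discriminatively weighted average distortion, lowest wins."""
    lab = regions.predict(frames)
    scores = {}
    for s, cb in codebooks.items():
        per_frame = distortion(frames, cb)
        scores[s] = sum(weights[r] * per_frame[lab == r].mean()
                        for r in range(regions.n_clusters) if np.any(lab == r))
    return min(scores, key=scores.get)

test = speaker_features(0.8, n=200)               # drawn like speaker A
print("identified as:", identify(test))
```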
- Title
- LEARNING SEMANTIC FEATURES FOR VISUAL RECOGNITION.
- Creator
-
Liu, Jingen, Shah, Mubarak, University of Central Florida
- Abstract / Description
-
Visual recognition (e.g., object, scene and action recognition) is an active area of research in computer vision due to its increasing number of real-world applications such as video (image) indexing and search, intelligent surveillance, human-machine interaction, robot navigation, etc. Effective modeling of the objects, scenes and actions is critical for visual recognition. Recently, the bag of visual words (BoVW) representation, in which the image patches or video cuboids are quantized into visual words (i.e., mid-level features) based on their appearance similarity using clustering, has been widely and successfully explored. The advantages of this representation are: no explicit detection of objects or object parts and their tracking are required; the representation is somewhat tolerant to within-class deformations; and it is efficient for matching. However, the performance of the BoVW is sensitive to the size of the visual vocabulary. Therefore, computationally expensive cross-validation is needed to find the appropriate quantization granularity. This limitation is partially due to the fact that the visual words are not semantically meaningful. This limits the effectiveness and compactness of the representation. To overcome these shortcomings, in this thesis we present a principled approach to learn a semantic vocabulary (i.e., high-level features) from a large amount of visual words (mid-level features). In this context, the thesis makes two major contributions. First, we have developed an algorithm to discover a compact yet discriminative semantic vocabulary. This vocabulary is obtained by grouping the visual words, based on their distribution in videos (images), into visual-word clusters. The mutual information (MI) between the clusters and the videos (images) depicts the discriminative power of the semantic vocabulary, while the MI between visual words and visual-word clusters measures the compactness of the vocabulary. We apply the information bottleneck (IB) algorithm to find the optimal number of visual-word clusters by finding a good tradeoff between compactness and discriminative power. We tested our proposed approach on the state-of-the-art KTH dataset, and obtained an average accuracy of 94.2%. However, this approach performs one-sided clustering, because only visual words are clustered regardless of which video they appear in. In order to leverage the co-occurrence of visual words and images, we have developed a co-clustering algorithm to simultaneously group the visual words and images. We tested our approach on the publicly available fifteen scene dataset and obtained about a 4% increase in the average accuracy compared to the one-sided clustering approaches. Second, instead of grouping the mid-level features, we first embed the features into a low-dimensional semantic space by manifold learning, and then perform the clustering. We apply Diffusion Maps (DM) to capture the local geometric structure of the mid-level feature space. The DM embedding is able to preserve the explicitly defined diffusion distance, which reflects the semantic similarity between any two features. Furthermore, the DM provides multi-scale analysis capability by adjusting the time steps in the Markov transition matrix. The experiments on the KTH dataset show that DM can perform much better (about 3% to 6% improvement in average accuracy) than other manifold learning approaches and the IB method. The above methods use only a single type of feature. In order to combine multiple heterogeneous features for visual recognition, we further propose the Fiedler Embedding to capture the complicated semantic relationships between all entities (i.e., videos, images, heterogeneous features). The discovered relationships are then employed to further increase the recognition rate. We tested our approach on the Weizmann dataset, and achieved about 17% to 21% improvements in the average accuracy.
- Date Issued
- 2009
- Identifier
- CFE0002936, ucf:47961
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002936
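The bag-of-visual-words pipeline and the word-grouping idea described above are sketched below on synthetic patch descriptors. Plain k-means on word/class co-occurrence stands in for the information-bottleneck and co-clustering methods of the thesis; the vocabulary sizes and data are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_classes, imgs_per_class, patches_per_img, d = 3, 40, 60, 16

# Synthetic local descriptors: each class favours a different set of prototypes.
prototypes = rng.standard_normal((n_classes, 5, d))
images, labels = [], []
for c in range(n_classes):
    for _ in range(imgs_per_class):
        idx = rng.integers(0, 5, patches_per_img)
        images.append(prototypes[c, idx] + 0.4 * rng.standard_normal((patches_per_img, d)))
        labels.append(c)
labels = np.array(labels)

# 1) mid-level features: quantise all patches into a large visual vocabulary
vocab_size = 120
vocab = KMeans(n_clusters=vocab_size, n_init=10, random_state=0).fit(np.vstack(images))
hists = np.array([np.bincount(vocab.predict(im), minlength=vocab_size) for im in images])

# 2) group visual words by their class-usage profile into a compact vocabulary
word_class = np.zeros((vocab_size, n_classes))
for h, c in zip(hists, labels):
    word_class[:, c] += h
word_profiles = word_class / np.maximum(word_class.sum(axis=1, keepdims=True), 1)
groups = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(word_profiles)
compact = np.array([np.bincount(groups, weights=h, minlength=20) for h in hists])

print("histogram size:", hists.shape[1], "->", compact.shape[1])
```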
- Title
- A NEW APPROACH TO IDENTIFY THE EXPECTED CRASH PATTERNS BASED ON SIGNALIZED INTERSECTION SIZE AND ANALYSIS OF VEHICLE MOVEMENTS.
- Creator
-
Salkapuram, Hari, Abdel-Aty, Mohamed, University of Central Florida
- Abstract / Description
-
Analysis of intersection crashes is a significant area in traffic safety research. This study contributes to the area by identifying traffic-geometric characteristics and driver demographics that affect different types of crashes at signalized intersections. A simple methodology to estimate crash frequency at intersections based on the size of the intersection is also developed herein. The first phase of this thesis used crash frequency data from 1,335 signalized intersections obtained from six jurisdictions in Florida, namely, Brevard, Seminole, Dade, Orange, and Hillsborough Counties and the City of Orlando. Using these data, a simple methodology has been developed to identify the expected number of crashes by type and severity at signalized intersections. Intersection size, based on the total number of lanes, was used as a factor that is simple to identify and representative of many geometric and traffic characteristics of an intersection. The results from the analysis showed that crash frequency generally increased with the increased size of intersections, but the rates of increase differed for different intersection types (i.e., four-legged intersections with both streets two-way, four-legged intersections with at least one street one-way, and T-intersections). The results also showed that the dominant type of crashes differed at these intersection types and that the severity of crashes was higher at intersections with more conflict points and a larger differential in speed limits between major and minor roads. The analysis may potentially be useful for traffic engineers for evaluating safety at signalized intersections in a simple and efficient manner. The findings in this analysis provide strong evidence that the patterns of crashes by type and severity vary with the size and type of intersections. Thus, in future analysis of crashes at intersections, the size and type of intersections should be considered to account for the effects of intersection characteristics on crash frequency. In the second phase, data (crash and intersection characteristics) obtained from the individual jurisdictions are linked to the Department of Highway Safety and Motor Vehicles (DHSMV) database to include characteristics of the at-fault drivers involved in crashes. These crashes are analyzed using contingency tables and binary logistic regression models. This study categorizes crashes into three major types based on the relative initial movement direction of the involved vehicles. These crash types are: 1) Initial movement in same direction (IMSD) crashes. This crash type includes rear-end and sideswipe crashes because the involved vehicles would be traveling in the same direction prior to the crash. 2) Initial movement in opposite direction (IMOD) crashes, comprising left-turn and head-on crashes. 3) Initial movement in perpendicular direction (IMPD) crashes, which include angle and right-turn crashes. Vehicles involved in these crashes would be traveling on the different roadways that constitute the intersection. Using the crash, intersection, and at-fault driver characteristics for all crashes as inputs, three logistic regression models are developed. In the logistic regression analyses, the total number of through lanes at an intersection is used as a surrogate measure for AADT per lane, and intersection type is introduced as a 'predictor' of crash type. The binary logistic regression analyses indicated, among other results, that at intersections with one-way roads, adverse weather conditions, older drivers, and/or female drivers increase the likelihood of being at fault in IMOD crashes. Similar factors associated with the other groups of crashes (i.e., IMSD and IMPD) are also identified. These findings from the study may be used to develop specialized training programs by zooming in on problematic intersections/maneuvers.
- Date Issued
- 2006
- Identifier
- CFE0001208, ucf:46954
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001208
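The second-phase analysis described above (binary logistic regression of crash type on intersection and at-fault-driver characteristics) can be sketched as follows. The data frame, variable names, and coefficients below are synthetic placeholders, not the Florida crash and DHSMV data used in the thesis; they only mirror the qualitative direction of the reported findings.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "through_lanes": rng.integers(2, 9, n),        # surrogate for AADT per lane
    "one_way_leg": rng.integers(0, 2, n),          # at least one one-way street
    "adverse_weather": rng.binomial(1, 0.2, n),
    "driver_age": rng.integers(16, 90, n),
    "female_driver": rng.integers(0, 2, n),
})
# Synthetic outcome: IMOD crashes more likely with one-way legs, bad weather,
# older and female at-fault drivers (mirroring the qualitative findings above).
lin = (-1.2 + 0.5 * df.one_way_leg + 0.4 * df.adverse_weather
       + 0.02 * (df.driver_age - 45) + 0.3 * df.female_driver)
df["imod"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit("imod ~ through_lanes + one_way_leg + adverse_weather"
                  " + driver_age + female_driver", data=df).fit(disp=False)
print(model.summary().tables[1])                   # coefficients and p-values
print("odds ratios:\n", np.exp(model.params))
```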
- Title
- ADAPTIVE INTELLIGENT USER INTERFACES WITH EMOTION RECOGNITION.
- Creator
-
NASOZ, FATMA, Lisetti, Christine L., University of Central Florida
- Abstract / Description
-
The focus of this dissertation is on creating Adaptive Intelligent User Interfaces to facilitate enhanced natural communication during Human-Computer Interaction by recognizing users' affective states (i.e., emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a Virtual Reality environment to collect physiological data signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected data signals and to find unique physiological patterns of emotions. An Emotion Elicitation with Movie Clips experiment was conducted to elicit Sadness, Anger, Surprise, Fear, Frustration, and Amusement from participants. Overall, the three algorithms KNN, DFA, and MBP could recognize emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. A Driving Simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness). The KNN, MBP and RBP algorithms were used to classify the physiological signals by corresponding emotions. Overall, KNN could classify these three emotions with 66.3%, MBP could classify them with 76.7% and RBP could classify them with 91.9% accuracy. Adaptation of the interface was designed to provide multi-modal feedback to the users about their current affective state and to respond to users' negative emotional states in order to decrease the possible negative impacts of those emotions. A Bayesian Belief Network formalization was employed to develop the User Model to enable the intelligent system to appropriately adapt to the current context and situation by considering user-dependent factors, such as personality traits and preferences.
- Date Issued
- 2004
- Identifier
- CFE0000126, ucf:46201
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000126
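The physiological-pattern classification step reported above can be illustrated with a minimal k-nearest-neighbor sketch. The six-dimensional feature vectors are synthetic placeholders for the measured signals, and k = 5 with 5-fold cross-validation is an arbitrary choice, not the study's protocol.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
emotions = ["sadness", "anger", "surprise", "fear", "frustration", "amusement"]
trials_per_emotion, d = 40, 6                  # e.g., summary stats of GSR, HR, temp

X, y = [], []
for emo in emotions:
    centre = 2 * rng.standard_normal(d)        # each emotion gets its own pattern
    X.append(centre + rng.standard_normal((trials_per_emotion, d)))
    y += [emo] * trials_per_emotion
X = np.vstack(X)

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print(f"cross-validated recognition accuracy: {scores.mean():.2f}")
```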
- Title
- A STUDY OF MILLENIAL STUDENTS AND THEIR REACTIVE BEHAVIOR PATTERNS IN THE ONLINE ENVIRONMENT.
- Creator
-
Yonekura, Francisca, Dziuban, Charles, University of Central Florida
- Abstract / Description
-
The goal of this study was to identify patterns or characteristics unique to online millennial students in higher education from two perspectives: the generational traits for an understanding of millennial students as a cohort, and the Long reactive behavior patterns and traits for an understanding of millennials as individuals. Based on the identified patterns and characteristics of these millennial students, the researcher highlighted instructional and curricular implications for online learning. A profile depicting online millennial students based on the demographic data and their overall satisfaction levels with online learning is provided. For a holistic understanding, the study included an inquiry into measures of independence between overall satisfaction with online learning, reactive behavior patterns and traits among participating millennials, and an account of what millennial students are saying about quality, preferences, and aversions in their online learning experience. Overall, the great majority, especially aggressive dependent and compulsive millennial students, were satisfied with their online learning experience. Also, more female millennial students were satisfied with their experience compared to male millennial students. The role of the instructor, course design, and learning matters were the themes most frequently mentioned by millennial students when asked about the quality of online learning. Overwhelmingly, convenience, time management, flexibility, and pace were the aspects these millennial students liked most about their online encounter. On the contrary, lack of interaction, instructor's role, course design, and technology matters were the most frequent themes regarding millennials' dislikes about their online learning experience. Finally, the study includes recommendations for future research.
- Date Issued
- 2006
- Identifier
- CFE0000968, ucf:46710
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000968
- Title
- UNDERSTANDING NEWS MEDIA VIEWING AND SELECTION PATTERNS: FOMO AND USER CONSUMPTION OF NEWS CONTENT ON SOCIAL MEDIA INTERFACES.
- Creator
-
Christopher, Nicolette D, Bagley, George, Armato, Michael, University of Central Florida
- Abstract / Description
-
The current study employs a regional sample in order to investigate the phenomenon of fear-of-missing-out (FoMO), the awareness associated with the fear that other individuals are having a more pleasurable experience that one is not a part of. The current study uniquely examines the role that FoMO plays in viewing patterns associated with news content on social media interfaces. The 10-item scale created by Przybylski, Myrayama, DeHaan, and Gladwell in 2013 was used as a basis to discover the...
Show moreThe current study employs a regional sample in order to investigate the phenomenon of fear-of-missing-out (FoMO), the awareness associated with the fear that other individuals are having a more pleasurable experience that one is not a part of. The current study uniquely examines the role that FoMO plays in viewing patterns associated with news content on social media interfaces. The 10-item scale created by Przybylski, Myrayama, DeHaan, and Gladwell in 2013 was used as a basis to discover the degree of FoMO participants experience while online, while other questions of the survey serve to collect data about participants sociodemographic's, engagement with soft and hard news content, and overall social media usage. (Przybylski, Myrayama, DeHaan, Gladwell 2013). The objective is to demonstrate the influential effects that FoMO poses on media consumer viewing patterns and behaviors.
- Date Issued
- 2018
- Identifier
- CFH2000413, ucf:45763
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000413
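Ten-item FoMO scales of the kind referenced above are typically scored by averaging Likert-type item responses. The snippet below sketches that scoring step under assumed conventions (a 1-5 response range and a simple mean); the example responses are hypothetical and nothing here is taken from the study's instrument or data.

```python
# Hedged sketch: scoring a 10-item FoMO scale by averaging Likert responses.
# The responses below are hypothetical; a 1-5 response range is assumed.

def fomo_score(responses):
    """Return the mean of ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("Expected exactly 10 item responses")
    if any(r < 1 or r > 5 for r in responses):
        raise ValueError("Responses assumed to lie on a 1-5 scale")
    return sum(responses) / len(responses)

example_participant = [3, 4, 2, 5, 3, 3, 4, 2, 3, 4]  # made-up answers
print(f"FoMO score: {fomo_score(example_participant):.2f}")
```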
- Title
- STABLE ISOTOPE EVIDENCE FOR THE GEOGRAPHIC ORIGINS AND MILITARY MOVEMENT OF NAPOLEONIC SOLDIERS DURING THE MARCH FROM MOSCOW IN 1812.
- Creator
-
Pelier, Serenela, Dupras, Tosha, University of Central Florida
- Abstract / Description
-
In 2001, 3,269 unidentified individuals were found in a mass grave in the northern part of Vilnius, Lithuania. Artifactual context indicates that these individuals were likely soldiers of Napoleon's Grand Army. Stable oxygen isotope analysis was performed on bone apatite from nine femoral bone samples to determine whether these individuals were Lithuanian locals and to test ratio variation. For individuals identified as foreigners, geographical origins were approximated using percentages of C4 plants from Holder (2013) and δ18O values extracted from bone apatite. The carbonate oxygen isotope compositions (δ18Ocarbonate) of bone apatite from the femoral samples (-4.4‰ to -6.2‰) indicate that these individuals were from central and western Europe (-4.0‰ to -6.9‰). It is significant that none of the individuals have values consistent with the area around Lithuania (-10.0‰ to -11.9‰), which means they were all non-local. This also indicates that the Lithuanians were not burying their own citizens in the grave, and it therefore strongly supports the conclusion that these individuals were Napoleonic soldiers. Additionally, although C4 percentages in the diet ranged from 17.8% to 31.7%, which overlaps with eastern European consumption patterns (approximately 15% to 25% C4 plants) (Reitsema et al., 2010), the slight shift towards a higher C4 percentage is more representative of a central and western European diet. These results are significant because they provide stable isotopic evidence that these individuals were Napoleon's soldiers who participated in the Russian campaign of 1812. [A brief illustrative sketch of the range comparison follows this record.]
- Date Issued
- 2015
- Identifier
- CFH0004822, ucf:45454
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004822
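The locality assessment described above amounts to comparing each sample's δ18O value against published regional ranges. The sketch below illustrates that comparison; the range endpoints are taken from the abstract, but the individual sample values and the helper function are hypothetical.

```python
# Hedged sketch: assigning samples to broad regions by comparing bone-apatite
# delta-18-O (carbonate) values against the ranges quoted in the abstract.
# The sample values below are hypothetical, not the thesis data.

REGION_RANGES = {
    # region: (low, high) in per mil, from the abstract above
    "central/western Europe": (-6.9, -4.0),
    "area around Lithuania": (-11.9, -10.0),
}

def assign_region(d18o):
    """Return the first region whose range contains the value, else 'unassigned'."""
    for region, (low, high) in REGION_RANGES.items():
        if low <= d18o <= high:
            return region
    return "unassigned"

hypothetical_samples = [-4.4, -5.1, -6.2]
for value in hypothetical_samples:
    print(f"{value:+.1f} per mil -> {assign_region(value)}")
```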
- Title
- USING GIS TO DETERMINE THE INFLUENCE OF WETLANDS ON CAYUGA IROQUOIS SETTLEMENT LOCATION STRATEGIES.
- Creator
-
Birnbaum, David, Walker, John, University of Central Florida
- Abstract / Description
-
The archaeological record of the Iroquois indicates that settlements were regularly relocated during the protohistoric period (1500-1650 A.D.). With Geographic Information Systems (GIS) software, archaeologists can analyze variables that potentially caused or influenced the movement of settlements. Through spatial analysis, I argue that Cayuga Iroquois settlement locations were influenced by the environmental characteristics of the surrounding landscape. Specifically, wetlands are believed to have influenced settlement location choices in central New York State. This study examines the spatial relationships between wetland habitats and protohistoric-period Cayuga Iroquois settlements, where swidden maize agriculture comprised most of the diet. Considering previous research linking the movement of settlements to Iroquois agricultural practices, I hypothesize that wetlands played a significant role in the Iroquois subsistence system by providing supplementary plant and animal resources to a diet primarily characterized by maize consumption, and thereby influenced the strategy behind settlement relocation. Nine Cayuga Iroquois settlements dating to the protohistoric period were selected for analysis using GIS. Two control groups, each consisting of nine random points, were generated for comparison. Distance buffers show the wetlands situated within 1, 2.5, and 5 kilometers of the Cayuga settlements and random points. The total number of wetlands within each of these distances of the settlements and random points was recorded and analyzed. The results indicate a statistically significant relationship between the prominence of wetlands in the landscape and the Cayuga Iroquois settlement strategy. [A brief illustrative sketch of the buffer analysis follows this record.]
- Date Issued
- 2011
- Identifier
- CFH0004118, ucf:44873
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004118
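The buffer analysis described above reduces to counting wetland features within fixed radii of each settlement. The snippet below is a minimal sketch of that step using shapely; the coordinates are hypothetical placeholders, and a real analysis would work in a projected coordinate system so that distances are in meters.

```python
# Hedged sketch: counting wetland points within 1, 2.5, and 5 km buffers
# around settlement locations. Coordinates are hypothetical and assumed to be
# in a projected CRS with units of meters.
from shapely.geometry import Point

settlements = [Point(0, 0), Point(4000, 2500)]          # made-up settlement locations
wetlands = [Point(800, 600), Point(3000, 3000),
            Point(9000, 1000), Point(4500, 2100)]       # made-up wetland centroids

for radius_km in (1.0, 2.5, 5.0):
    for i, site in enumerate(settlements):
        buffer_zone = site.buffer(radius_km * 1000)     # buffer radius in meters
        count = sum(buffer_zone.contains(w) for w in wetlands)
        print(f"settlement {i}: {count} wetlands within {radius_km} km")
```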
- Title
- INVESTIGATION OF DAMAGE DETECTION METHODOLOGIES FOR STRUCTURAL HEALTH MONITORING.
- Creator
-
Gul, Mustafa, Catbas, F. Necati, University of Central Florida
- Abstract / Description
-
Structural Health Monitoring (SHM) is employed to track and evaluate damage and deterioration during regular operation as well as after extreme events for aerospace, mechanical, and civil structures. A complete SHM system incorporates performance metrics, sensing, signal processing, data analysis, transmission, and management for decision-making purposes. Damage detection in the context of SHM can succeed by employing a collection of robust and practical methodologies that identify, locate, and quantify damage or, in general terms, changes in observable behavior. In this study, different damage detection methods are investigated for global condition assessment of structures. First, different parametric and non-parametric approaches are revisited and further improved for damage detection using vibration data. Modal flexibility, modal curvature, and un-scaled flexibility based on the dynamic properties obtained using the Complex Mode Indicator Function (CMIF) are used as parametric damage features. Second, statistical pattern recognition approaches using time series modeling in conjunction with outlier detection are investigated as a non-parametric damage detection technique. Third, a novel methodology using ARX models (Auto-Regressive models with eXogenous input) is proposed for damage identification. With this methodology, it is shown that damage can be detected, located, and quantified without the need for external loading information. Next, laboratory studies are conducted on different test structures with a number of damage scenarios to evaluate the techniques in a comparative fashion. Finally, application of the methodologies to real-life data is presented, along with the capabilities and limitations of each approach in light of the analysis results from the laboratory and real-life data. [A brief illustrative sketch of the time series outlier approach follows this record.]
- Date Issued
- 2009
- Identifier
- CFE0002830, ucf:48069
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002830
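The statistical pattern recognition approach above combines time series modeling with outlier detection. The sketch below fits a simple autoregressive (AR) model by least squares and flags samples with unusually large one-step prediction residuals; it is an illustrative stand-in rather than the dissertation's ARX formulation, and the signal, model order, and threshold are assumptions.

```python
# Hedged sketch: AR-model residuals as a simple damage-sensitive feature.
# Fit an AR(p) model to a baseline vibration signal, then flag samples whose
# one-step prediction error exceeds a threshold. Signal, order, and threshold
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(0.2 * np.arange(500)) + 0.05 * rng.standard_normal(500)

p = 4  # assumed AR model order
# Build the regression matrix of lagged values and solve least squares.
X = np.column_stack([signal[p - k - 1:-k - 1] for k in range(p)])
y = signal[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - X @ coeffs
threshold = 3 * residuals.std()          # simple 3-sigma outlier rule
outliers = np.where(np.abs(residuals) > threshold)[0] + p
print(f"{len(outliers)} samples exceed the 3-sigma residual threshold")
```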
- Title
- APPLICATION OF POLYELECTROLYTE MULTILAYERS FOR PHOTOLITHOGRAPHIC PATTERNING OF DIVERSE MAMMALIAN CELL TYPES IN SERUM FREE MEDIUM.
- Creator
-
Dhir, Vipra, Cho, Hyoung Jin, University of Central Florida
- Abstract / Description
-
Integration of living cells with novel microdevices requires the development of innovative technologies for manipulating cells. Chemical surface patterning has proven to be an effective method for controlling the attachment and growth of diverse cell populations. Patterning polyelectrolyte multilayers through a combination of the layer-by-layer self-assembly technique and photolithography offers a simple, versatile, and silicon-compatible approach that overcomes the limitations of chemical surface patterning, such as short-term stability and low protein adsorption resistance. In this study, direct photolithographic patterning of PAA/PAAm and PAA/PAH polyelectrolyte multilayers was developed to pattern mammalian neuronal, skeletal, and cardiac muscle cells. For all studied cell types, PAA/PAAm multilayers behaved as a negative surface, completely preventing cell attachment. In contrast, PAA/PAH multilayers showed cell-selective behavior, promoting the attachment and growth of neuronal cells (embryonic rat hippocampal and NG108-15 cells) to a greater extent while providing little attachment for neonatal rat cardiac and skeletal muscle cells (C2C12 cell line). PAA/PAAm multilayer cellular patterns also showed remarkable protein adsorption resistance. Protein adsorption protocols commonly used for surface treatment in cell culture did not compromise the cell-attachment-inhibiting property of the PAA/PAAm multilayer patterns. Combining polyelectrolyte multilayer patterns with different adsorbed proteins could expand the applicability of this technology to cell types that require specific proteins, either on the surface or in the medium, for attachment or differentiation and that could not be patterned using traditional methods.
- Date Issued
- 2008
- Identifier
- CFE0002357, ucf:47783
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002357
- Title
- Business in the Estuary, Party in the Sea: Migration Patterns of Striped Mullet (Mugil cephalus) Within the Indian River Lagoon Complex.
- Creator
-
Myers, Olivia, Cook, Geoffrey, Mansfield, Kate, Reyier, Eric, University of Central Florida
- Abstract / Description
-
Commercial and recreational environmental enterprises in the Indian River Lagoon (IRL), Florida supply nearly 10,000 jobs and produce $1.6 billion a year in revenue. These waters contain iconic species of sportfish, including red drum, snook, and sea trout, as well as their lower-trophic-level prey such as snapper and mullet. Striped mullet (Mugil cephalus) are both commercially valuable and an indicator species for overall ecosystem health. From September to December, mullet in the IRL undergo an annual migration from their inshore foraging habitats to oceanic spawning sites; however, their actual migratory pathways remain unknown. To address this knowledge gap, I used passive acoustic telemetry to assess the migration patterns of M. cephalus within the IRL complex, focusing on movement pathways from inshore aggregation sites to oceanic inlets to spawn. Coupling environmental metrics with movement data, I evaluated catalysts for migration as well as travel routes through the estuary. Network analyses identified potential conservation areas of interest and sites needing management intervention. Impoundments around the Merritt Island National Wildlife Refuge appear to serve as an important refuge area for striped mullet, while the Banana and Indian Rivers act as corridors during their inshore migratory movements. Depth, temperature, dissolved oxygen, pH, barometric pressure, and photoperiod were the environmental metrics that best predicted the number of detections and residency time in two case studies of striped mullet activity. An emphasis on spatial fisheries management, along with vigilant environmental monitoring, will help safeguard the status of this species, to the benefit of both natural and human systems in the Indian River Lagoon. The knowledge generated by this project may also provide a framework for sustainably managing other migratory baitfish. [A brief illustrative sketch of the network analysis follows this record.]
- Date Issued
- 2019
- Identifier
- CFE0007895, ucf:52768
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007895
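Passive acoustic telemetry data are often summarized as a movement network, with receiver stations as nodes and detected transitions between stations as weighted edges; centrality measures then highlight candidate corridor or refuge sites. The sketch below shows that general workflow with networkx; the station names and transition counts are hypothetical, not data from this study.

```python
# Hedged sketch: building a movement network from acoustic-telemetry transitions
# and ranking stations by betweenness centrality. Station names and edge weights
# are hypothetical placeholders.
import networkx as nx

G = nx.DiGraph()
# (from_station, to_station, number_of_tagged_fish_transitions) -- made up
transitions = [
    ("Impoundment_A", "Banana_River", 14),
    ("Banana_River", "Indian_River", 9),
    ("Indian_River", "Sebastian_Inlet", 11),
    ("Impoundment_A", "Indian_River", 4),
]
for src, dst, count in transitions:
    G.add_edge(src, dst, weight=count)  # weights stored for reference only

# Unweighted betweenness for simplicity; high values suggest corridor stations.
centrality = nx.betweenness_centrality(G)
for station, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{station}: betweenness = {score:.2f}")
```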
- Title
- First Principles Studies of Pattern Formations and Reactions on Catalyst Surfaces.
- Creator
-
Le, Duy, Rahman, Talat, Roldan Cuenya, Beatriz, Schelling, Patrick, Sohn, Yongho, University of Central Florida
- Abstract / Description
-
This dissertation undertakes theoretical research into the adsorption, pattern formation, and reactions of atoms, molecules, and layered materials on catalyst surfaces. These investigations are carried out through first-principles calculations of electronic and geometric structures using density functional theory (DFT) for predictions and simulations at the atomic scale. The results should be useful for further study of the catalytic activities of materials and for engineering functional nanostructures. The first part of the dissertation focuses on systematic first-principles simulations of the energetic pathways of CO oxidation on the Cu2O(100) surface. These simulations show that CO oxidizes spontaneously on the O-terminated Cu2O(100) surface by consuming surface oxygen atoms. The resulting O vacancy on Cu2O(100) is subsequently healed by dissociative adsorption of atmospheric O2 molecules. The second part discusses the pattern formation of hydrogen on two and three layers of Co film grown on the Cu(111) surface. It is found that increasing the H2 pressure changes the hydrogen structure from 2H-(2 x 2) to H-p(1 x 1) through an intermediate 6H-(3 x 3) structure. The third part compares different ways of introducing van der Waals (vdW) interactions into DFT simulations of the adsorption and pattern formation of various molecules on certain substrates. Examinations of the physisorption of five nucleobases on graphene and of n-alkanes on Pt(111) demonstrate the importance of taking vdW interactions into account, and of doing so in a way that is best suited to the particular system in question. More importantly, as the adsorption of 1,4-diaminobenzene molecules on Au(111) shows, inclusion of vdW interactions is crucial for accurate simulation of pattern formation. The final part carries out first-principles calculations of the geometric and electronic structure of the Moiré pattern of a single layer of molybdenum disulfide (MoS2) on Cu(111). The results reveal three possible stacking types. They also show the MoS2 layer to be chemisorbed, albeit weakly, and that, while the Cu surface atoms are vertically disordered, the layer itself is not strongly buckled. [A brief worked equation for the adsorption-energy calculations follows this record.]
- Date Issued
- 2012
- Identifier
- CFE0004224, ucf:48991
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004224
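Studies of adsorption and pattern formation such as those above typically report adsorption energies computed from DFT total energies of the combined and isolated systems. The expression below is the standard textbook definition rather than a formula quoted from the dissertation, so the exact sign convention and per-adsorbate normalization used there may differ.

```latex
% Standard DFT adsorption-energy definition (assumed convention; not quoted
% from the dissertation). With n adsorbates on the slab, a negative value
% indicates energetically favorable adsorption.
\begin{equation}
  E_{\mathrm{ads}} \;=\; \frac{1}{n}\left( E_{\mathrm{adsorbate+slab}}
    \;-\; E_{\mathrm{slab}} \;-\; n\,E_{\mathrm{adsorbate}} \right)
\end{equation}
```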