Current Search: processing
- Title
- Modeling and Spray Pyrolysis Processing of Mixed Metal Oxide Nano-Composite Gas Sensor Films.
- Creator
- Khatami, Seyed Mohammad Navid, Ilegbusi, Olusegun, Deng, Weiwei, Kassab, Alain, Coffey, Kevin, Divo, Eduardo, University of Central Florida
- Abstract / Description
- The role of sensor technology in the improvement and optimization of many industrial processes is evident. Sensor films, which are considered the core of chemical sensors, have the capability to detect the presence and concentration of a specific chemical substance. Such sensor films achieve selectivity by detecting the interaction of the specific chemical substance with the sensor material through selective binding, adsorption and permeation of the analyte. This research focuses on the development and verification of a comprehensive mathematical model of mixed metal oxide thin film growth using the spray pyrolysis technique (SPT). An experimental setup is used to synthesize mixed metal oxide films on a heated substrate. The films are analyzed using a variety of characterization tools, and the results are used to validate the mathematical model. There are three main stages to achieving this goal: 1) A Lagrangian-Eulerian method is applied to develop a CFD model of the atomization of a multi-component solution. The model predicts droplet characteristics in flight, such as the spatial distribution of droplet size and concentration. 2) Once the droplets reach the substrate, a mathematical model of multi-phase transport and chemical reaction phenomena in a single droplet is developed and used to predict the deposition of the thin film. The various stages of droplet morphology associated with surface energy and evaporation are predicted. 3) The processed films are characterized for morphology and chemical composition (SEM, XPS), and the data are used to validate the models as well as to investigate the influence of process parameters on the structural characteristics of mixed metal oxide films. The structural characteristics of nanostructured thin films comprising ZnO, SnO2, ZnO+In2O3 and SnO2+In2O3 composites are investigated. The model adequately predicts the size distribution and film thickness when the nanocrystals are well-structured at the controlled temperature and concentration.
- Date Issued
- 2014
- Identifier
- CFE0005817, ucf:50048
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005817
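A quick back-of-the-envelope companion to the droplet-in-flight stage described in the abstract above: under the classic d-squared evaporation law, a droplet's squared diameter shrinks linearly with time. Every constant below is an assumption chosen for illustration, not a value from the dissertation.

```python
# d-squared evaporation law: d(t)^2 = d0^2 - K*t (illustrative constants only)
d0 = 20e-6   # initial droplet diameter (m), assumed
K = 1.0e-9   # evaporation-rate constant (m^2/s), assumed
t = 0.2      # flight time to the heated substrate (s), assumed

d_squared = d0**2 - K * t
d = d_squared**0.5 if d_squared > 0 else 0.0   # droplet fully evaporates if <= 0
print(f"droplet diameter on arrival: {d * 1e6:.1f} um")   # ~14.1 um
```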
- Title
- Exploring Techniques for Providing Privacy in Location-Based Services Nearest Neighbor Query.
- Creator
- Asanya, John-Charles, Guha, Ratan, Turgut, Damla, Bassiouni, Mostafa, Zou, Changchun, Mohapatra, Ram, University of Central Florida
- Abstract / Description
- Increasing numbers of people are subscribing to location-based services, but as the popularity grows, so do the privacy concerns. A variety of research exists to address these privacy concerns, with each technique addressing a different model by which location-based services respond to subscribers. In this work, we present ideas to address privacy concerns for the two main models, namely the snapshot nearest neighbor query model and the continuous nearest neighbor query model. First, we address the snapshot nearest neighbor query model, where the location-based service's response represents a snapshot at a point in time. In this model, we introduce a novel idea based on the concept of an open set in a topological space, where points belong to a subset called the neighborhood of a point. We extend this concept to provide anonymity to real objects, where each object belongs to a disjoint neighborhood such that each neighborhood contains a single object. To help identify the objects, we implement a database which dynamically scales in direct proportion to the size of the neighborhood. To retrieve information secretly and allow the database to expose only requested information, private information retrieval protocols are executed twice on the data. Our study of the implementation shows that the concept of a single-object neighborhood is able to efficiently scale the database with the objects in the area. The size of the database grows with the size of the grid and the objects covered by the location-based services. Typically, creating neighborhoods, computing distances between objects in the area, and running private information retrieval protocols cause the CPU to respond slowly with this increase in database size. In order to handle a large number of objects, we explore the concepts of kernels and parallel computing on the GPU, and develop a GPU parallel implementation of the snapshot query. In our experiment, we exploit parameter tuning; the results show that with parameter tuning and the parallel computing power of the GPU, we are able to significantly reduce the response time as the number of objects increases. To determine the response time of an application without knowledge of the intricacies of GPU architecture, we extend our analysis to predict GPU execution time: we develop the run-time equation for an operation, extrapolate the run time for a problem set based on that equation, and then provide a model to predict GPU response time. As an alternative, the snapshot nearest neighbor query privacy problem can be addressed using secure hardware computing, which can eliminate the need to protect the rest of the sub-system and minimize resource usage and network transmission time. In this approach, a secure coprocessor is used to provide privacy. We process all information inside the coprocessor to deny adversaries access to any private information. To obfuscate the access pattern to external memory locations, we use oblivious random access memory methodology to access the server. Experimental evaluation shows that using a secure coprocessor reduces resource usage and query response time as the size of the coverage area and the number of objects increase. Second, we address privacy concerns in the continuous nearest neighbor query model, where location-based services automatically respond to a change in the object's location. In this model, we present solutions for two different types, known as moving query static object and moving query moving object.
For the solutions, we propose plane partitioning using a Voronoi diagram, and a continuous fractal space-filling curve using a Hilbert curve ordering, to create a continuous nearest neighbor relationship between the points of interest in a path. Specifically, the space-filling curve yields a multi-dimensional to 1-dimensional object mapping, where values are assigned to the objects based on proximity. To prevent subscribers from issuing a query each time there is a change in location, and to reduce the response time, we introduce the concept of transition and update time to indicate where and when the nearest neighbor changes. We also introduce a database that dynamically scales with the number of objects in a path to help obscure and relate objects. By executing the private information retrieval protocol twice on the data, the user secretly retrieves requested information from the database. The results of our experiment show that using plane partitioning and a fractal space-filling curve to create nearest neighbor relationships with transition time between objects reduces the total response time.
- Date Issued
- 2015
- Identifier
- CFE0005757, ucf:50098
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005757
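The Hilbert-curve ordering mentioned in the abstract above can be made concrete with a minimal sketch of the standard iterative 2-D-to-1-D mapping; the grid size and points of interest below are invented for illustration, and the dissertation's actual scheme may differ.

```python
def xy_to_hilbert(n, x, y):
    """Index of grid point (x, y) along a Hilbert curve filling an n-by-n
    grid (n a power of two); nearby indices imply nearby points."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                # rotate quadrant so the curve stays continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical points of interest on a 16x16 grid, ranked by curve position;
# sorting by the 1-D index approximates a proximity ordering in 2-D.
pois = [(3, 5), (10, 2), (7, 7), (0, 15)]
print(sorted(pois, key=lambda p: xy_to_hilbert(16, *p)))
```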
- Title
- Sampling and Subspace Methods for Learning Sparse Group Structures in Computer Vision.
- Creator
- Jaberi, Maryam, Foroosh, Hassan, Pensky, Marianna, Gong, Boqing, Qi, GuoJun, University of Central Florida
- Abstract / Description
- The unprecedented growth of data in volume and dimension has led to an increased number of computationally demanding, data-driven decision-making methods in many disciplines, such as computer vision, genomics, and finance. Research on big data aims to understand and describe trends in massive volumes of high-dimensional data. High volume and dimension are the determining factors in both the computational and time complexity of algorithms. The challenge grows when the data are formed of the union of group-structures of different dimensions embedded in a high-dimensional ambient space. To address the problem of high volume, we propose a sampling method referred to as the Sparse Withdrawal of Inliers in a First Trial (SWIFT), which determines the smallest sample size in one grab so that all group-structures are adequately represented and discovered with high probability. The key features of SWIFT are: (i) sparsity, which is independent of the population size; (ii) no prior knowledge of the distribution of the data or the number of underlying group-structures; and (iii) robustness in the presence of an overwhelming number of outliers. We report a comprehensive study of the proposed sampling method in terms of accuracy, functionality, and effectiveness in reducing the computational cost in various applications of computer vision. In the second part of this dissertation, we study dimensionality reduction for multi-structural data. We propose a probabilistic subspace clustering method that unifies soft- and hard-clustering in a single framework. This is achieved by introducing a delayed association of uncertain points to subspaces of lower dimensions based on a confidence measure. Delayed association yields higher accuracy in clustering subspaces that have ambiguities, i.e., due to intersections and high levels of outliers/noise, and hence leads to more accurate self-representation of the underlying subspaces. Altogether, this dissertation addresses the key theoretical and practical issues of size and dimension in big data analysis.
- Date Issued
- 2018
- Identifier
- CFE0007017, ucf:52039
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007017
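The abstract does not give SWIFT's actual derivation, but the flavor of a one-grab sample-size bound can be sketched with a standard union-bound argument (all symbols below are assumptions for illustration): if each of k group-structures holds at least a fraction epsilon of the points, then n draws miss a given structure with probability at most (1 - epsilon)^n, so requiring k(1 - epsilon)^n <= delta gives a sample size that is independent of the population size.

```python
import math

def one_grab_sample_size(epsilon, k, delta):
    """Draws needed so each of k group-structures (each at least a
    fraction epsilon of the data) is sampled at least once with
    probability >= 1 - delta. Independent of the population size."""
    return math.ceil(math.log(delta / k) / math.log(1.0 - epsilon))

# e.g., 5 structures, each at least 2% of the data, 99% confidence:
print(one_grab_sample_size(0.02, 5, 0.01))   # -> 308 draws
```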
- Title
- Metaphoric Competence as a Means to Meta-Cognitive Awareness in First-Year Composition.
- Creator
- Dadurka, David, Scott, John, Marinara, Martha, Wallace, David, University of Central Florida
- Abstract / Description
- A growing body of writing research suggests college students' and teachers' conceptualizations of writing play an important role in learning to write and making the transition from secondary to post-secondary academic composition. First-year college writers are not blank slates; rather, they bring many assumptions and beliefs about academic writing to the first-year writing classroom from exposure to a wide range of literate practices throughout their lives. Metaphor acts as a way for scholars to trace students' as well as their instructors' assumptions and beliefs about writing. In this study, I contend that metaphor is a pathway to meta-cognitive awareness, mindfulness, and reflection. This multi-method descriptive study applies metaphor analysis to a corpus of more than a dozen first-year composition students' end-of-semester writing portfolios; the study also employs an auto-ethnographic approach to examining this author's texts composed as a graduate student and novice teacher. In several cases, writing students in this study appeared to reconfigure their metaphors for writing and subsequently reconsider their assumptions about writing. My literature review and analysis suggest that metaphor remains an underutilized inventive and reflective strategy in composition pedagogy. Based on these results, I suggest that instructors consider how metaphoric competence might offer writers and writing instructors an alternate means for operationalizing key habits of mind such as meta-cognitive awareness, reflection, openness to learning, and creativity, as recommended in the Framework for Success in Post-Secondary Writing. Ultimately, I argue that writers and teachers might benefit from adopting a more flexible attitude towards metaphor. As a rhetorical trope, metaphors are contextual; thus, writers need to learn to mix, discard, create, and obscure metaphors as required by the situation.
- Date Issued
- 2012
- Identifier
- CFE0004303, ucf:49475
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004303
- Title
- UTILIZING EDGE IN IOT AND VIDEO STREAMING APPLICATIONS TO REDUCE BOTTLENECKS IN INTERNET TRAFFIC.
- Creator
- Akpinar, Kutalmis, Hua, Kien, Zou, Changchun, Turgut, Damla, Wang, Jun, University of Central Florida
- Abstract / Description
- There is a large surge of data over the Internet due to the increasing demand for multimedia content. According to a recent study, it is estimated that 80% of Internet traffic will be video by 2022. At the same time, IoT devices on the Internet will outnumber the human population two to one. While infrastructure standards for IoT are still nonexistent, enterprise solutions tend to encourage cloud-based approaches, causing an additional surge of data over the Internet. This study proposes solutions to bring video traffic and IoT computation back to the edges of the network, so that costly Internet infrastructure upgrades are not necessary. An efficient way to prevent the Internet surge for IoT is to push application-specific computation to the edge of the network, close to where the data is generated, so that large data can be eliminated before being delivered to the cloud. In this study, an event query language and processing environment is provided to process events from various devices. The query processing environment brings application developers, sensor infrastructure providers and end users together. It uses boolean events as the streaming and processing units, which addresses device heterogeneity and pushes the data-intense tasks to the edge of the network. The second focus of the study is Video-on-Demand (VoD) applications. A characteristic of VoD traffic is its high redundancy: due to the demand for popular content, the same video traffic flows through an Internet Service Provider's network as overlapping but separate streams. In previous studies on redundancy elimination, overlapping streams are merged into each other at the link level by receiving the packet only for the first stream and re-using it for the subsequent duplicated streams. In this study, we significantly improve these techniques by introducing a merger-aware routing method. Our final focus is increasing the utilization of Content Delivery Network (CDN) servers on the edge of the network to reduce long-distance traffic. The proposed system uses Software Defined Networks (SDN) to route adaptive video streaming clients to the best available CDN servers in terms of network availability. While performing the network assistance, the system does not reveal the video request information to the network provider, thus enabling privacy protection for encrypted streams. The request routing is performed at the segment level for adaptive streaming, which makes it possible to re-route the client to the best available CDN without an interruption if network conditions change during the stream.
- Date Issued
- 2019
- Identifier
- CFE0007882, ucf:52774
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007882
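As a purely hypothetical sketch of the boolean-event idea described above (the event names and API are invented, not the dissertation's query language): the query runs at the edge, so only matching composite events ever cross the network toward the cloud.

```python
# Sensors emit boolean events; the edge evaluates the query and forwards
# only the matches, eliminating bulk data before it reaches the cloud.
def edge_filter(stream, query):
    for event in stream:
        if query(event):
            yield event   # only composite matches travel upstream

# Hypothetical query: report only when smoke AND heat are both present.
fire = lambda e: e.get("smoke", False) and e.get("heat", False)
readings = [{"smoke": True, "heat": False},
            {"smoke": True, "heat": True},
            {"smoke": False, "heat": True}]
print(list(edge_filter(readings, fire)))   # -> [{'smoke': True, 'heat': True}]
```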
- Title
- Improvement of Data-Intensive Applications Running on Cloud Computing Clusters.
- Creator
- Ibrahim, Ibrahim, Bassiouni, Mostafa, Lin, Mingjie, Zhou, Qun, Ewetz, Rickard, Garibay, Ivan, University of Central Florida
- Abstract / Description
- MapReduce, designed by Google, is widely used as the most popular distributed programming model in cloud environments. Hadoop, an open-source implementation of MapReduce, is a data management framework on large clusters of commodity machines for handling data-intensive applications. Many famous enterprises, including Facebook, Twitter, and Adobe, have been using Hadoop for their data-intensive processing needs. Task stragglers in MapReduce jobs dramatically impede job execution on massive datasets in cloud computing systems. This impedance is due to the uneven distribution of input data and computation load among cluster nodes, heterogeneous data nodes, data skew in the reduce phase, resource contention situations, and network configurations. All of these can cause delays, failures, and violations of the job completion time. One of the key issues that can significantly affect the performance of cloud computing is the balancing of the computation load among cluster nodes. Replica placement in the Hadoop distributed file system plays a significant role in data availability and the balanced utilization of clusters. Under the current replica placement policy (RPP) of the Hadoop distributed file system (HDFS), the replicas of data blocks cannot be evenly distributed across the cluster's nodes, so the current HDFS must rely on a load balancing utility to balance the distribution of replicas, which results in extra overhead in time and resources. This dissertation addresses the data load balancing problem and presents an innovative replica placement policy for HDFS that balances the data load among the cluster's nodes. The heterogeneity of cluster nodes exacerbates the issue of computational load balancing; therefore, another replica placement algorithm is proposed in this dissertation for heterogeneous cluster environments. The timing of identifying a straggler map task is very important for straggler mitigation in data-intensive cloud computing. To mitigate straggler map tasks, the Present Progress and Feedback based Speculative Execution (PFSE) algorithm is proposed in this dissertation. PFSE is a new straggler identification scheme that identifies straggler map tasks based on feedback information received from completed tasks, besides the progress of the currently running task. Straggler reduce tasks aggravate the violation of MapReduce job completion time and are typically the result of bad data partitioning during the reduce phase: the hash partitioner employed by Hadoop may cause intermediate data skew, which results in straggler reduce tasks. In this dissertation, a new partitioning scheme, named the Balanced Data Clusters Partitioner (BDCP), is proposed to mitigate straggler reduce tasks. BDCP is based on sampling of the input data and feedback information about the current processing task. BDCP can assist in straggler mitigation during the reduce phase and minimize the job completion time of MapReduce jobs. The results of extensive experiments corroborate that the algorithms and policies proposed in this dissertation can improve the performance of data-intensive applications running on cloud platforms.
- Date Issued
- 2019
- Identifier
- CFE0007818, ucf:52804
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007818
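BDCP itself is not specified in the abstract, but the underlying idea it names (deriving partition boundaries from a sample of intermediate keys rather than hashing, so reducers receive balanced shares) can be roughly sketched as follows; all parameters and the key distribution are invented for illustration.

```python
import random
from bisect import bisect_right

def boundaries_from_sample(sample, num_reducers):
    """Range-partition boundaries chosen from a key sample so each reducer
    receives a near-equal share, unlike a hash partitioner, which can
    overload one reducer when the key distribution is skewed."""
    s = sorted(sample)
    return [s[len(s) * i // num_reducers] for i in range(1, num_reducers)]

def reducer_for(key, bounds):
    return bisect_right(bounds, key)   # index of the reducer for this key

random.seed(0)
keys = [random.paretovariate(1.5) for _ in range(10_000)]   # skewed key space
bounds = boundaries_from_sample(random.sample(keys, 500), 4)
loads = [0] * 4
for k in keys:
    loads[reducer_for(k, bounds)] += 1
print(loads)   # shares stay close to 2500 each despite the skew
```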
- Title
- LIGHTING DESIGN FOR FROM SUN TO SUN: A DAY IN A RAILROAD CAMP.
- Creator
- Szewczyk, Nathan, Scott, Bert, University of Central Florida
- Abstract / Description
- In this thesis, the notion of a theoretical approach to the beginning stages of designing lighting for a theatrical production is discussed. The topic being researched is how a theoretical approach to entering the design process will enhance the final lighting design. The target audience for this study is theatrical lighting designers. A theoretical approach, in this case to the beginning of the design process, can be described as utilizing current dramatic theories to develop a better understanding of the design of this production. In order to better understand this topic, one would need to know how the process of lighting design is typically carried out and where the theoretical approach is implemented. An issue with this approach is that the short period allowed for the design process does not leave sufficient time to utilize a theoretical approach in a real-world setting. A way of determining whether this process is effective is through personal self-review. Journaling and discussion with my advisor for this production will be the method of data collection. The method of validation will be a self-reflection at the end of the final performance. An issue with the collection process is its reliance on personal opinions, including the author's. There are no ethical issues relating to this study. When applied, a theoretical approach to the design process will enhance the quality of the final lighting design by allowing the designer to be better prepared for a specific scene that he or she is struggling with.
- Date Issued
- 2011
- Identifier
- CFE0003609, ucf:48874
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003609
- Title
- An Integrated Design for Six Sigma-Based Framework To Align Strategy, New Process Development, and Customer Requirements In The Airlines Industry.
- Creator
- Alghamdi, Mohammed, Elshennawy, Ahmad, Rabelo, Luis, Lee, Gene, Ahmad, Ali, University of Central Florida
- Abstract / Description
- When organizations create new strategy maps, key new processes are often identified. This is important for organizations to stay competitive in the global marketplace. This document describes the development, implementation, and validation of a framework that properly aligns and links an organization's strategy and new process development. The proposed framework integrates the Balanced Scorecard management system (BSC) and the Design for Six Sigma (DFSS) methodology, leveraging their strengths, overcoming their weaknesses, and identifying lessons learned to help bridge the gap between strategy development and execution. The critical-to-quality conceptual model is used as an integrative component of the framework. A literature search revealed little or no prior research into the development of similar frameworks. To demonstrate and evaluate the effectiveness of the framework in a real-world environment, a case study was carried out and implemented successfully. As the case study progressed, cycle time was estimated as a performance indicator and showed progression towards the targeted strategic objective. The developed framework helps decision-makers seamlessly transition from a strategic position to process development, linking strategic objectives to critical-to-quality features. This comprehensive framework can help move organizations from where they currently are to where they want to be, laying the groundwork needed for customer satisfaction and breakthrough performance.
- Date Issued
- 2016
- Identifier
- CFE0006246, ucf:51079
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006246
- Title
- Jail Mental Health Innovations: Factors Influencing Mental Health Services Innovations for Jails.
- Creator
- Clayton, Orville, Wan, Thomas, Anderson, Kim, Winton, Mark, Zhang, Ning, University of Central Florida
- Abstract / Description
- The U.S. is recognized for uniquely high incarceration rates. Over recent decades, there has been a concurrent dramatic increase of jail detainees with mental disorders. Provision of adequate mental health services for jail inmates is constitutionally mandated, and has legal, ethical, quality-of-care, and fiscal implications for jails, families, communities, and detainees. Significant variation exists in the provision of mental health services across jails, and increased understanding of the factors that influence the adoption of such services may help guide jails to implement beneficial services and ensure that such services reflect quality standards. This study used a mixed-methods strategy to examine the influence of theoretically determined variables on the adoption of jail mental health services, and the quality assessment of such services. Data were gathered by survey instrumentation, secondary data review, and in-depth interviews with jail leaders. The study found that isomorphism has a significant effect on the structural adequacy of jail mental health services, that innovation characteristics have a negligible relationship to structural adequacy and process integrity, that structural adequacy mediates the effects of isomorphism on process integrity, and that jail size has a significant effect on structural adequacy. This study advances the knowledge base in its specification of the roles of internal, external, and demographic factors in the adoption of jail mental health services, and in the testing and application of Donabedian's healthcare model to assess the quality of such services.
- Date Issued
- 2017
- Identifier
- CFE0006866, ucf:51755
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006866
- Title
- Investigation of infrared thermography for subsurface damage detection of concrete structures.
- Creator
- Hiasa, Shuhei, Catbas, Necati, Tatari, Omer, Nam, Boo Hyun, Zaurin, Ricardo, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
- Deterioration of road infrastructure arises from aging and various other factors; consequently, inspection and maintenance have been a serious worldwide problem. In the United States, degradation of concrete bridge decks is a widespread problem among bridge components. In order to prevent the impending degradation of bridges, periodic inspection and proper maintenance are indispensable. However, the transportation system faces unprecedented challenges because the number of aging bridges is increasing under limited resources, in terms of both budget and personnel. Therefore, innovative technologies and processes that enable bridge owners to inspect and evaluate bridge conditions more effectively and efficiently, with fewer human and monetary resources, are desired. Traditionally, qualified engineers and inspectors implemented hammer sounding and/or chain drag, together with visual inspection, for concrete bridge deck evaluations, but these methods require substantial field labor, experience, and lane closures. Under these circumstances, Non-Destructive Evaluation (NDE) techniques such as computer vision-based crack detection, impact echo (IE), ground-penetrating radar (GPR) and infrared thermography (IRT) have been developed to inspect and monitor aging and deteriorating structures rapidly and effectively. However, no single method can detect all kinds of defects in concrete structures as well as the traditional combination of visual and sounding inspections; hence, there is still no internationally standard NDE method for concrete bridges, although significant progress has been made up to the present. This research presents the potential of IRT, combined with computer vision-based technology, to reduce the burden of bridge inspections, especially for bridge decks, in place of traditional chain drag and hammer sounding methods. However, there were still several challenges and uncertainties in using IRT for bridge inspections. This study revealed those challenges and uncertainties, and explored their solutions, along with proper methods and ideal conditions for applying IRT, in order to enhance the usability, reliability and accuracy of IRT for concrete bridge inspections. Throughout the study, detailed investigations of IRT are presented. First, three different types of infrared (IR) cameras were compared under active IRT conditions in the laboratory to examine the effect of photography angle on IRT, along with the specifications of the cameras. The results showed that when IR images are taken from an angle, each camera shows different temperature readings. However, since each IR camera can capture temperature differences between sound and delaminated areas, each has the potential to detect delaminated areas under a given condition, in spite of camera specifications, even when utilized from an angle. Furthermore, a more objective data analysis method than simply comparing IR images was explored to assess IR data. Second, coupled structural mechanics and heat transfer models of the concrete blocks with artificial delaminations used in a field test were developed and analyzed to explore sensitive parameters for the effective utilization of IRT. After these finite element (FE) models were validated, critical parameters and factors of delamination detectability, such as the size of the delamination (area, thickness and volume), ambient temperature and sun loading condition (different seasons), and the depth of the delamination from the surface, were explored.
This study shows that the area of a delamination is much more influential in the detectability of IRT than its thickness or volume. It was also found that there is no significant difference depending on the season in which IRT is employed. FE model simulations were then used to obtain the temperature differences between sound and delaminated areas in order to process the IR data. Using this method, delaminated areas of concrete slabs could be detected more objectively than by judging the color contrast of IR images. However, it was also found that the boundary condition affects the accuracy of this method, and that the effect varies depending on the data collection time. Even though there are some limitations, the integrated use of FE model simulation with IRT showed that the combination can reduce other pre-tests on bridges, reduce the need for access to the bridge, and help automate the IRT data analysis process for concrete bridge deck inspections. After that, the favorable time windows for concrete bridge deck inspection by IRT were explored through field experiments and FE model simulations. Based on the numerical simulations and the experimental IRT results, higher temperature differences were observed around noontime and nighttime, although IRT is affected by sun loading during the daytime heating cycle, resulting in possible misdetections. Furthermore, the numerical simulations show that the maximum effect occurs at night during the nighttime cooling cycle, and that the temperature difference decreases gradually from that time until a few hours after sunrise the next day. Thus, it can be concluded that nighttime application of IRT is the most suitable time window for bridge decks. Furthermore, three IR cameras with different specifications were compared to explore the factors affecting the utilization of IRT for subsurface damage detection in concrete structures, specifically when IRT is utilized for high-speed bridge deck inspection at normal driving speeds under field laboratory conditions. The results show that IRT can detect delaminations as deep as 2.54 cm from the concrete surface at any time period. This study revealed two important camera-specification factors for high-speed inspection by IRT: shorter integration time and higher pixel resolution. Finally, a real bridge was scanned by three different types of IR cameras, and the results were compared with other NDE technologies implemented by other researchers on the same bridge. When compared at fully documented locations with 8 concrete cores, a high-end IR camera with a cooled detector distinguished sound and delaminated areas accurately. Furthermore, the locations and shapes of delaminations indicated by the three IR cameras were compared to other NDE methods from past research, and the results revealed that the cooled camera showed almost identical shapes to other NDE methods, including chain drag. It should be noted that the data were collected at normal driving speed without any lane closures, making it a more practical and faster method than the other NDE technologies. It was also found that the factor most likely to affect high-speed application is the integration time of the IR camera, in line with the conclusion of the field laboratory test. The notable contribution of this study to the improvement of IRT is that it revealed the preferable conditions for IRT, specifically for high-speed scanning of concrete bridge decks.
This study shows that IRT implemented at normal driving speeds has high potential to evaluate concrete bridge decks accurately, without any lane closures, and much more quickly than other NDE methods, if a cooled camera with higher pixel resolution is used during nighttime. Despite some limitations of IRT, the data collection speed is a great advantage for periodic bridge inspections compared to other NDE methods. Moreover, there is a high possibility of drastically reducing inspection time, labor and budget if high-speed bridge deck scanning by the combination of IRT and computer vision-based technology becomes a standard bridge deck inspection method. Therefore, the author recommends the combined application of this high-speed scanning approach and other NDE methods to optimize bridge deck inspections.
- Date Issued
- 2016
- Identifier
- CFE0006323, ucf:51575
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006323
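The abstract's FE simulations are far richer than anything shown here, but a toy 1-D finite-difference cooling model (all material constants are assumed, not the dissertation's) captures why a shallow air-filled delamination produces the surface temperature contrast that IRT detects at night.

```python
import numpy as np

L, nx, dt = 0.20, 80, 1.0            # slab depth (m), grid nodes, time step (s)
dx = L / nx
k = np.full(nx, 1.8)                 # concrete conductivity (W/m K), assumed
rc = 2.0e6                           # volumetric heat capacity (J/m^3 K), assumed

def surface_temp_after_cooling(delaminated, hours=6, T_air=10.0, h=10.0):
    kk, T = k.copy(), np.full(nx, 25.0)        # slab starts uniformly warm
    if delaminated:
        kk[int(0.0254 / dx)] = 0.026           # air-gap node at 2.54 cm depth
    for _ in range(int(hours * 3600 / dt)):
        ki = 2 * kk[:-1] * kk[1:] / (kk[:-1] + kk[1:])   # interface conductivity
        q = ki * (T[1:] - T[:-1]) / dx                   # internal heat fluxes
        T[0] += dt / (rc * dx) * (q[0] + h * (T_air - T[0]))  # surface loses heat to night air
        T[1:-1] += dt / (rc * dx) * (q[1:] - q[:-1])
        # deepest node is left at 25 C as a far-field ground temperature
    return T[0]

dT = surface_temp_after_cooling(False) - surface_temp_after_cooling(True)
print(f"sound-minus-delaminated surface contrast: {dT:.2f} K")  # positive: gap cools faster
```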
- Title
- A Process Evaluation of a Family Involvement Program at a Title I Elementary School.
- Creator
- Moody, Maria, Lue, Martha, Lambie, Glenn, Little, Mary, Swan, Bonnie, University of Central Florida
- Abstract / Description
- Parental or family involvement in student academics has been an ongoing topic for researchers. There is a need for studies of parental involvement program implementation in order to determine whether school, family, and community partnership programs have an impact on student academics. For this study, a process evaluation was conducted on a parental or family involvement program newly developed and implemented at a Title I elementary school in an urban setting. The purpose of this mixed-methods process evaluation was to (a) document how the program was implemented, (b) examine the progress toward meeting its intended outcomes, and (c) use the findings to make recommendations to drive improvement. The program's logic model was used to examine the program's intended short-term outcomes, including increasing parental involvement and knowledge in regard to the school's reading, mathematics, and science curricula, as well as increasing knowledge of home strategies for student academic support. Student achievement impacts were also examined. Quantitative data collection included program participant survey data and participants' student achievement data for reading and mathematics. Document analysis of the program's artifacts allowed for a qualitative analysis for the evaluation. Findings indicated the program was making progress in increasing parents' knowledge about the reading curriculum, but not for mathematics and science. There was also an increase in parents' knowledge of home strategies and an improvement in parental program attendance rates.
- Date Issued
- 2017
- Identifier
- CFE0006768, ucf:51857
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006768
- Title
- Facilitating Information Retrieval in Social Media User Interfaces.
- Creator
- Costello, Anthony, Tang, Yubo, Fiore, Stephen, Goldiez, Brian, University of Central Florida
- Abstract / Description
- As the amount of computer-mediated information (e.g., emails, documents, multimedia) we need to process grows, our need to rapidly sort, organize and store electronic information likewise increases. In order to store information effectively, we must find ways to sort through it and organize it in a manner that facilitates efficient retrieval. The instantaneous and emergent nature of communications across networks like Twitter makes them suitable for discussing events (e.g., natural disasters) that are amorphous and prone to rapid change. It can be difficult for an individual human to filter through and organize the large amounts of information that can pass through these types of social networks when events are unfolding rapidly. A common feature of social networks is the images (e.g., human faces, inanimate objects) that are often used by those who send messages across these networks. Humans have a particularly strong ability to recognize and differentiate between human faces, and this effect may also extend to recalling information associated with each face. This study investigated the difference between human face images, non-face images and alphanumeric labels as retrieval cues under different levels of task load. Participants were required to recall key pieces of event information as they emerged from a Twitter-style message feed during a simulated natural disaster. A counter-balanced within-subjects design was used for this experiment. Participants were exposed to low, medium and high Task Load while responding to five different types of recall cues: (1) Nickname, (2) Non-Face, (3) Non-Face & Nickname, (4) Face and (5) Face & Nickname. The task required participants to organize information regarding emergencies (e.g., car accidents) from a Twitter-style message feed. The messages reported various events, such as fires occurring around a fictional city. Each message was associated with a different recall cue type, depending on the experimental condition. Following the task, participants were asked to recall the information associated with one of the cues they worked with during the task. Results indicate that under medium and high Task Load, both Non-Face and Face retrieval cues increased recall performance over Nickname alone, with Non-Faces resulting in the highest mean recall scores. When comparing medium to high Task Load, Face & Nickname and Non-Face significantly outperformed the Face condition, and performance in Non-Face & Nickname was significantly better than in Face & Nickname. No significant difference was found between Non-Face and Non-Face & Nickname. Subjective Task Load scores indicate that participants experienced lower mental workload when using Non-Face cues than when using Nickname or Face cues. Generally, these results indicate that under medium and high Task Load levels, images outperformed alphanumeric nicknames, Non-Face images outperformed Face images, and combining alphanumeric nicknames with images may have offered a significant performance advantage only when the image was that of a face. Both theoretical and practical design implications follow from these findings.
- Date Issued
- 2014
- Identifier
- CFE0005318, ucf:50524
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005318
- Title
- Ultra-wideband Spread Spectrum Communications using Software Defined Radio and Surface Acoustic Wave Correlators.
- Creator
- Gallagher, Daniel, Malocha, Donald, Delfyett, Peter, Richie, Samuel, Weeks, Arthur, Youngquist, Robert, University of Central Florida
- Abstract / Description
- Ultra-wideband (UWB) communication technology offers inherent advantages such as the ability to coexist with previously allocated Federal Communications Commission (FCC) frequencies, simple transceiver architecture, and high performance in noisy environments. Spread spectrum techniques offer additional improvements beyond conventional pulse-based UWB communications. This dissertation implements a multiple-access UWB communication system using a surface acoustic wave (SAW) correlator receiver with orthogonal frequency coding and a software defined radio (SDR) base station transmitter. Orthogonal frequency coding (OFC) and pseudorandom noise (PN) coding provide a means of spreading the UWB data. The use of OFC increases the correlator processing gain (PG) beyond that of code division multiple access (CDMA), providing added code diversity, improved pulse ambiguity, and superior performance in noisy environments. Use of SAW correlators reduces the complexity and power requirements of the receiver architecture by eliminating many of the components needed, and by reducing the signal processing and timing requirements necessary for digital matched filtering of the complex spreading signal. The OFC receiver correlator code sequence is hard-coded in the device due to the physical SAW implementation. The use of modern SDR forms a dynamic base station architecture which is able to programmatically generate a digitally modulated transmit signal. An embedded Xilinx Zynq(TM) system on chip (SoC) was used to implement the SDR system, taking advantage of recent advances in digital-to-analog converter (DAC) sampling rates. SDR waveform samples are generated at baseband as in-phase and quadrature (I & Q) pairs and upconverted to a 491.52 MHz operational frequency. The development of the OFC SAW correlator ultimately used in the receiver is presented, along with a variety of advanced SAW correlator device embodiments. Each SAW correlator device was fabricated on lithium niobate (LiNbO3) with fractional bandwidths in excess of 20%. The SAW correlator device presented for use in the system was implemented with a center frequency of 491.52 MHz, matching the SDR transmit frequency. Parasitic electromagnetic feedthrough becomes problematic in the SAW correlator after packaging and fixturing, due to the wide bandwidths and high operational frequency. Techniques for the reduction of parasitic feedthrough are discussed, with before-and-after results showing approximately a 10:1 improvement. Correlation and demodulation results are presented using the SAW correlator receiver operating in a UWB communication system. Bipolar phase shift keying (BPSK) techniques demonstrate OFC modulation and demodulation for a test binary bit sequence. Matched OFC code reception is compared to a mismatched, or cross-correlated, sequence after correlation and demodulation. Finally, the signal-to-noise power ratio (SNR) performance results for the SAW correlator under corruption by a wideband noise source are presented.
- Date Issued
- 2015
- Identifier
- CFE0005794, ucf:50054
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005794
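The SAW device physics cannot be reproduced in software, but the correlation processing gain the abstract leans on can be illustrated with a toy digital matched filter; the 7-chip code and noise level below are invented, and the dissertation's OFC chip set differs.

```python
import numpy as np

rng = np.random.default_rng(1)
code = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)   # PN-like spreading code
bits = np.array([1, -1, 1, 1, -1], dtype=float)

tx = np.concatenate([b * code for b in bits])    # spread: one code per data bit
rx = tx + rng.normal(0.0, 1.0, tx.size)          # chip-level SNR near 0 dB

corr = np.convolve(rx, code[::-1], mode="valid")  # matched filter (correlator)
decisions = np.sign(corr[::code.size])            # sample the correlation peaks
print("sent:", bits, "recovered:", decisions)
# The peak SNR improves by the processing gain, 10*log10(7) ~ 8.5 dB here.
```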
- Title
- Treatment Matching in PTSD: A Confirmatory Factor Analysis Based On Therapeutic Mechanisms of Action.
- Creator
- Trachik, Benjamin, Bowers, Clint, Beidel, Deborah, Jentsch, Florian, University of Central Florida
- Abstract / Description
- The current study takes an initial step toward deriving a method for empirically based, theory-driven treatment matching in a military population suffering from PTSD. Along with the more overt symptoms of PTSD (e.g., persistent hyperarousal), secondary cognitive symptoms have been shown to be significantly associated with avoidance and intrusive symptoms, as well as to contribute to functional impairment. Based on the factor analytic and treatment literature for PTSD, it appears that two central mechanisms associated with beneficial therapeutic change underlie both CPT and PE treatments (i.e., habituation and changes in cognitions). Additionally, different traumatic events and peritraumatic responses may be associated with unique symptom profiles and may necessitate targeted treatment. The present study proposes a novel approach to treatment matching based on the factor structure of PTSD and the underlying mechanisms of treatment response. More broadly, this paper provides evidence for a broader understanding of peritraumatic responses and the potential implications of these responses for symptom profiles and illness trajectories.
- Date Issued
- 2014
- Identifier
- CFE0005727, ucf:50126
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005727
- Title
- LIQUID CRYSTAL OPTICS FOR COMMUNICATIONS, SIGNAL PROCESSING AND 3-D MICROSCOPIC IMAGING.
- Creator
- Khan, Sajjad, Riza, Nabeel, University of Central Florida
- Abstract / Description
- This dissertation proposes, studies and experimentally demonstrates novel liquid crystal (LC) optics to solve challenging problems in RF and photonic signal processing, free-space and fiber-optic communications, and microscopic imaging. These include free-space optical scanners for military and optical wireless applications, variable fiber-optic attenuators for optical communications, photonic control techniques for phased array antennas and radar, and 3-D microscopic imaging. At the heart of the applications demonstrated in this thesis are LC devices that are non-pixelated and can be controlled either electrically or optically. Instead of the typical pixel-by-pixel control that is customary in LC devices, the phase profile across the aperture of these novel LC devices is varied through the use of high-impedance layers. Due to the presence of the high-impedance layer, a voltage gradient forms across the aperture of such a device, which results in a phase gradient across the LC layer, which in turn is accumulated by the optical beam traversing the device. The geometry of the electrical contacts used to apply the external voltage defines the nature of the phase gradient imposed on the optical beam. In order to steer a laser beam in one angular dimension, straight-line electrical contacts are used to form a one-dimensional phase gradient, while an annular electrical contact results in a circularly symmetric phase profile across the optical beam, making it suitable for focusing. The geometry of the electrical contacts alone is not sufficient to form the linear and quadratic phase profiles required to either deflect or focus an optical beam. Clever use is made of the phase response of a typical nematic liquid crystal (NLC): the linear response region is used for angular beam deflection, while the high-voltage quadratic response region is used for focusing. Employing an NLC deflector, a device that uses the linear angular deflection, laser beam steering is demonstrated in two orthogonal dimensions, and an NLC lens is used to address the third dimension to complete a three-dimensional (3-D) scanner. Such an NLC deflector was then used in a variable optical attenuator (VOA), whereby a laser beam coupled between two identical single-mode fibers (SMF) was misaligned away from the output fiber, causing the intensity of the output coupled light to decrease as a function of the angular deflection. Since the angular deflection is electrically controlled, the VOA operation is fairly simple and repeatable. An extension of this VOA for wavelength-tunable operation is also shown in this dissertation. An LC spatial light modulator (SLM) that uses a photosensitive high-impedance electrode, whose impedance can be varied by controlling the light intensity incident on it, is used in a control system for a phased array antenna. Phase is controlled on the Write side of the SLM by controlling the intensity of the Write laser beam, which is then accessed by the Read beam from the opposite side of this reflective SLM. Thus, the phase of the Read beam is varied by controlling the intensity of the Write beam. A variable fiber-optic delay line is demonstrated in the thesis which uses wavelength-sensitive and wavelength-insensitive optics to get both analog and digital delays. It uses a chirped fiber Bragg grating (FBG) and a 1xN optical switch to achieve multiple time delays.
The switch can be implemented using the 3-D optical scanner mentioned earlier. A technique is presented for ultra-low-loss laser communication that uses a combination of strong and weak thin-lens optics. As opposed to conventional laser communication systems, the Gaussian laser beam is prevented from diverging at the receiving station by using a weak thin lens that places the transmitted beam waist mid-way along a symmetrical transmitter-receiver link, thus saving prime optical power. LC device technology forms an excellent basis for realizing such a large-aperture weak lens. Using a 1-D array of LC deflectors, a broadband optical add-drop filter (OADF) is proposed for dense wavelength division multiplexing (DWDM) applications. By binary control of the drive signal to the individual LC deflectors in the array, any optical channel can be selectively dropped and added. For demonstration purposes, microelectromechanical systems (MEMS) digital micromirrors have been used to implement the OADF. Several key systems issues, such as insertion loss, polarization dependent loss, wavelength resolution and response time, are analyzed in detail for comparison with the LC deflector approach. A no-moving-parts axial scanning confocal microscope (ASCM) system is designed and demonstrated using a combination of a large-diameter LC lens and a classical microscope objective lens. By electrically controlling the 5 mm diameter LC lens, the 633 nm wavelength focal spot is moved continuously over a 48 µm range, with a measured 3-dB axial resolution of 3.1 µm using a 0.65 numerical aperture (NA) micro-objective lens. The ASCM is successfully used to image an indium phosphide twin square optical waveguide sample with a 10.2 µm waveguide pitch and 2.3 µm height and width. Using fine analog electrical control of the LC lens, a super-fine sub-wavelength axial resolution of 270 nm is demonstrated. The proposed ASCM can be useful in various precision three-dimensional imaging and profiling applications.
- Date Issued
- 2005
- Identifier
- CFE0000750, ucf:46596
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000750
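The mid-link waist placement described in the abstract above follows from standard Gaussian-beam spreading; here is a quick worked example, where the wavelength, waist size, and link length are assumed for illustration.

```python
import math

lam, w0, L = 1.55e-6, 0.02, 2000.0       # wavelength (m), waist (m), link (m)
z_R = math.pi * w0**2 / lam              # Rayleigh range (~811 m here)

def spot_radius(z):
    """1/e^2 beam radius a distance z from the waist: w0*sqrt(1+(z/zR)^2)."""
    return w0 * math.sqrt(1.0 + (z / z_R) ** 2)

print(f"waist at transmitter: {spot_radius(L) * 100:.1f} cm at the receiver")
print(f"waist at mid-link:    {spot_radius(L / 2) * 100:.1f} cm at the receiver")
# Placing the waist mid-way keeps the received spot markedly smaller,
# which is the prime-power saving the weak thin lens buys.
```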
- Title
- Secondary World: The Limits of Ludonarrative.
- Creator
- Dannelly, David, Adams, JoAnne, Price, Mark, Poindexter, Carla, Kovach, Keith, University of Central Florida
- Abstract / Description
- Secondary World: The Limits of Ludonarrative is a series of short narrative animations that form a theoretical treatise on the limitations of western storytelling in video games. The series covers specific topics relating to film theory, game design and art theory, specifically those associated with Gilles Deleuze, Jean Baudrillard, Jay Bolter, Richard Grusin and Andy Clark. The use of imagery, editing and presentation is intended to physically represent an extension of myself and my thinking process, united through the common thread of my personal feelings, thoughts and experiences in the digital age.
- Date Issued
- 2014
- Identifier
- CFE0005155, ucf:50704
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005155
- Title
- Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
- Creator
- Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
- Abstract / Description
- Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW utilizes genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW, as compared with traditional approaches, has been limited to relatively compact circuits, because the complexity of the genetic algorithm increases with circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of relatively large benchmark circuits as compared to related works. Additionally, Design Diversity (DD) techniques that aid such evolutionary refurbishment are proposed, and the efficacy of various DD techniques is quantified and evaluated.
Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs, for ensuring operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations owing to decreased feature sizes of electronic devices. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. At design time, vulnerable portions of the circuit, identified using conventional Electronic Design Automation tools, are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions, optimizing performance and energy consumption. Primarily, SREL is able to mitigate the timing degradation caused by transistor aging effects in sub-micron devices by using power-gating to reduce the stress induced on active elements. As a result, fewer guardbands need to be included to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime.
The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, this comes at the cost of increased reliability concerns, such as increased performance variations and soft errors, which in turn raise the resiliency requirements for HPC applications in terms of ensuring functionality within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems. A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault propagation models are derived for each HPC application, which can be used to estimate the number of corrupted memory locations at runtime. Additionally, the tradeoffs between performance and vulnerability, and the causal relations between compiler optimization and application vulnerability, are investigated.
- Date Issued
- 2015
- Identifier
- CFE0006206, ucf:52889
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006206
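NDER itself is far more involved, but the evolvable-hardware loop the abstract describes (a genetic algorithm searching for a logic circuit that satisfies an objective function) can be caricatured in a few lines; the 3-input parity target and GA settings below are invented for illustration.

```python
import random

TARGET = [0, 1, 1, 0, 1, 0, 0, 1]   # truth table of 3-input XOR (parity)

def fitness(genome):                 # number of matching truth-table rows
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, mutation=0.1, seed=2):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    gen = 0
    while max(map(fitness, pop)) < len(TARGET):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        pop = [parents[0]] + [                         # elitism: keep the best
            [bit ^ (rng.random() < mutation)           # bit-flip mutation
             for bit in rng.choice(parents)]
            for _ in range(pop_size - 1)
        ]
        gen += 1
    return gen

print("generations until the faulty gate is refurbished:", evolve())
```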