Current Search: University of Central Florida
Pages
- Title
- Design Disjunction for Resilient Reconfigurable Hardware.
- Creator
-
Alzahrani, Ahmad, DeMara, Ronald, Yuan, Jiann-Shiun, Lin, Mingjie, Wang, Jun, Turgut, Damla, University of Central Florida
- Abstract / Description
-
Contemporary reconfigurable hardware devices have the capability to achieve the high performance, power efficiency, and adaptability required to meet a wide range of design goals. With scaling challenges facing current complementary metal-oxide-semiconductor (CMOS) technology, new concepts and methodologies supporting efficient adaptation to handle reliability issues are becoming increasingly prominent. Reconfigurable hardware devices, with their ability to realize self-organization features, are expected to play a key role in designing future dependable hardware architectures. However, the exponential increase in density and complexity of current commercial SRAM-based field-programmable gate arrays (FPGAs) has escalated the overhead associated with dynamic runtime design adaptation. Traditionally, static modular redundancy techniques are considered to surmount this limitation; however, they can incur substantial overheads in both area and power requirements. To achieve a better trade-off among performance, area, power, and reliability, this research proposes design-time approaches that enable fine selection of the redundancy level based on target reliability goals and autonomous adaptation to runtime demands. Toward this goal, three studies were conducted. First, a graph- and set-theoretic approach, named Hypergraph-Cover Diversity (HCD), is introduced as a preemptive design technique to shift the dominant costs of resiliency to design time. In particular, union-free hypergraphs are exploited to partition the pool of reconfigurable resources into highly separable subsets, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform has demonstrated the potential of the proposed fault-tolerance (FT) method to achieve 37.5% area saving and up to 66% reduction in power consumption compared to the frequently used triple modular redundancy (TMR) scheme, while providing superior fault tolerance. Second, Design Disjunction, based on non-adaptive group testing, is developed to realize a low-overhead fault-tolerant system capable of self-testing and self-recovery using runtime partial reconfiguration. Reconfiguration is guided by resource grouping procedures which employ non-linear measurements given by the constructive property of f-disjunctness to extend runtime resilience to a large fault space and realize a favorable range of tradeoffs. Disjunct designs are created using the mosaic convergence algorithm, developed such that at least one configuration in the library evades any occurrence of up to d resource faults, where d is lower-bounded by f. Experimental results for a set of MCNC and ISCAS benchmarks have demonstrated f-diagnosability at the individual slice level with an average isolation resolution of 96.4% (94.4%) for f=1 (f=2), while incurring an average critical path delay impact of only 1.49% and an area cost roughly comparable to conventional 2-MR approaches. Finally, the proposed Design Disjunction method is evaluated as a design-time method to improve timing yield in the presence of large random within-die (WID) process variations for applications with a moderately high production capacity.
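As an illustration of the f-disjunctness property underpinning Design Disjunction, the minimal Python sketch below checks whether a binary test matrix is d-disjunct. The helper name and toy matrix are hypothetical; the dissertation's mosaic convergence algorithm for constructing such designs is not reproduced here.

```python
from itertools import combinations

def is_d_disjunct(matrix, d):
    """Return True if the binary matrix (rows = pooled tests or
    configurations, columns = resources) is d-disjunct: no column's
    support is covered by the union of any d other columns. This is
    the property that lets non-adaptive group testing identify up to
    d faulty resources."""
    n_cols = len(matrix[0])
    supports = [frozenset(r for r, row in enumerate(matrix) if row[c])
                for c in range(n_cols)]
    for c in range(n_cols):
        others = [j for j in range(n_cols) if j != c]
        for group in combinations(others, d):
            if supports[c] <= set().union(*(supports[j] for j in group)):
                return False
    return True

# Toy check: the 3x3 identity matrix is 1-disjunct.
print(is_d_disjunct([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 1))  # True
```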
- Date Issued
- 2015
- Identifier
- CFE0006250, ucf:51086
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006250
- Title
- Enhanced Hardware Security Using Charge-Based Emerging Device Technology.
- Creator
-
Bi, Yu, Yuan, Jiann-Shiun, Jin, Yier, DeMara, Ronald, Lin, Mingjie, Chow, Lee, University of Central Florida
- Abstract / Description
-
The emergence of hardware Trojans has largely reshaped the traditional view that the hardware layer can be blindly trusted. Hardware Trojans, which are often in the form of maliciously inserted circuitry, may impact the original design through data leakage or circuit malfunction. Hardware counterfeiting and IP piracy are two further serious issues, costing the US economy more than $200 billion annually. A large amount of research and experimentation has been carried out on the design of security primitives based on the currently prevailing CMOS technology. However, the security provided by these primitives comes at the cost of large overheads, mostly in terms of area and power consumption. The development of emerging technologies provides hardware security researchers with opportunities to utilize some of the otherwise unusable properties of emerging technologies in security applications. In this dissertation, we include the security consideration in the overall performance measurements to fully compare the emerging devices with CMOS technology. The first approach is to leverage two emerging devices (Silicon NanoWire and Graphene SymFET) for hardware security applications. Experimental results indicate that emerging-device-based solutions can provide high-level circuit protection with relatively lower performance overhead compared to their conventional CMOS counterparts. The second topic is the construction of an energy-efficient block cipher, resilient to differential power analysis (DPA), with ultra-low-power Tunnel FETs. Current-mode logic is adopted as a circuit-level countermeasure against DPA attacks, which are commonly mounted on cryptographic systems. The third investigation targets the potential security vulnerability posed by a foundry insider's attack; split manufacturing is adopted for the protection of radio-frequency (RF) circuit designs.
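To make the threat model concrete, the sketch below simulates a first-order DPA difference-of-means test on synthetic power traces. The trace model and leakage coefficient are invented for illustration and are unrelated to the dissertation's measured circuits; current-mode logic counters this class of attack by drawing a nearly data-independent supply current.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, leak = 2000, 0.5

# Synthetic leakage model: each trace's power sample depends weakly on a
# key-dependent intermediate bit, plus measurement noise.
secret_bit = rng.integers(0, 2, n_traces)
traces = leak * secret_bit + rng.normal(0.0, 1.0, n_traces)

# The attacker partitions traces by a hypothesized bit value and compares
# group means; a clear gap confirms the key hypothesis.
dom = traces[secret_bit == 1].mean() - traces[secret_bit == 0].mean()
print(f"difference of means = {dom:.2f} (close to {leak} for a correct guess)")
```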
- Date Issued
- 2016
- Identifier
- CFE0006264, ucf:51041
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006264
- Title
- Integrating the macroscopic and microscopic traffic safety analysis using hierarchical models.
- Creator
-
Cai, Qing, Abdel-Aty, Mohamed, Eluru, Naveen, Hasan, Samiul, Lee, JaeYoung, Yan, Xin, University of Central Florida
- Abstract / Description
-
Crash frequency analysis is a crucial tool for investigating traffic safety problems. With the objective of revealing hazardous factors that affect crash occurrence, crash frequency analysis has been undertaken at the macroscopic and microscopic levels. At the macroscopic level, crashes from a spatial aggregation (such as a traffic analysis zone or county) are considered to quantify the impacts of socioeconomic and demographic characteristics, transportation demand, and network attributes, so as to provide countermeasures from a planning perspective. On the other hand, microscopic crashes on a segment or intersection are analyzed to identify the influence of geometric design, lighting, and traffic flow characteristics, with the objective of offering engineering solutions (such as installing sidewalks and bike lanes, or adding lighting). Although numerous traffic safety studies have been conducted, critical limitations remain at both levels. In this dissertation, several methodologies are proposed to alleviate limitations in macro- and micro-level safety research; an innovative method is then suggested to analyze crashes at the two levels simultaneously. At the macro level, the viability of dual-state models (i.e., zero-inflated and hurdle models) was explored for traffic analysis zone (TAZ) based pedestrian and bicycle crash analysis. Additionally, spatial spillover effects were explored in the models by employing exogenous variables from neighboring zones. Both a conventional single-state model (i.e., negative binomial) and dual-state models, such as zero-inflated negative binomial and hurdle negative binomial models, with and without spatial effects, were developed. The model comparison results for pedestrian and bicycle crashes revealed that the models considering observed spatial effects perform better than those that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ, as well as neighboring TAZs, on pedestrian and bicycle crash frequency. Then, the modifiable areal unit problem for macro-level crash analysis was addressed. Macro-level traffic safety analysis has been undertaken at different spatial configurations, yet clear guidelines for selecting an appropriate zonal system for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling, considering census tracts (CTs), traffic analysis zones (TAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) were developed based on the three zonal systems, without and with consideration of spatial autocorrelation. The study proposed a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework. Specifically, the study region was partitioned into grids of various sizes, and the model prediction accuracy of the various macro models was considered within these grids.
These model comparison results for all crash types indicated that the models based on TADs consistently offer better performance compared to the others. In addition, the models considering spatial autocorrelation outperformed those that did not. Based on the modeling results, it is therefore recommended to adopt TADs for transportation safety planning. After determining the optimal traffic safety analysis zonal system, further analysis was conducted for non-motorist crashes (pedestrian and bicycle crashes). This study contributed to the literature on pedestrian and bicyclist safety by building on conventional count regression models to explore exogenous factors affecting pedestrian and bicyclist crashes at the macroscopic level. In traditional count models, the effects of exogenous factors on non-motorist crashes are investigated directly. However, vulnerable road users' crashes are collisions between vehicles and non-motorists; thus, the exogenous factors can affect non-motorist crashes through both the non-motorists and the vehicle drivers. To accommodate the potentially different impacts of exogenous factors, the non-motorist crash count was expressed as the product of the total crash count and the proportion of non-motorist crashes, and a joint model combining a negative binomial (NB) model and a logit model was formulated to handle the two parts, respectively. The formulated joint model was estimated using non-motorist crash data based on the Traffic Analysis Districts (TADs) in Florida. Meanwhile, the traditional NB model was also estimated and compared with the joint model. The results indicated that the joint model provides better data fit and could identify more significant variables. Subsequently, a novel joint screening method based on the proposed model was suggested to identify hot zones for non-motorist crashes. The identified hot zones were divided into three types: hot zones with a more dangerous driving environment only, hot zones with more hazardous walking and cycling conditions only, and hot zones with both. At the microscopic level, crash modeling analysis was conducted for road facilities. This study first explored the potential macro-level effects that are usually excluded or omitted in previous studies. A Bayesian hierarchical model was proposed to analyze crashes on segments and intersections incorporating macro-level data, which included both explanatory variables and the total crashes of all segments and intersections. In addition, a joint modeling structure was adopted to consider the potential spatial autocorrelation between segments and their connected intersections. The proposed model was compared with three other models: a model considering micro-level factors only, a hierarchical model considering macro-level effects with random terms only, and a hierarchical model considering macro-level effects with explanatory variables. The results indicated that models considering macro-level effects outperformed the model having micro-level factors only, which supports the idea of considering macro-level effects in micro-level crash analysis. The micro-level models were further enhanced by the proposed model. Finally, significant spatial correlation was found between segments and their adjacent intersections, supporting the employment of the joint modeling structure to analyze crashes at various types of road facilities.
In addition to the separate analyses at either the macro or micro level, an integrated approach is proposed to examine traffic safety problems at the two levels simultaneously. If conducted in the same study area, the macro- and micro-level crash analyses investigate the same crashes but aggregate them at different levels. Hence, the crash counts at the two levels should be correlated, and integrating macro- and micro-level crash frequency analyses in one modeling structure might better explain crash occurrence by capturing the effects of both macro- and micro-level factors. This study proposed a Bayesian integrated spatial crash frequency model, which links the crash counts of the macro and micro levels based on their spatial interaction. In addition, the proposed model considers the spatial autocorrelation of different types of road facilities (i.e., segments and intersections) at the micro level with a joint modeling structure. Two independent non-integrated models for the macro and micro levels were also estimated separately and compared with the integrated model. The results indicated that the integrated model provides better performance for estimating macro- and micro-level crash counts, which validates the concept of integrating the models for the two levels. The integrated model also provides more valuable insights about crash occurrence at the two levels by revealing both macro- and micro-level factors. Subsequently, a novel hotspot identification method was suggested, which enables the detection of hotspots at both the macro and micro levels with comprehensive information from the two levels. It is expected that the proposed integrated model and hotspot identification method can help practitioners implement more reasonable transportation safety plans and more effective engineering treatments to proactively enhance safety.
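As a minimal illustration of the dual-state modeling step, the sketch below fits a zero-inflated negative binomial to simulated zone-level counts with statsmodels. The covariates and coefficients are invented stand-ins for TAZ attributes, and the Bayesian spatial components of the dissertation's models are not reproduced.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 2000

# Hypothetical zone-level covariates (e.g., VMT, population density).
X = sm.add_constant(rng.normal(size=(n, 2)))

# Simulate a zero-inflated NB process: a logit gate produces structural
# zeros, an NB count process governs the remaining observations.
p_zero = 1 / (1 + np.exp(-(-1.0 + 0.8 * X[:, 1])))
mu = np.exp(0.5 + 0.6 * X[:, 2])
counts = rng.negative_binomial(2, 2 / (2 + mu))   # NB with mean mu
counts[rng.random(n) < p_zero] = 0                # structural zeros

zinb = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X[:, :2], p=2)
print(zinb.fit(maxiter=200, disp=False).summary())
```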
- Date Issued
- 2017
- Identifier
- CFE0006724, ucf:51891
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006724
- Title
- Explicit Feedback Within Game-Based Training: Examining the Influence of Source Modality Effects on Interaction.
- Creator
-
Goldberg, Benjamin, Bowers, Clint, Cannon-Bowers, Janis, Kincaid, John, McDaniel, Thomas, Sottilare, Robert, University of Central Florida
- Abstract / Description
-
This research aims to enhance Simulation-Based Training (SBT) applications to support training events in the absence of live instruction. The overarching purpose is to explore available tools for integrating intelligent tutoring communications into game-based learning platforms and to examine theory-based techniques for delivering explicit feedback in such environments. The primary tool influencing the design of this research was the Generalized Intelligent Framework for Tutoring (GIFT), a modular, domain-independent architecture that provides the tools and methods to author, deliver, and evaluate intelligent tutoring technologies within any training platform. Influenced by research surrounding Social Cognitive Theory and Cognitive Load Theory, the resulting experiment tested varying approaches for utilizing an Embodied Pedagogical Agent (EPA) to function as a tutor during interaction in a game-based environment. Conditions were authored to assess the tradeoffs among embedding an EPA directly in the game, embedding an EPA in GIFT's browser-based Tutor-User Interface (TUI), and using audio prompts alone with no social grounding. The resulting data support the application of an EPA embedded in GIFT's TUI to provide explicit feedback during a game-based learning event. Analyses revealed conditions with an EPA situated in the TUI to be as effective as embedding the agent directly in the game environment. This inference is based on evidence showing reliable differences across conditions on metrics of performance and on self-reported mental demand and feedback usefulness items. This research provides source modality tradeoffs linked to tactics for relaying training-relevant explicit information to a user based on real-time performance in a game.
- Date Issued
- 2013
- Identifier
- CFE0004850, ucf:49696
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004850
- Title
- Microscopic Assessment of Transportation Emissions on Limited Access Highways.
- Creator
-
Abou-Senna, Hatem, Radwan, Ahmed, Abdel-Aty, Mohamed, Al-Deek, Haitham, Cooper, Charles, Johnson, Mark, University of Central Florida
- Abstract / Description
-
On-road vehicles are a major source of transportation carbon dioxide (CO2) greenhouse gas emissions in all developed countries, and in many of the developing countries of the world. Similarly, several criteria air pollutants are associated with transportation, e.g., carbon monoxide (CO), nitrogen oxides (NOx), and particulate matter (PM). The need to accurately quantify transportation-related emissions from vehicles is essential. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy: integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The new United States Environmental Protection Agency (USEPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to develop new software, "VIMIS 1.0" (VISSIM/MOVES Integration Software), to facilitate the integration process. This research presents a microscopic examination of five key transportation parameters (traffic volume, speed, truck percentage, road grade, and temperature) on a 10-mile stretch of Interstate 4 (I-4) serving as a test bed prototype, an urban limited-access highway corridor in Orlando, Florida. The analysis was conducted utilizing VIMIS 1.0 and an advanced custom design technique, using the D-optimality and I-optimality criteria, to identify active factors and to ensure precision in estimating the regression coefficients as well as the response variable. The analysis of the experiment identified the optimal settings of the key factors and resulted in the development of Micro-TEM (Microscopic Transportation Emissions Meta-Model). The main purpose of Micro-TEM is to serve as a substitute model for predicting transportation emissions on limited-access highways to an acceptable degree of accuracy, in lieu of running simulations with a traffic model and integrating the results into an emissions model. Furthermore, significant emission rate reductions were observed on the modeled corridor, especially for speeds between 55 and 60 mph, while maintaining up to 80% and 90% of the freeway's capacity. However, vehicle activity characterization in terms of speed was shown to have a significant impact on the emission estimation approach. Four different approaches were further examined to capture the environmental impacts of vehicular operations on the modeled test bed prototype. First, at the most basic level, emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then, three more detailed approaches were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link driving schedules (LDS), and second-by-second operating mode distributions (OPMODE). This research analyzed how the various approaches affect predicted emissions of CO, NOx, PM, and CO2. The results demonstrated that obtaining accurate and comprehensive operating mode distributions on a second-by-second basis improves emission estimates. Specifically, emission rates were found to be highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, frequent braking/coasting, and idling.
Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Additionally, model applications and mitigation scenarios were examined on the modeled corridor to evaluate the environmental impacts in terms of vehicular emissions and, at the same time, validate the developed Micro-TEM model. Mitigation scenarios included the future implementation of managed lanes (ML) alongside the general use lanes (GUL) on the I-4 corridor, the currently implemented variable speed limits (VSL) scenario, as well as a hypothetical restricted truck lane (RTL) scenario. Results of the mitigation scenarios showed an overall speed improvement on the corridor, which resulted in an overall reduction in emissions and emission rates when compared to the existing condition (EX) scenario, and specifically on a link-by-link basis for the RTL scenario. The proposed emission rate estimation process can also be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
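To sketch what a second-by-second operating-mode characterization involves, the snippet below computes vehicle-specific power (VSP) for a toy 1 Hz speed trace using the generic light-duty form VSP = v(1.1a + 9.81*grade + 0.132) + 0.000302*v^3. The coefficients and the coarse bins are illustrative assumptions, not MOVES' actual operating-mode definitions or the dissertation's calibrated values.

```python
def vsp_kw_per_tonne(v, a, grade=0.0):
    """Vehicle-specific power for a typical light-duty vehicle:
    v in m/s, a in m/s^2, grade as a fraction."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v ** 3

# Classify a toy second-by-second speed trace (m/s) into coarse bins.
trace = [0.0, 2.0, 5.0, 9.0, 13.0, 13.0, 12.0, 8.0, 3.0, 0.0]
for t in range(1, len(trace)):
    v, a = trace[t], trace[t] - trace[t - 1]   # 1 Hz trace: a = dv/dt
    vsp = vsp_kw_per_tonne(v, a)
    mode = ("idle" if v < 0.5 else
            "braking/decel" if a < -0.9 else
            "low power" if vsp < 3 else "high power")
    print(f"t={t:2d}s v={v:5.1f} vsp={vsp:6.2f} kW/t -> {mode}")
```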
- Date Issued
- 2012
- Identifier
- CFE0004777, ucf:49788
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004777
- Title
- The Behavior of Cerium Oxide Nanoparticles in Polymer Electrolyte Membranes in Ex-Situ and In-Situ Fuel Cell Durability Tests.
- Creator
-
Pearman, Benjamin, Hampton, Michael, Blair, Richard, Clausen, Christian, Seal, Sudipta, Campiglia, Andres, Yestrebsky, Cherie, Mohajeri, Nahid, University of Central Florida
- Abstract / Description
-
Fuel cells are known for their high efficiency and have the potential to become a major technology for producing clean energy, especially when the fuel, e.g. hydrogen, is produced from renewable energy sources such as wind or solar. Currently, the two main obstacles to widespread commercialization are their high cost and the short operational lifetime of certain components. Polymer electrolyte membrane (PEM) fuel cells have been a focus of attention in recent years due to their use of hydrogen as a fuel, their comparatively low operating temperature, and their flexibility for use in both stationary and portable (automotive) applications. Perfluorosulfonic acid membranes are the leading ionomers for use in PEM hydrogen fuel cells. They combine essential qualities, such as high mechanical and thermal stability, with high proton conductivity. However, they are expensive and currently show insufficient chemical stability towards radicals formed during fuel cell operation, resulting in degradation that leads to premature failure. The incorporation of durability-improving additives into perfluorosulfonic acid membranes is discussed in this work. Cerium oxide (ceria) is a well-known radical scavenger that has been used in the biological and medical fields. It is able to quench radicals by facilely switching between its Ce(III) and Ce(IV) oxidation states. In this work, cerium oxide nanoparticles were added to perfluorosulfonic acid membranes and subjected to ex-situ and in-situ accelerated durability tests. The two ceria formulations, an in-house synthesized material and a commercially available material, were found to consist of crystalline particles of 2-5 nm and 20-150 nm size, respectively, which did not change size or shape when incorporated into the membranes. At higher temperature and relative humidity under flowing gas conditions, ceria in membranes was found to be reduced to its ionic form by virtue of the acidic environment. In ex-situ Fenton testing, the inclusion of ceria in membranes reduced the emission of fluoride, a strong indicator of degradation, by an order of magnitude with both liquid and gaseous hydrogen peroxide. In open-circuit voltage (OCV) hold fuel cell testing, ceria improved durability over several hundred hours, as measured by several parameters such as OCV decay rate, fluoride emission, and cell performance, and influenced the formation of the platinum band typically found after durability testing.
- Date Issued
- 2012
- Identifier
- CFE0004789, ucf:49731
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004789
- Title
- Analytical study of computer vision-based pavement crack quantification using machine learning techniques.
- Creator
-
Mokhtari, Soroush, Yun, Hae-Bum, Nam, Boo Hyun, Catbas, Necati, Shah, Mubarak, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
Image-based techniques are a promising non-destructive approach for road pavement condition evaluation. The main objective of this study is to extract, quantify, and evaluate important surface defects, such as cracks, using an automated computer vision-based system, in order to provide a better understanding of the pavement deterioration process. To achieve this objective, automated crack-recognition software was developed, employing a series of image processing algorithms for crack extraction, crack grouping, and crack detection. The bottom-hat morphological technique was used to remove the random background of pavement images and extract cracks selectively, based on their shapes, sizes, and intensities, using a relatively small number of user-defined parameters. A technical challenge with crack extraction algorithms, including the bottom-hat transform, is that extracted crack pixels are usually fragmented along crack paths. For de-fragmenting those crack pixels, a novel crack-grouping algorithm is proposed as an image segmentation method, called MorphLink-C. Statistical validation of this method using flexible pavement images indicated that MorphLink-C not only improves crack-detection accuracy but also reduces crack-detection time. Crack characterization was performed by analyzing image features of the extracted crack components. A comprehensive statistical analysis was conducted using filter feature subset selection (FSS) methods, including Fisher score, Gini index, information gain, ReliefF, mRMR, and FCBF, to understand the statistical characteristics of cracks in different deterioration stages. The statistical significance of crack features was ranked based on their relevancy and redundancy. The statistical method used in this study can be employed to avoid subjective crack rating based on human visual inspection. Moreover, the statistical information can be used as fundamental data to justify rehabilitation policies in pavement maintenance. Finally, the application of four classification algorithms, including Artificial Neural Network (ANN), Decision Tree (DT), k-Nearest Neighbors (kNN), and Adaptive Neuro-Fuzzy Inference System (ANFIS), is investigated for the crack detection framework. The classifiers were evaluated on the following five criteria: 1) prediction performance, 2) computation time, 3) stability of results for highly imbalanced datasets, in which the number of crack objects is significantly smaller than the number of non-crack objects, 4) stability of the classifiers' performance for pavements in different deterioration stages, and 5) interpretability of results and clarity of the procedure. The comparison results indicate the advantages of white-box classification methods for computer vision-based pavement evaluation. Although black-box methods, such as ANN, provide superior classification performance, white-box methods, such as ANFIS, provide useful information about the logic of classification and the effect of feature values on detection results. Such information can provide further insight for the image-based pavement crack detection application.
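A minimal sketch of the crack-extraction step, assuming OpenCV and a synthetic pavement patch: the bottom-hat (black-hat) transform responds to dark features narrower than the structuring element. The kernel size and threshold here are illustrative, not the study's tuned parameters.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Synthetic "pavement" patch: bright noisy background with one thin dark
# crack drawn across it.
img = np.clip(180 + 10 * rng.standard_normal((200, 200)), 0, 255).astype(np.uint8)
cv2.line(img, (10, 20), (190, 180), color=120, thickness=2)

# Bottom-hat (black-hat) = closing minus image: it highlights dark,
# crack-like features narrower than the structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
bottom_hat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)

# Threshold the response into candidate crack pixels; the fragmented
# pixels would then be grouped (the role MorphLink-C plays in the study).
_, crack_mask = cv2.threshold(bottom_hat, 30, 255, cv2.THRESH_BINARY)
print("candidate crack pixels:", int(np.count_nonzero(crack_mask)))
```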
- Date Issued
- 2015
- Identifier
- CFE0005671, ucf:50186
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005671
- Title
- Conditions Associated with Increased Risk of Fraud: A Model for Publicly Traded Restaurant Companies.
- Creator
-
Yost, Elizabeth, Croes, Robertico, Severt, Denver, Robinson, Edward, Murphy, Kevin, Semrad, Kelly, Jackson, Leonard, University of Central Florida
- Abstract / Description
-
The central focus of this dissertation is to understand the impact of the Sarbanes-Oxley Act and the factors that contribute to increased risk of fraud, in order to determine why fraud may occur despite the regulation imposed by the Sarbanes-Oxley Act. The main premise of the study tests the application of the fraud triangle framework constructs to publicly traded restaurant companies during the 2002-2014 period, using proxy variables defined through the literature. Essentially, the study seeks to identify the factors that may provide the optimal criteria to engage in fraudulent or opportunistic behavior. The fraud triangle theoretical framework comprises the constructs of pressure, opportunity, and rationalization, and has mostly been utilized by external auditors to assess the fraud risk of various companies. It has never been applied to the restaurant industry, and the proxy variables selected have never before been tested in a comprehensive model. Thus, a major contribution of this study is that it may enable executive managers to assess the fraud triangle conditions according to the model in order to draw conclusions regarding increased risk of fraud. The study first hypothesized that the Sarbanes-Oxley Act has had a significant impact on detecting increased risk of fraud for publicly traded restaurant companies. Additionally, the study controlled for and tested the proxy variables of the fraud triangle constructs to determine whether any of the variables had a significant impact on detecting increased fraud risk. The variables tested included company size, debt, employee turnover, organizational structure, international sales growth, executive stock compensation, return on assets, the Recession, and the macroeconomic factors of interest, inflation, and unemployment rates. The study adopted an exploratory research design using the case of publicly traded United States restaurant companies in order to provide a better understanding of the characteristics that may contribute to increased fraud risk. The study assumed a binary distribution of the dependent variable, increased fraud risk, measured by the incidence of a reported internal control deficiency over the testable time period. Specifically, the study employed a probit model to estimate the probability that a company will be at an increased risk of fraud, based on independent variables that support and are linked to the fraud triangle framework; the model assumes equal weight for the variables of the fraud triangle framework. Through use of the probit model, the major findings of the study were as follows. First, the Sarbanes-Oxley Act does have a significant impact on highlighting areas of increased fraud risk for publicly traded restaurant companies. Second, for the total population of restaurant companies, only the Recession, interest rates, inflation rates, and unemployment rates are significant indicators of increased fraud risk; none of the internal variables were significant. However, once the data were segmented by type of restaurant, the results revealed significance of both internal and external variables.
These results imply two theoretical notions: first, that the Sarbanes-Oxley Act is an effective means of detecting fraud risk for publicly traded restaurant companies when considering variables that support the fraud triangle; second, that the fraud triangle is contextual when applied to the restaurant industry, because only the variables outside of management's control were significant. Finally, from a managerial perspective, the study provides evidence that macroeconomic conditions that might affect consumer demand may increase the risk of fraud for publicly traded restaurant companies.
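A minimal sketch of the probit estimation step on simulated firm-year data: the proxy variables (a leverage measure and a recession dummy) and their effects are invented for illustration and are not the dissertation's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400

# Hypothetical proxies standing in for the fraud-triangle variables.
leverage = rng.normal(0.5, 0.2, n)
recession = rng.integers(0, 2, n)
X = sm.add_constant(np.column_stack([leverage, recession]))

# Latent-index data-generating process; the observed binary outcome is
# whether an internal control deficiency is reported.
latent = -1.0 + 1.5 * leverage + 0.8 * recession + rng.standard_normal(n)
deficiency = (latent > 0).astype(int)

result = sm.Probit(deficiency, X).fit(disp=False)
print(result.summary())
print("P(deficiency) at mean covariates:",
      float(result.predict(X.mean(axis=0)[None, :])[0]))
```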
- Date Issued
- 2015
- Identifier
- CFE0005745, ucf:50101
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005745
- Title
- Comprehension of Science Text by African American Fifth and Sixth Grade Students: The Effects of a Metalinguistic Approach.
- Creator
-
Davis, Karen, Rosa-Lugo, Linda, Kent-Walsh, Jennifer, Ehren, Barbara, Hahs-Vaughn, Debbie, Rivers, Kenyatta, Crevecoeur, Edwidge, University of Central Florida
- Abstract / Description
-
Scientific literacy has been at the forefront of science education reform for the past 20 years, particularly for students from culturally and linguistically diverse (CLD) backgrounds (Lee et al., 2005; Pearson, Moje, & Greenleaf, 2010). The ability to extract meaning from text is an important skill, yet many students struggle to effectively comprehend what they read, particularly in the content areas of science, math, and history. According to the National Assessment of Educational Progress (NAEP, 2013) report, adolescents are not acquiring the advanced literacy skills needed to succeed in the workplace and academic settings. Literacy experts have called for the use of disciplinary literacy approaches to engage learners with content in ways that mirror what scientists, historians, and mathematicians do to gain understanding in their disciplines (Moje, 2006; Shanahan & Shanahan, 2008). Although disciplinary literacy instruction is promising, there is limited empirical research on the effectiveness of discipline-specific literacy approaches. The present study examined the effects of a metalinguistic approach on the comprehension of science text among African American 5th- and 6th-grade students. The focus of the instructional protocol was to explicitly teach adverbial clauses and to assist students in unpacking them through the use of a graphic organizer. The process of unpacking complex sentences aimed to facilitate comprehension of science text by engaging the participants in analysis and discussion of the meaning obtained from the adverbial clauses. The study employed an experimental single-case multiple-probe across-participants design. Visual Analysis (VA) and the Improvement Rate Difference (IRD) were used to analyze the data. The results of VA and IRD indicated that all participants demonstrated progress between the baseline and treatment phases. Overall, the results of the investigation suggest that 5th- and 6th-grade African American students can benefit from instruction that closely analyzes language. Clinical implications and future research directions are discussed.
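For readers unfamiliar with IRD, the sketch below implements one simplified reading of the metric: find the score cutoff that separates the baseline and treatment phases with the fewest removed points, then take the difference of the two phases' improvement rates. This is an illustrative approximation, not the exact procedure used in the dissertation.

```python
def improvement_rate_difference(baseline, treatment):
    """Simplified IRD: pick the cutoff that eliminates overlap between
    phases with the fewest removals, then subtract the baseline
    improvement rate from the treatment improvement rate."""
    cutoffs = sorted(set(baseline) | set(treatment))
    best = min(cutoffs, key=lambda c: sum(b >= c for b in baseline)
                                      + sum(t < c for t in treatment))
    ir_treat = sum(t >= best for t in treatment) / len(treatment)
    ir_base = sum(b >= best for b in baseline) / len(baseline)
    return ir_treat - ir_base

# Toy single-case data: comprehension scores per probe session.
print(improvement_rate_difference([2, 3, 3, 4], [5, 6, 7, 8]))  # 1.0
```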
- Date Issued
- 2014
- Identifier
- CFE0005322, ucf:50525
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005322
- Title
- Website Interactivity as a Branding Tool for Hotel Websites.
- Creator
-
Barreda Davila, Albert, Nusair, Khaldoon, Okumus, Fevzi, Hara, Tadayuki, Ozturk, Ahmet, Bai, Haiyan, Beldona, Srikanth, University of Central Florida
- Abstract / Description
-
This dissertation explored the relationships among Website interactivity, brand knowledge, consumer-based brand equity, and behavioral intentions in the context of hotel Websites. Based on an in-depth literature review, a theory-driven model was proposed and ten hypotheses were developed. The dissertation employed an empirical study based on a survey design and collected data via a marketing company. Respondents who had booked a hotel room online using hotel-branded Websites in the last 12 months were approached to complete the online questionnaire. Four hundred ninety-six (496) respondents completed the online questionnaire, answering questions related to their last hotel booking experience. Analysis was conducted in two phases: (1) Confirmatory Factor Analysis (CFA) and (2) Structural Equation Modeling (SEM). The overall fit of the CFA model and the final SEM model were acceptable, indicating an adequate fit to the data. The results suggested that the two dimensions of Website interactivity, namely system interactivity and social interactivity, positively impacted the components of brand knowledge, and that system interactivity had a stronger impact than social interactivity. Although social interactivity was not found to have a significant direct effect on brand awareness, it did have a significant impact on brand image. Furthermore, the relationship between brand equity and behavioral intentions was positive and significant. The empirical study offered theoretical support for utilizing Website interactivity as a branding tool in the hotel context. Additionally, the results provide practical insights into branding strategies, Website development, and the enhancement of behavioral intentions. Very few studies have empirically examined and incorporated Website interactivity dimensions and brand knowledge with consumer-based brand equity and behavioral intentions. This gap in the literature has been compounded by an absence of empirical studies on Website interactivity as a tool to develop brands and behavioral intentions in the context of hotel Websites. The present dissertation closes this gap by reporting on a questionnaire of US adult travelers that offered data on those theoretical associations. Conceptually, the results support the influential impact of Website interactivity on brand elements and behavioral intentions.
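As a schematic of the CFA/SEM pipeline, the sketch below fits a toy measurement-plus-structural model with the semopy package on simulated data. The latent names, indicators, and paths are invented simplifications of the dissertation's ten-hypothesis model.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 496  # matches the study's sample size; the data here are simulated

# Two latent constructs and noisy indicators (names are hypothetical).
interactivity = rng.normal(size=n)
equity = 0.6 * interactivity + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({
    "sys1": interactivity + rng.normal(scale=0.5, size=n),
    "sys2": interactivity + rng.normal(scale=0.5, size=n),
    "sys3": interactivity + rng.normal(scale=0.5, size=n),
    "eq1": equity + rng.normal(scale=0.5, size=n),
    "eq2": equity + rng.normal(scale=0.5, size=n),
    "eq3": equity + rng.normal(scale=0.5, size=n),
    "intent": 0.7 * equity + rng.normal(scale=0.6, size=n),
})

# lavaan-style syntax: '=~' defines the measurement model, '~' the paths.
model = semopy.Model("""
Interactivity =~ sys1 + sys2 + sys3
BrandEquity   =~ eq1 + eq2 + eq3
BrandEquity   ~ Interactivity
intent        ~ BrandEquity
""")
model.fit(df)
print(model.inspect())  # loadings, path coefficients, variances
```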
- Date Issued
- 2014
- Identifier
- CFE0005302, ucf:50512
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005302
- Title
- Autonomous Recovery of Reconfigurable Logic Devices using Priority Escalation of Slack.
- Creator
-
Imran, Syednaveed, DeMara, Ronald, Mikhael, Wasfy, Lin, Mingjie, Yuan, Jiann-Shiun, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The schemes developed are demonstrated by the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes, in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
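The bounded, deterministic isolation flow can be illustrated with a divide-and-conquer sketch: halve the suspect resource pool on each reconfiguration and consult a pass/fail health metric. The healthy_without callback is a hypothetical stand-in for the runtime health check (e.g., a PSNR threshold); the actual FaDReS procedure is more elaborate.

```python
import math

def isolate_faulty_resource(resources, healthy_without):
    """Locate one faulty resource by repeatedly relocating half of the
    suspect pool (one partial reconfiguration per step) and observing a
    boolean health metric. healthy_without(group) should return True if
    the design passes when `group` is left unused."""
    suspects = list(resources)
    reconfigs = 0
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        reconfigs += 1
        # If avoiding `half` restores health, the fault lies inside it.
        suspects = half if healthy_without(half) else suspects[len(half):]
    return suspects[0], reconfigs

# Toy fabric of 16 slices with slice 11 faulty; isolation needs at most
# ceil(log2(16)) = 4 reconfigurations.
found, n = isolate_faulty_resource(range(16), lambda group: 11 in group)
print(f"isolated slice {found} in {n} reconfigurations "
      f"(bound: {math.ceil(math.log2(16))})")
```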
- Date Issued
- 2013
- Identifier
- CFE0005006, ucf:50005
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005006
- Title
- A Systems Approach to Sustainable Energy Portfolio Development.
- Creator
-
Hadian Niasar, Saeed, Reinhart, Debra, Madani Larijani, Kaveh, Wang, Dingbao, Lee, Woo Hyoung, Pazour, Jennifer, University of Central Florida
- Abstract / Description
-
Adequate energy supply has become one of the vital components of human development and the economic growth of nations. Major components of the global economy, such as transportation services, communications, industrial processes, and construction activities, are dependent on adequate energy resources. Even the mining and extraction of energy resources, including harnessing the forces of nature to produce energy, depend on the accessibility of sufficient energy in the appropriate form at the desired location. Therefore, energy resource planning and management to provide appropriate energy in terms of both quantity and quality has become a priority at the global level. The increasing demand for energy due to a growing population, higher living standards, and economic development magnifies the importance of reliable energy plans. In addition, the uneven distribution of traditional fossil fuel energy sources on the Earth, and the resulting political and economic interactions, are other sources of complexity within energy planning. The competition over fossil fuels that exists due to the gradual depletion of such sources and the tremendous thirst of current global economic operations for them, as well as the sensitivity of fossil fuel supplies and prices to global conditions, all add to the complexity of effective energy planning. In addition to diversifying fossil fuel supply sources as a means of increasing national energy security, many governments are investing in non-fossil fuels, especially renewable energy sources, to combat the risks associated with adequate energy supply; increasing the number of energy sources, however, adds further complication to energy planning. Global warming, resulting from the concentration of greenhouse gas emissions in the atmosphere, influences energy infrastructure investments and operations management as a result of international treaty obligations and other regulations requiring that emissions be cut to sustainable levels. The burning of fossil fuels, as one of the substantial driving factors of global warming and energy insecurity, is most impacted by such policies, pushing forward the implementation of renewable energy policies. Thus, modern energy portfolios comprise a mix of renewable energy sources and fossil fuels, with an increasing share of renewables over time. Many governments have been setting renewable energy targets that mandate increasing energy production from such sources over time. Reliance on renewable energy sources certainly helps with the reduction of greenhouse gas emissions while improving national energy security. However, the growing implementation of renewable energy has some limitations. Such energy technologies are not always as cheap as fossil fuel sources, mostly due to the immaturity of these energy sources in most locations, as well as the high prices of the materials and equipment needed to harness the forces of nature and transform them into usable energy. In addition, despite the fact that renewable energy sources are traditionally considered environmentally friendly compared to fossil fuels, they sometimes require more natural resources, such as water and land, to operate and produce energy. Hence, massive production of energy from these sources may lead to water shortage, land use change, increasing food prices, and insecurity of water supplies.
In other words, energy production from renewables might be a solution to reduce greenhouse gas emissions, but it might become a source of other problems, such as scarcity of natural resources. The fact that the future energy mix will rely more on renewable sources is undeniable, mostly due to the depletion of fossil fuel sources over time. However, the aforementioned limitations pose a challenge to general policies that encourage immediate substitution of fossil fuels with renewables to battle climate change. In fact, such limitations should be taken into account in developing reliable energy policies that seek adequate energy supply with minimal secondary effects. Traditional energy policies have suggested the expansion of least-cost energy options, which were mostly fossil fuels. Such sources used to be considered riskless energy options with low volatility in the absence of competitive energy markets in which various energy technologies compete over larger market shares. The evolution of renewable energy technologies, however, complicated energy planning due to emerging risks that emanated mostly from high price volatility. Hence, energy planning began to be seen as an investment problem in which the costs of the energy portfolio are minimized while attempting to manage the associated price risks. So, energy policies continued to rely on risky fossil fuel options and small shares of renewables, with the primary goal of reducing generation costs. With emerging symptoms of climate change and the resulting consequences, newer policies accounted for the costs of carbon emissions control in addition to other costs, and also encouraged the increased use of renewable energy sources. Emissions control cost is not an appropriate measure of damages, however, because these costs are substantially less than the economic damages resulting from emissions. In addition, the effects of such policies on natural resources such as water and land are not directly taken into account. Sustainable energy policies should be able to capture such complexities, risks, and tradeoffs within energy planning. Therefore, there is a need for an adequate supply of energy that also addresses issues such as global warming, energy security, the economy, and the environmental impacts of energy production processes. The effort in this study is to develop an energy portfolio assessment model to address the aforementioned concerns. This research utilized energy performance data gathered from an extensive review of articles and governmental institution reports. The energy performance values, namely carbon footprint, water footprint, land footprint, and cost of energy production, were carefully selected in order to have the same basis for comparison purposes; where needed, adjustment factors were applied. In addition, the Energy Information Administration (EIA) energy projection scenarios were selected as the basis for estimating the share of the energy sources over the years until 2035. Furthermore, the resource availability in different states within the U.S. was obtained from publicly available governmental institutions that provide such statistics. Specifically, the carbon emissions magnitudes (metric tons per capita) for different states were extracted from EIA databases, states' freshwater withdrawals (cubic meters per capita) were found from USGS databases, and states' land availability values (square kilometers) were obtained from the U.S.
Census Bureau, while economic resource availability (GDP per capita) for different states was acquired from the Bureau of Economic Analysis. In this study, first, the impacts of energy production processes on global freshwater resources are investigated based on different energy projection scenarios. Considering the need to invest in energy sources with minimum environmental impacts while securing maximum efficiency, a systems approach is adopted to quantify the resource use efficiency of energy sources under sustainability indicators. The sensitivity and robustness of the resource use efficiency scores are then investigated against existing energy performance uncertainties and varying resource availability conditions. The resource use efficiency of the energy sources is then regionalized for different resource limitation conditions in states within the U.S. Finally, a sustainable energy planning framework is developed based on Modern Portfolio Theory (MPT) and Post-Modern Portfolio Theory (PMPT), with consideration of the resource use efficiency measures and the associated efficiency risks. In the energy-water nexus investigation, the energy sources are categorized into 10 major groups with distinct water footprint magnitudes and associated uncertainties. The global water footprints of energy production processes are then estimated for different EIA energy mix scenarios over the 2012-2035 period. The outcomes indicate that the water footprint of energy production increases by almost 50%, depending on the scenario. In fact, growing energy production is not the only reason for the increasing energy-related water footprint; the increasing share of water-intensive energy sources in the future energy mix is another driver of the increasing global water footprint of energy. The results of the energy water footprint analysis demonstrate the need for a policy to reduce the water use of energy generation. Furthermore, the outcomes highlight the importance of considering the secondary impacts of energy production processes besides their carbon footprint and costs. The results also have policy implications for future energy investments in order to increase the water use efficiency of energy sources per unit of energy production, especially those with significant water footprints such as hydropower and biofuels. In the next step, substantial effort is dedicated to evaluating the efficiency of different energy sources from a resource use perspective. For this purpose, a system-of-systems approach is adopted to measure the resource use efficiency of energy sources in the presence of trade-offs between independent yet interacting systems (climate, water, land, economy). Hence, a stochastic multi-criteria decision making (MCDM) framework is developed to compute resource use efficiency scores for four sustainability assessment criteria, namely carbon footprint, water footprint, land footprint, and cost of energy production, considering existing performance uncertainties. The energy sources' performances under the aforementioned sustainability criteria are represented as ranges, due to uncertainties that exist because of technological and regional variations. Such uncertainties are captured by the model through Monte Carlo selection of random values and are translated into stochastic resource use efficiency scores. As the notion of optimality is not unique, five MCDM methods are employed in the model to counterbalance the bias toward any single definition of optimality. A minimal sketch of this Monte Carlo scoring step is shown below.
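The sketch assumes literature-style (low, high) performance ranges, invented here, and uses a simple weighted-sum score as a stand-in for one of the study's five MCDM methods; lower raw values are better on all four criteria.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented (low, high) performance ranges per criterion for three sources:
# carbon footprint, water footprint, land footprint, production cost.
ranges = {
    "geothermal":  [(20, 80),   (0.2, 1.0), (1, 3),    (4, 9)],
    "natural_gas": [(400, 500), (0.5, 1.5), (1, 2),    (4, 7)],
    "biomass":     [(30, 120),  (30, 90),   (50, 200), (6, 12)],
}
weights = np.array([0.25, 0.25, 0.25, 0.25])  # "no resource limitation" case

def efficiency_scores(n_draws=10_000):
    names = list(ranges)
    draws = np.stack([
        np.array([rng.uniform(lo, hi, n_draws) for lo, hi in ranges[name]])
        for name in names
    ])                                  # shape: (source, criterion, draw)
    worst = draws.max(axis=0)           # per-criterion worst performer
    best = draws.min(axis=0)
    normed = (worst - draws) / (worst - best + 1e-12)  # 1.0 = best performer
    scores = 100 * np.einsum("scd,c->sd", normed, weights)
    return {name: (scores[i].mean(), scores[i].std())
            for i, name in enumerate(names)}

for source, (mean, std) in efficiency_scores().items():
    print(f"{source:12s} score ~ {mean:5.1f} +/- {std:4.1f}")
```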
This analysis is performed under "no resource limitation" conditions to highlight the quality of different energy sources from a resource use perspective. The resource use efficiency is defined as a dimensionless number on a scale of 0-100, with greater numbers representing higher efficiency. The outcomes of this analysis indicate that, despite their increasing popularity, not all renewable energy sources are more resource use efficient than non-renewable sources. This is especially true for biofuels and different types of ethanol, which demonstrate lower resource use efficiency scores than natural gas and nuclear energy. Geothermal energy and biomass energy from miscanthus are found to be the most and least resource use efficient energy alternatives, respectively, based on the performance data available in the literature. The analysis also shows that none of the energy sources strictly dominates or is strictly dominated by the others. Following the resource use efficiency analysis, sensitivity and robustness analyses are performed to determine the impacts of resource limitations and of existing performance uncertainties on resource use efficiency, respectively. The sensitivity analysis indicates that geothermal energy and ethanol from sugarcane have the lowest and highest resource use efficiency sensitivity, respectively. Also, from a resource use perspective, concentrated solar power (CSP) and hydropower are respectively the most and least robust energy options with respect to the performance uncertainties in the literature.

In addition to the resource use efficiency, sensitivity, and robustness analyses of energy sources, this study also investigates the composition of the energy production mix within a specific region with certain characteristics, resource limitations, and availabilities. Different energy sources, especially renewables, vary in their demand for natural resources (such as water and land), environmental impacts, geographic requirements, and the type of infrastructure required for energy production. Because the efficiency of energy sources from a resource use perspective depends on regional specifications, the energy portfolio varies across regions with differing resource availability conditions. Hence, the resource use efficiency scores of different energy technologies are calculated based on the aforementioned sustainability criteria and the regional resource availability and limitation conditions (emissions, water resources, land, and GDP) within different U.S. states, regardless of the feasibility of energy alternatives in each state. Sustainability measures are given varying weights based on the emissions cap and the available economic, land, and water resources in each state, upon which the resource use efficiency of energy sources is calculated using the system-of-systems framework developed in the previous step. Efficiency scores are graphically illustrated on GIS-based maps for different states and different energy sources. The results indicate that for some states, fossil fuels such as coal and natural gas are as efficient as renewables like wind and solar energy technologies from a resource use perspective. 
In other words, the resource use efficiency of energy sources is significantly sensitive to the available resources and limitations of a given location.

Moreover, energy portfolio development models have been created to determine the share of different energy sources in total energy production, in order to meet energy demand, maintain energy security, and address climate change with the least possible adverse impacts on the environment. The traditional "least cost" energy portfolios are outdated and should be replaced with "most efficient" ones that are not only cost-effective but also environmentally friendly. Hence, the calculated resource use efficiency scores and the associated statistical analysis outcomes for a range of renewable and nonrenewable energy sources are fed into a portfolio selection framework to choose energy mixes appropriate to the risk attitudes of decision makers. For this purpose, Modern Portfolio Theory (MPT) and Post-Modern Portfolio Theory (PMPT) are both employed to illustrate how different interpretations of "risk of return" yield different energy portfolios. The results indicate that the 2012 energy mix and the projected 2035 world energy portfolio are not sustainable in terms of resource use efficiency and could be replaced with more reliable, more effective portfolios that address energy security and global warming with minimal environmental and economic impacts.
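As a rough illustration of the portfolio step, the sketch below sets up a classical MPT mean-variance problem, treating mean efficiency scores as the "return" and their covariance as the "risk". The four sources, all numbers, and the choice of scipy's SLSQP solver are assumptions made for the example, not the dissertation's data or solver; under PMPT, the variance objective would be replaced by a downside-risk measure such as the semivariance of scores below a minimum acceptable level.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical mean resource-use-efficiency "returns" and covariance ("risk").
mu = np.array([62.0, 55.0, 48.0, 40.0])            # wind, gas, nuclear, biofuel
cov = np.array([[90., 10.,  5.,   0.],
                [10., 40., 12.,   8.],
                [ 5., 12., 60.,   6.],
                [ 0.,  8.,  6., 110.]])
target = 52.0                                      # required portfolio efficiency

def variance(w):                                   # MPT risk: portfolio variance
    return w @ cov @ w

cons = [{"type": "eq",   "fun": lambda w: w.sum() - 1.0},     # shares sum to 1
        {"type": "ineq", "fun": lambda w: w @ mu - target}]   # meet efficiency target
bounds = [(0.0, 1.0)] * len(mu)                    # long-only shares of the mix
w0 = np.full(len(mu), 1 / len(mu))

res = minimize(variance, w0, bounds=bounds, constraints=cons, method="SLSQP")
print("portfolio shares:", np.round(res.x, 3))
print("expected efficiency:", round(res.x @ mu, 1))
```

Sweeping `target` over a range of values traces out the efficient frontier from which a decision maker's risk attitude selects a mix.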
- Date Issued
- 2013
- Identifier
- CFE0005001, ucf:50020
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005001
- Title
- Migratory connectivity and carry-over effects in Northwest Atlantic loggerhead turtles (Caretta caretta, L.).
- Creator
-
Ceriani, Simona, Weishampel, John, Ehrhart, Llewellyn, Walters, Linda, Quintana-Ascencio, Pedro, Roth, James, Valdes, Eduardo, University of Central Florida
- Abstract / Description
-
Migration is a widespread and complex phenomenon in nature that has fascinated humans for centuries. Connectivity among populations influences their demographics, genetic structure and response to environmental change. Here, I used the loggerhead turtle (Caretta caretta, L.) as a study organism to address questions related to migratory connectivity and carry-over effects using satellite telemetry, stable isotope analysis and GIS interpolation methods. Telemetry identified foraging areas previously overlooked for loggerheads nesting in Florida. Next, I validated and evaluated the efficacy of intrinsic markers as a complementary and low cost tool to assign loggerhead foraging regions in the Northwest Atlantic Ocean (NWA), using both a spatially implicit and spatially explicit (isoscapes) approach. I then focused on the nesting beaches and developed a common currency for isotopic studies based on unhatched eggs, which provide a non-invasive and non-destructive method for more extensive sampling to elucidate isotopic patterns across broader spatiotemporal scales. Lastly, I found that intra-population variations in foraging strategies affect annual and long-term reproductive output of loggerheads nesting in Florida. Understanding geospatial linkages is critical to the fostering of appropriate management and conservation strategies for migratory species. My multi-faceted approach contributes to the growing body of literature exploring migratory connectivity and carry-over effects.
- Date Issued
- 2014
- Identifier
- CFE0005470, ucf:50390
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005470
- Title
- CONAE MicroWave Radiometer (MWR) Counts to Brightness Temperature Algorithm.
- Creator
-
Ghazi, Zoubair, Jones, W Linwood, Wei, Lei, Mikhael, Wasfy, Wu, Thomas, Junek, William, Piepmeier, Jeffrey, University of Central Florida
- Abstract / Description
-
This dissertation concerns the development of the MicroWave Radiometer (MWR) brightness temperature (Tb) algorithm and the associated algorithm validation using on-orbit MWR Tb measurements. This research is sponsored by the NASA Earth Sciences Aquarius Mission, a joint international science mission between NASA and the Argentine Space Agency (Comision Nacional de Actividades Espaciales, CONAE). The MWR is a CONAE-developed passive microwave instrument operating at 23.8 GHz (K-band) H-pol and 36.5 GHz (Ka-band) H- & V-pol, designed to complement the Aquarius L-band radiometer/scatterometer, which is the prime sensor for measuring sea surface salinity (SSS). MWR measures the Earth's brightness temperature and retrieves simultaneous, spatially collocated environmental measurements (surface wind speed, rain rate, water vapor, and sea ice concentration) to assist in the measurement of SSS.

This dissertation research addressed several areas, including the development of: 1) a signal processing procedure for determining and correcting radiometer system non-linearity; 2) an empirical method to retrieve switch matrix loss coefficients during thermal-vacuum (T/V) radiometric calibration testing; and 3) an antenna pattern correction (APC) algorithm using inter-satellite radiometric cross-calibration of MWR with the WindSat satellite radiometer. The validation of the MWR counts-to-Tb algorithm was performed using two years of on-orbit data, which included special deep space calibration measurements and routine clear-sky ocean/land measurements.
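The abstract does not spell out the counts-to-Tb transfer function itself. As a hedged illustration, the sketch below implements the generic approach of fitting a calibration curve through known reference targets, with a quadratic term standing in for the kind of system non-linearity correction described above; the reference temperatures, counts, and function name are invented for the example and are not the MWR algorithm.

```python
import numpy as np

# Hypothetical calibration references: known brightness temperatures (K)
# and the radiometer counts observed while viewing them.
T_refs = np.array([2.7, 150.0, 300.0])        # cold sky, intermediate, hot load
C_refs = np.array([1200.0, 8200.0, 15100.0])

# Fit Tb = a0 + a1*C + a2*C^2; the quadratic term absorbs receiver
# non-linearity that a purely linear two-point calibration would miss.
a2, a1, a0 = np.polyfit(C_refs, T_refs, deg=2)

def counts_to_tb(counts):
    """Convert raw radiometer counts to brightness temperature (K)."""
    return a0 + a1 * counts + a2 * counts**2

print(counts_to_tb(np.array([5000.0, 12000.0])))
```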
- Date Issued
- 2014
- Identifier
- CFE0005496, ucf:50366
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005496
- Title
- On-Chip Electro-Static Discharge (ESD) Protection for Radio-Frequency Integrated Circuits.
- Creator
-
Cui, Qiang, Liou, Juin, Yuan, Jiann-Shiun, Wu, Xinzhang, Haralambous, Michael, Shen, Zheng, Deppe, Dennis, University of Central Florida
- Abstract / Description
-
Electrostatic discharge (ESD) is a common phenomenon in daily life, and it can damage integrated circuits at any point in a product's life cycle, beginning with manufacturing. Several ESD stress models and test methods have been used to reproduce ESD events and characterize the performance of ESD protection devices. The basic ESD stress models are the Human Body Model (HBM), Machine Model (MM), and Charged Device Model (CDM). On-chip ESD protection devices are widely used to discharge ESD current and limit the overstress voltage under different ESD events. Effective ESD protection devices have been reported for low-speed circuit applications such as analog or digital ICs in CMOS processes. In contrast, only a few ESD protection devices are available for radio-frequency integrated circuits (RF ICs). ESD protection for RF ICs is more challenging than traditional low-speed CMOS ESD protection design for two reasons: (1) Process limitation: high-performance RF ICs are typically fabricated in compound semiconductor processes such as GaAs pHEMT and SiGe HBT, and some proven effective ESD devices (e.g., the SCR) cannot be fabricated in those processes due to process limitations. Moreover, compound semiconductor processes have lower thermal conductivity, which worsens ESD damage immunity. (2) Parasitic capacitance limitation: even in RF CMOS processes, the inherent parasitic capacitance of ESD protection devices is a major concern. Therefore, this dissertation contributes ESD protection designs for RF ICs in all the major processes, including GaAs pHEMT, SiGe BiCMOS, and standard CMOS.

ESD protection for RF ICs in GaAs pHEMT processes is very difficult, and the typical protection level is below 1-kV HBM. The first part of this work analyzes the pHEMT's snapback, post-snapback saturation, and thermal failure under ESD stress using TLP-like Sentaurus TCAD simulation. Snapback is caused by a virtual bipolar transistor arising from the large number of electron-hole pairs generated by impact ionization near the drain region. Post-snapback saturation is caused by temperature-induced mobility degradation due to the poor thermal conductivity of III-V compound semiconductor materials, and thermal failure is traced to a hot spot located in the pHEMT's InGaAs layer. Understanding these physical mechanisms is critical to designing effective ESD protection devices in GaAs pHEMT processes. Several novel ESD protection devices were designed in a 0.5um GaAs pHEMT process. Multi-gate pHEMT-based ESD protection devices, in both enhancement mode and depletion mode, were then reported and characterized. Owing to the multiple current paths available in the multi-gate pHEMT, the new ESD protection clamp showed significantly improved ESD performance over the conventional single-gate pHEMT clamp, including higher current discharge capability, lower on-state resistance, and smaller voltage transients. A further enhanced ESD protection clamp, based on a novel drain-less, multi-gate pHEMT, was then proposed in the same 0.5um GaAs pHEMT technology. Based on Barth 4002 TLP measurement results, the proposed ESD protection devices improve the HBM protection level from 1 kV (0.6 A It2) to up to 8 kV (> 5.2 A It2). Next, SiGe-based silicon controlled rectifiers (SiGe SCRs) were optimized in a SiGe BiCMOS process. The SiGe SCR is considered a good candidate ESD protection device in this process, but a possibly slow turn-on under CDM ESD events is the major concern. 
In order to optimize the turn-on performance of the SiGe SCR against CDM ESD, Barth 4012 very fast TLP (vfTLP) measurements and vfTLP-like TCAD simulations were used for characterization and analysis. It was demonstrated that a SiGe SCR implemented with a P PLUG layer and minimal PNP base width provides the smallest peak voltage and fastest response time, which results from the reduction of the impact ionization region and effective base width in the SiGe SCR due to the presence of the P PLUG layer. This work demonstrated a practical approach for designing optimal ESD protection solutions for low-voltage/radio-frequency integrated circuits in SiGe BiCMOS processes.

Finally, SCRs were optimized in a standard silicon-based CMOS process to provide protection for high-speed/radio-frequency ICs. The SCR is again considered the best choice for its excellent current-handling ability, but its parasitic capacitance needs to be reduced to limit the impact on RF performance. A novel SCR-based ESD structure is proposed and characterized experimentally for effective ESD protection in high-frequency CMOS-based integrated circuits. The proposed device showed a much lower parasitic capacitance and better ESD performance than both the conventional SCR and a low-capacitance SCR reported in the literature. The physics underlying the low capacitance was explained by measurements using an HP 4284 capacitance meter.

Throughout this dissertation work, measurements were mainly conducted using Barth 4002 transmission line pulsing (TLP) and Barth 4012 very fast transmission line pulsing (vfTLP) testers, and all simulations were performed using the Sentaurus TCAD tool from Synopsys.
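As a toy illustration of how TLP data of the kind discussed above are post-processed, the sketch below extracts a snapback trigger point and an It2 failure current from a hypothetical pulse sweep. The data, the leakage-jump failure criterion, and the index arithmetic are all assumptions made for the example, not the dissertation's measurement procedure.

```python
import numpy as np

# Hypothetical TLP sweep: per-pulse device voltage (V), current (A),
# and post-pulse leakage (A) measured after each pulse.
v = np.array([1.0, 3.0, 6.5, 7.8, 4.2, 4.5, 4.9, 5.3, 5.8])
i = np.array([0.0, 0.01, 0.05, 0.10, 0.50, 1.2, 2.4, 3.9, 5.1])
leak = np.array([1e-9] * 8 + [5e-6])

# Trigger point (Vt1, It1): the voltage maximum just before snapback,
# i.e. immediately before the first large negative voltage step.
t1 = int(np.argmin(np.diff(v)))
print(f"Vt1 = {v[t1]:.1f} V, It1 = {i[t1]:.2f} A")

# Failure current It2: last current level before post-pulse leakage
# jumps by more than 10x, a common soft-failure criterion in TLP testing.
fail = int(np.argmax(leak / leak[0] > 10))
print(f"It2 = {i[fail - 1]:.1f} A")
```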
- Date Issued
- 2013
- Identifier
- CFE0004668, ucf:49848
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004668
- Title
- Nonlinear dynamic modeling, simulation and characterization of the mesoscale neuron-electrode interface.
- Creator
-
Thakore, Vaibhav, Hickman, James, Mucciolo, Eduardo, Rahman, Talat, Johnson, Michael, Behal, Aman, Molnar, Peter, University of Central Florida
- Abstract / Description
-
Extracellular neuroelectronic interfacing has important applications in the fields of neural prosthetics, biological computation and whole-cell biosensing for drug screening and toxin detection. While the field of neuroelectronic interfacing holds great promise, the recording of high-fidelity signals from extracellular devices has long suffered from low signal-to-noise ratios and changes in signal shapes due to the presence of a highly dispersive dielectric medium in the neuron-microelectrode cleft. This has made it difficult to correlate the extracellularly recorded signals with the intracellular signals recorded using conventional patch-clamp electrophysiology. To improve the signal-to-noise ratio of the signals recorded on extracellular microelectrodes, and to explore strategies for engineering the neuron-electrode interface, the cell-sensor interface needs to be modeled, simulated and characterized so that the mechanism of signal transduction across the interface is better understood. Efforts to date for modeling the neuron-electrode interface have primarily focused on point- or area-contact linear equivalent circuit models, which assume passive linearity for the dynamics of the interfacial medium in the cell-electrode cleft. In this dissertation, results are presented from a nonlinear dynamic characterization of the neuroelectronic junction based on Volterra-Wiener modeling, which showed that the process of signal transduction at the interface may have nonlinear contributions from the interfacial medium. An optimization-based study of linear equivalent circuit models for representing signals recorded at the neuron-electrode interface subsequently proved conclusively that the process of signal transduction across the interface is indeed nonlinear. Following this, a theoretical framework was developed for the extraction of the complex nonlinear material parameters of the interfacial medium, such as the dielectric permittivity, conductivity and diffusivity tensors, based on dynamic nonlinear Volterra-Wiener modeling. Within this framework, the use of Gaussian bandlimited white noise for nonlinear impedance spectroscopy was shown to offer considerable advantages over the sinusoidal inputs for nonlinear harmonic analysis currently employed in impedance characterization of nonlinear electrochemical systems. Signal transduction at the neuron-microelectrode interface is mediated by the interfacial medium confined to a thin cleft with thickness on the scale of 20-110 nm, giving rise to Knudsen numbers (the ratio of mean free path to characteristic system length) in the range of 0.003 to 0.015 for ionic electrodiffusion. At these Knudsen numbers, the continuum assumptions made in the use of the Poisson-Nernst-Planck system of equations for modeling ionic electrodiffusion are not valid. Therefore, a lattice Boltzmann method (LBM) based multiphysics solver suitable for modeling ionic electrodiffusion at the mesoscale neuron-microelectrode interface was developed. Additionally, a molecular speed dependent relaxation time was proposed for use in the lattice Boltzmann equation. Such a relaxation time holds promise for enhancing the numerical stability of lattice Boltzmann algorithms, as it helped recover a physically correct description of microscopic phenomena related to particle collisions governed by their local density on the lattice. 
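For readers unfamiliar with the method, the following is a minimal one-dimensional (D1Q3) lattice Boltzmann diffusion sketch with a single constant BGK relaxation time, just to make the collide-and-stream update concrete. The dissertation's solver goes well beyond this toy: it couples ionic electrodiffusion with electrostatics and employs the molecular-speed-dependent relaxation time described above.

```python
import numpy as np

# D1Q3 lattice: discrete velocities (-1, 0, +1) with standard weights.
c = np.array([-1, 0, 1])
w = np.array([1/6, 2/3, 1/6])
tau = 0.8                                   # BGK relaxation time (sets diffusivity)
nx, steps = 200, 500

rho0 = np.ones(nx)
rho0[90:110] = 2.0                          # initial concentration pulse
f = w[:, None] * rho0                       # start at local equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)                     # zeroth moment: local density
    feq = w[:, None] * rho                  # diffusion equilibrium (no drift term)
    f += -(f - feq) / tau                   # BGK collision step
    for k in range(3):                      # streaming step (periodic boundaries)
        f[k] = np.roll(f[k], c[k])

print("mass conserved:", np.isclose(f.sum(), rho0.sum()))
```

With lattice sound speed cs^2 = 1/3, this scheme recovers a diffusion equation with diffusivity D = cs^2 (tau - 1/2), which is why tau must exceed 1/2 for stability.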
Next, using this multiphysics solver, simulations were carried out for the charge relaxation dynamics of an electrolytic nanocapacitor, with the intention of ultimately employing it for a simulation of the capacitive coupling between the neuron and the planar microelectrode on a microelectrode array (MEA). Simulations of the charge relaxation dynamics for a step potential applied at t = 0 to the capacitor electrodes were carried out for varying conditions of electric double layer (EDL) overlap, solvent viscosity, electrode spacing and ratio of cation to anion diffusivity. For a large EDL overlap, an anomalous plasma-like collective behavior of oscillating ions was observed at a frequency much lower than the plasma frequency of the electrolyte; as such, it appears to be purely an effect of nanoscale confinement. Results from these simulations are then discussed in the context of the dynamics of the interfacial medium in the neuron-microelectrode cleft. In conclusion, a synergistic approach to engineering the neuron-microelectrode interface is outlined through the use of the nonlinear dynamic modeling, simulation and characterization tools developed as part of this dissertation research.
- Date Issued
- 2012
- Identifier
- CFE0004797, ucf:49718
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004797
- Title
- Remediation of Polychlorinated Biphenyl (PCB) Contaminated Building Materials Using Non-metal and Activated Metal Treatment Systems.
- Creator
-
Legron-Rodriguez, Tamra, Yestrebsky, Cherie, Clausen, Christian, Elsheimer, Seth, Sigman, Michael, Chopra, Manoj, Quinn, Jacqueline, University of Central Florida
- Abstract / Description
-
PCBs are recalcitrant compounds of no known natural origin that persist in the environment despite their 1979 ban by the United States Environmental Protection Agency due to negative health effects. Transport of PCBs from elastic sealants into concrete, brick, and granite structures has resulted in the need for a technology capable of removing these PCBs from the materials. This research investigated the use of a nonmetal treatment system (NMTS) and an activated metal treatment system (AMTS) for the remediation and degradation of PCBs from concrete, brick, and granite affixed with PCB-laden caulking. The adsorption of PCBs onto the components of concrete and the feasibility of ethanol washing were also investigated.

NMTS is a sorbent paste containing ethanol, acetic acid, and fillers that was developed at the University of Central Florida Environmental Chemistry Laboratory for the in situ remediation of PCBs. Combining NMTS with magnesium results in an activated treatment system used for reductive dechlorination of PCBs. NMTS was applied to laboratory-prepared concrete as well as field samples, both by direct contact and by a novel sock-type delivery. The remediation of PCBs from field samples using NMTS and AMTS resulted in a 33-98% reduction in PCB concentration for concrete, a 65-70% reduction for brick, and an 89% reduction for granite. The absorption capacity of NMTS for Aroclor 1254 was found to be at least roughly 22,000 mg of Aroclor 1254 per kg of treatment system. The activated treatment system resulted in 94% or greater degradation of PCBs after seven days, with the majority of the degradation occurring in the first 24 hours. The adsorption of PCBs to individual concrete components (hydrated cement, sand, crushed limestone, and crushed granite) was found to follow the Freundlich isotherm model, with greater adsorption to crushed limestone and crushed granite than to hydrated cement and sand. Ethanol washing was shown to decrease the PCB concentration of laboratory-prepared concrete by 68%, and the concentration of PCBs in the ethanol wash was reduced by 77% via degradation with an activated magnesium system.
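For reference, the Freundlich model takes the form q = Kf * C^(1/n), so linearizing gives log q = log Kf + (1/n) log C and the parameters drop out of a log-log linear fit. The sketch below does exactly that on invented equilibrium data; the concentrations, sorbed amounts, and units are illustrative only, not the study's measurements.

```python
import numpy as np

# Hypothetical equilibrium data: aqueous PCB concentration C (mg/L)
# and sorbed amount q (mg/kg) on a concrete component.
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
q = np.array([30.0, 48.0, 77.0, 140.0, 220.0])

# Freundlich: q = Kf * C**(1/n)  ->  log q = log Kf + (1/n) log C
slope, intercept = np.polyfit(np.log10(C), np.log10(q), deg=1)
Kf, n = 10**intercept, 1 / slope
print(f"Kf = {Kf:.1f} (mg/kg)(L/mg)^(1/n), n = {n:.2f}")
```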
- Date Issued
- 2013
- Identifier
- CFE0005197, ucf:50625
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005197
- Title
- GAYME: The development, design and testing of an auto-ethnographic, documentary game about quarely wandering urban/suburban spaces in Central Florida.
- Creator
-
Moran, David, Moshell, Jack, Santana, Maria, Kim, Si Jung, McDaniel, Thomas, Vie, Stephanie, Pugh, William, University of Central Florida
- Abstract / Description
-
GAYME is a transmedia story-telling world that I have created to conceptually explore the dynamics of queering game design through the development of varying game prototypes. The final iteration of GAYME is @deadquarewalking*. It is a documentary game and a performance art installation that documents a carless, gay/queer/quare man's journey on Halloween to get to and from one of Orlando's most well-known gay clubs - the Parliament House Resort. "The art of cruising" city streets to seek out queer/quare companionship, particularly amongst gay, male culture(s), is well-documented in densely populated cities like New York, San Francisco and London, but not so much in car-centric, urban environments like Orlando that are less oriented towards pedestrians. Cruising has been and continues to be risky even in pedestrian-friendly cities, but in Orlando cruising takes on a whole other dimension of danger. In 2011-2012, The Advocate magazine named Orlando one of the gayest cities in America (Breen, 2012). Transportation for America (2011) also named the Orlando metropolitan region the most dangerous city in the country for pedestrians. Living in Orlando without a car can be deadly as well as a significant barrier to connecting with other people, especially queer/quare people, because of Orlando's car-centric design. In Orlando, cars are sexy. At the same time, the increasing prevalence in gay, male culture(s) of geo-social, mobile phone applications using Global Positioning Systems (GPS) and location-aware services, such as Grindr (Grindr, LLC., 2009) and even FourSquare (Crowley and Selvadurai, 2009) and Instagram (Systrom and Krieger, 2010), is shifting the way gay/queer/quare Orlandoans co-create social and sexual networks both online and offline. Urban and sub-urban landscapes have transformed into hybrid "techno-scapes" overlaying "the electronic, the emotional and the social with the geographic and the physical" (Hjorth, 2011). With or without a car, gay men can still geo-socially cruise Orlando's car-centric street life with mobile devices. As such emerging media has become more pervasive, it has created new opportunities to quarely visualize Orlando's "technoscape" through phone photography and hashtag metadata while also blurring lines between the artist and the curator, the player and the game designer.

This project has evolved particularly to employ game design as an exhibition tool for the visualization of geo-social photography through hashtag play. Using hashtags as a game mechanic generates metadata that potentially identifies patterns of play and "ways of seeing" across player experiences as players attempt to make meaning of the images they encounter in the game. @deadquarewalking also demonstrates the potential of game design and geo-social, photo-sharing applications to illuminate new ways of documenting and witnessing the urban landscapes that we both collectively and uniquely inhabit.

*In Irish culture, "quare" can mean "very" or "extremely", or it can be a spelling of the rural or Southern pronunciation of the word "queer." Living in the American Southeast, I personally relate more to the term "quare" versus "queer." Cultural theorist E. Patrick Johnson (2001) also argues for "quareness" as a way to question the subjective bias of whiteness in queer studies that risks discounting the lived experiences and material realities of people of color. 
Though I do not identify as a person of color and would be categorized as white or European American, "quareness" has an important critical application for considering how Orlando's urban design is intersectionally racialized, gendered and classed.
- Date Issued
- 2014
- Identifier
- CFE0005214, ucf:50641
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005214
- Title
- Integral Representations of Positive Linear Functionals.
- Creator
-
Siple, Angela, Mikusinski, Piotr, Atanasiu, Dragu, Dutkay, Dorin, Han, Deguang, Lee, Junho, Brennan, Joseph, Huo, Qun, University of Central Florida
- Abstract / Description
-
In this dissertation we obtain integral representations for positive linear functionals on commutative algebras with involution and semigroups with involution. We prove Bochner and Plancherel type theorems for representations of positive functionals and show that, under some conditions, the Bochner and Plancherel representations are equivalent. We also consider the extension of positive linear functionals on a Banach algebra into a space of pseudoquotients and give conditions under which the space of pseudoquotients can be identified with all Radon measures on the structure space. In the final chapter we consider a system of integrated Cauchy functional equations on a semigroup, which generalizes a result of Ressel and offers a different approach to the proof.
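For context, the classical Bochner theorem on the real line, which results of this "Bochner type" generalize to involution algebras and semigroups, can be stated as follows (a standard textbook statement, not quoted from the dissertation):

```latex
% Classical Bochner theorem on the real line (standard statement).
% A continuous function \varphi : \mathbb{R} \to \mathbb{C} is positive
% definite if and only if it is the Fourier-Stieltjes transform of a
% finite positive Borel measure \mu on \mathbb{R}:
\[
  \varphi(t) = \int_{\mathbb{R}} e^{\,itx}\, d\mu(x),
\]
% where positive definiteness means
\[
  \sum_{j,k=1}^{n} c_j \overline{c_k}\, \varphi(t_j - t_k) \;\ge\; 0
  \quad \text{for all } n \in \mathbb{N},\ t_j \in \mathbb{R},\ c_j \in \mathbb{C}.
\]
```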
- Date Issued
- 2015
- Identifier
- CFE0005713, ucf:50144
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005713