-
-
Title
-
Optimal distribution network reconfiguration using meta-heuristic algorithms.
-
Creator
-
Asrari, Arash, Wu, Thomas, Lotfifard, Saeed, Haralambous, Michael, Atia, George, Pazour, Jennifer, University of Central Florida
-
Abstract / Description
-
Finding the optimal configuration of a power distribution system's topology is an NP-hard combinatorial optimization problem. It becomes more complex when the time-varying nature of loads in large-scale distribution systems is taken into account. In the second chapter of this dissertation, a systematic approach is proposed to tackle the computational burden of the procedure. To solve the optimization problem, a novel adaptive fuzzy-based parallel genetic algorithm (GA) is proposed that employs the concept of parallel computing to identify the optimal configuration of the network. The integration of fuzzy logic into the GA enhances the efficiency of the parallel GA by adaptively modifying the migration rates between different processors during the optimization process. A computationally efficient graph encoding method based on the Dandelion coding strategy is developed that automatically generates radial topologies and prevents the construction of infeasible radial networks during the optimization process. The main shortcoming of the algorithm proposed in Chapter 2 is that it identifies only a single solution, leaving the system operator no option but to rely on that solution. For this reason, a novel hybrid optimization algorithm is proposed in the third chapter of this dissertation that determines Pareto frontiers, as candidate solutions, for the multi-objective distribution network reconfiguration problem. With this model, the system operator has more flexibility in choosing the best configuration among the alternative solutions. The proposed hybrid optimization algorithm combines the concept of fuzzy Pareto dominance (FPD) with the shuffled frog leaping algorithm (SFLA) to recognize non-dominated suboptimal solutions identified by SFLA.
The local search step of SFLA is also customized for power systems applications so that it automatically creates and analyzes only feasible, radial configurations during its optimization procedure, which significantly increases the convergence speed of the algorithm. In the fourth chapter, the problem of optimal network reconfiguration is solved for the case in which the system operator employs an optimization algorithm that automatically modifies its parameters during the optimization process. By defining three fuzzy functions, the probabilities of crossover and mutation are adaptively tuned as the algorithm proceeds, avoiding premature convergence without reducing the speed of converging on the optimal configuration. This modified genetic algorithm is a step towards making the parallel GA presented in the second chapter more robust against getting stuck in local optima. The fifth chapter concentrates on finding a potential smart grid solution that yields more high-quality suboptimal configurations of distribution networks. This chapter improves on the third chapter in two ways: (1) fuzzy logic is used in the partitioning step of SFLA to improve the proposed optimization algorithm and yield a more accurate classification of frogs; (2) the problem of system reconfiguration is solved considering the presence of distributed generation (DG) units in the network.
To study the new paradigm of integrating smart grids into power systems, the analysis examines how the quality of suboptimal solutions is affected as DG units are continuously added to the distribution network. The heuristic optimization algorithm proposed in Chapter 3 and improved in Chapter 5 is implemented on a smaller case study in Chapter 6 to demonstrate that the solution identified through the optimization process matches the optimal solution found by an exhaustive search.
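The radial-topology encoding can be illustrated with a related tree code. Dandelion coding, like the classic Prüfer sequence, maps fixed-length integer strings one-to-one onto labeled trees, so every decoded individual is automatically a radial (loop-free, connected) network. A minimal Prüfer-decoding sketch, standing in for the dissertation's Dandelion variant (node count and sequence are illustrative):

```python
import heapq

# Prufer-style decoding: any integer sequence of length n-2 over labels
# 0..n-1 decodes to exactly one labeled tree, so a GA that mutates such
# sequences can never produce a looped (non-radial) network. This is a
# stand-in sketch, not the dissertation's exact Dandelion code.
def prufer_to_tree(seq, n):
    """Decode a Prufer sequence into the edge list of a tree on n nodes."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)   # smallest current leaf
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    # the two remaining leaves form the final edge
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges
```

Decoding `[3, 3, 3, 4]` with `n = 6` yields the five edges of a valid tree, and any mutation of the sequence decodes to another valid radial topology, which is the property the dissertation exploits to avoid infeasible networks.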
-
Date Issued
-
2015
-
Identifier
-
CFE0005575, ucf:50238
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005575
-
-
Title
-
Stochastic Optimization for Integrated Energy System with Reliability Improvement Using Decomposition Algorithm.
-
Creator
-
Huang, Yuping, Zheng, Qipeng, Xanthopoulos, Petros, Pazour, Jennifer, Liu, Andrew, University of Central Florida
-
Abstract / Description
-
As energy demands increase and energy resources change, the traditional energy system has been upgraded and reconstructed for the development and sustainability of human society. Considerable studies have been conducted on energy expansion planning and electricity generation operations, mainly considering the integration of traditional fossil fuel generation with renewable generation. Because the energy market is full of uncertainty, these uncertainties have continuously challenged market design and operations, and even national energy policy. In fact, little consideration has been given to optimizing energy expansion and generation while taking into account the variability and uncertainty of energy supply and demand in energy markets. This often leaves an energy system unable to cope with unexpected changes such as a surge in fuel price, a sudden drop in demand, or a large fluctuation in renewable supply. Thus, for an overall energy system, optimizing long-term expansion planning and market operation in a stochastic environment is crucial to improving the system's reliability and robustness. Because little attention has been paid to imposing risk measures on the power management system, this dissertation applies risk-constrained stochastic programming to improve the efficiency, reliability, and economics of energy expansion and electric power generation, respectively. Considering the supply-demand uncertainties affecting energy system stability, three optimization strategies are proposed to enhance the overall reliability and sustainability of an energy system. The first strategy optimizes regional energy expansion planning, focusing on capacity expansion of the natural gas system, the power generation system, and the renewable energy system, in addition to the transmission network.
With strong support from natural gas and electric facilities, the second strategy provides optimal day-ahead scheduling for the electric power generation system, incorporating non-generation resources, i.e., demand response and energy storage. Through risk aversion, this generation scheduling gives the power system higher reliability and promotes non-generation resources in the smart grid. To take full advantage of power generation sources, the third strategy replaces the traditional energy reserve requirements with risk constraints while ensuring the same level of system reliability. In this way, existing resources can be used to the fullest to accommodate internal and/or external changes in a power system. All problems are formulated as stochastic mixed-integer programs, particularly considering the uncertainties in fuel price, renewable energy output, and electricity demand over time. Taking advantage of the models' structure, new decomposition strategies are proposed to decompose the stochastic unit commitment problems, which are then solved by an enhanced Benders decomposition algorithm. Compared to classic Benders decomposition, the proposed solution approach increases convergence speed and thus reduces computation times by 25% on the same cases.
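The Benders (L-shaped) idea behind the decomposition can be shown on a toy one-variable, two-stage problem: a master problem picks a first-stage decision, a recourse subproblem prices the scenarios and returns an optimality cut, and the loop stops when the lower and upper bounds meet. The scenario demands, penalty cost, and the grid-search master are illustrative assumptions, not the dissertation's enhanced algorithm:

```python
# Toy L-shaped (Benders) decomposition for a one-variable two-stage problem:
#   min_x  x + E[ PENALTY * max(d_s - x, 0) ]   over x in [0, 10].
# Scenario demands and the penalty are made-up numbers; a fine grid search
# stands in for the master LP so the sketch stays dependency-free.
demands = [2.0, 5.0, 8.0]
probs = [1.0 / 3] * 3
PENALTY = 3.0                  # unit recourse cost for unserved demand

def recourse(x):
    """Expected second-stage cost Q(x) and one subgradient of Q at x."""
    q = sum(p * PENALTY * max(d - x, 0.0) for d, p in zip(demands, probs))
    g = sum(-p * PENALTY for d, p in zip(demands, probs) if d > x)
    return q, g

cuts = []                      # optimality cuts: theta >= a + b * x
x, lb, ub = 0.0, float("-inf"), float("inf")
for _ in range(20):
    q, g = recourse(x)
    ub = min(ub, x + q)        # any trial x gives a feasible upper bound
    cuts.append((q - g * x, g))
    # Master: min_x  x + max_k (a_k + b_k * x), solved on a 0.01 grid.
    lb, x = min((xx + max(a + b * xx for a, b in cuts), xx)
                for xx in (i * 0.01 for i in range(1001)))
    if ub - lb < 1e-6:         # bounds met: cut approximation is tight
        break
```

For these numbers the loop closes the gap in three iterations at an optimal cost of 8; real stochastic unit commitment replaces the grid search with a mixed-integer master and one subproblem per scenario, which is where enhanced cut strategies pay off.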
-
Date Issued
-
2014
-
Identifier
-
CFE0005506, ucf:50339
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005506
-
-
Title
-
Arrangement of Google Search Results and Imperial Ideology: Searching for Benghazi, Libya.
-
Creator
-
Stewart, Jacob, Pigg, Stacey, Rounsaville, Angela, Walls, Douglas, University of Central Florida
-
Abstract / Description
-
This project responds to an ongoing discussion in scholarship that identifies and analyzes the ideological functions of computer interfaces. In 1994, Cynthia Selfe and Richard Selfe claimed that interfaces are maps of cultural information and are therefore ideological (485). For Selfe and Selfe and other scholars, these interfaces carried a colonial ideology that resulted in Western dominance over other cultures. Since this early scholarship, our perspectives on interface have shifted with changing technology; interfaces can no longer be treated as having persistent and predictable characteristics like texts. I argue that interfaces are interactions among dynamic information that is constantly being updated online. One of the most prominent ways users interact with information online is through search engines such as Google. Interfaces like Google assist users in navigating dynamic cultural information, and how this information is arranged in a Google search event has a profound impact on the meaning we make of the search term. In this project, I argue that colonial ideologies are upheld in several Google search events for the term "Benghazi, Libya." I claim that networked connection during Google search events leads to the creation and sustainment of a colonial ideology through patterns of arrangement. Finally, I offer a methodology for understanding how ideologies are created when search events occur. This methodology searches for patterns in connected information in order to understand how they create an ideological lens.
-
Date Issued
-
2014
-
Identifier
-
CFE0005267, ucf:50559
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005267
-
-
Title
-
DIVIDED GOVERNMENT AND CONGRESSIONAL FOREIGN POLICY: A CASE STUDY OF THE POST-WORLD WAR II ERA IN AMERICAN GOVERNMENT.
-
Creator
-
Feinman, David, Houghton, David, University of Central Florida
-
Abstract / Description
-
The purpose of this research is to analyze the relationship between the executive and legislative branches of the American federal government during periods in which the two branches are led by different political parties, to discover whether the legislative branch attempts to independently legislate and enact foreign policy by using "the power of the purse" to either appropriate in support of, or refuse to appropriate in opposition to, military engagement abroad. The methodology includes the analysis and comparison of variables such as public opinion, budgetary constraints, and the relative majority of the party that holds power in one or both chambers, and the ways these variables may affect the behavior of the legislative branch in this regard. It also includes analysis of appropriations requests made by the legislative branch for funding military engagement in rejection of requests from the executive branch, for all military engagements that occurred during periods of divided government from 1946 through 2009.
-
Date Issued
-
2011
-
Identifier
-
CFE0003657, ucf:48840
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003657
-
-
Title
-
On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
-
Creator
-
Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
-
Abstract / Description
-
We study the Distributed Estimation (DES) problem, in which several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of observation noise at the sensors, quantization errors at the sensors, available power and bandwidth of the network, quality of the communication channels between sensors and the FC, and the choice of fusion rule at the FC. Taking all of these contributing factors into account and implementing a DES system that minimizes the MSE and satisfies all constraints is a challenging task. To probe different aspects of this task, we identify, formulate, and address the following three problems: 1- Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise. The communication channels between sensors and the FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels. The unknown to be estimated is a Gaussian vector. Sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: what is the best fusion rule at the FC?
What are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? To answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE. 2- Consider an inhomogeneous WSN with an FC tasked with estimating a scalar Gaussian unknown. The sensors are equipped with uniform multi-bit quantizers and the communication channels are modeled as Binary Symmetric Channels (BSC). In contrast to the former problem, the sensors experience independent multiplicative noise (in addition to additive noise). The natural questions in this scenario are: how does multiplicative noise affect DES system performance, and how does it affect the resource allocation for sensors with respect to the case without multiplicative noise? We propose a linear fusion rule at the FC and derive the associated MSE in closed form. We propose several rate allocation schemes, with different levels of complexity, that minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on DES system performance and its dynamics. We also derive the Bayesian Cramer-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: what is the minimum required bandwidth of the network to satisfy a predetermined target MSE? 3- Within the framework of Bayesian DES of a Gaussian unknown with additive and multiplicative Gaussian noise, we answer the following question: can multiplicative noise improve DES performance in any scenario? The answer is yes, and we call this phenomenon the 'enhancement mode' of multiplicative noise.
By deriving different lower bounds on the MSE, such as the BCRLB, Weiss-Weinstein Bound (WWB), Hybrid CRLB (HCRLB), Nayak Bound (NB), and Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and unknown. We also compare the performance of well-known estimators with the derived bounds, to ensure the practicability of the mentioned enhancement modes.
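The linear fusion rule in problem 2 can be illustrated in its simplest, unquantized form: for a scalar Gaussian unknown observed in independent additive Gaussian noise, the linear MMSE fusion weight of each sensor is inversely proportional to its noise variance, and the resulting MSE has a closed form. A small Monte Carlo sketch (prior and noise variances are illustrative; quantization, fading, and multiplicative noise from the dissertation's setup are omitted):

```python
import random

# Linear MMSE fusion of unquantized sensor observations y_k = theta + n_k of
# a scalar Gaussian unknown theta ~ N(0, S2): each sensor is weighted by the
# inverse of its noise variance, and the achieved MSE matches the closed
# form 1 / (1/S2 + sum_k 1/v_k). All variances here are made-up numbers.
random.seed(0)
S2 = 2.0                        # prior variance of the unknown theta
NOISE_VARS = [0.5, 1.0, 4.0]    # per-sensor additive-noise variances

def lmmse(obs):
    """Fuse noisy observations into the linear MMSE estimate of theta."""
    num = sum(y / v for y, v in zip(obs, NOISE_VARS))
    return num / (1.0 / S2 + sum(1.0 / v for v in NOISE_VARS))

theory_mse = 1.0 / (1.0 / S2 + sum(1.0 / v for v in NOISE_VARS))

TRIALS = 100_000
err2 = 0.0
for _ in range(TRIALS):
    theta = random.gauss(0.0, S2 ** 0.5)
    obs = [theta + random.gauss(0.0, v ** 0.5) for v in NOISE_VARS]
    err2 += (lmmse(obs) - theta) ** 2
emp_mse = err2 / TRIALS         # Monte Carlo MSE, close to theory_mse
```

Note how the noisiest sensor (variance 4.0) contributes the smallest weight; the dissertation's rate allocation question is essentially how many quantization bits each such sensor deserves given its weight.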
-
Date Issued
-
2017
-
Identifier
-
CFE0006913, ucf:51698
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006913
-
-
Title
-
Theoretical Study of Laser Beam Quality and Pulse Shaping by Volume Bragg Gratings.
-
Creator
-
Kaim, Sergiy, Zeldovich, Boris, Flitsiyan, Elena, Leuenberger, Michael, Likamwa, Patrick, University of Central Florida
-
Abstract / Description
-
The theory of stretching and compressing short light pulses by chirped volume Bragg gratings (CBGs) is reviewed, based on spectral decomposition of short pulses and on wavelength-dependent coupled wave equations. The analytic theory of the diffraction efficiency of a CBG with constant chirp and an approximate theory of time delay dispersion are presented. Based on those, we compare the approximate analytic results with exact numeric coupled-wave modeling. We also study theoretically various definitions of laser beam width in a given cross-section. The quality of the beam is characterized by the dimensionless beam propagation product (Δx·Δθ_x)/λ, which differs for each of the 21 definitions. We study six particular beams and introduce an axially symmetric self-MFT (mathematical Fourier transform) function, which may be useful for the description of diffraction-quality beams. Furthermore, we discuss various saturation curves and their influence on the amplitudes of recorded gratings. Special attention is given to multiplexed volume Bragg gratings (VBGs), aimed at recording several gratings in the same volume. The best shape of saturation curve for producing the strongest gratings is found to be the threshold-type curve. Both one-photon and two-photon absorption mechanisms of recording are investigated. Finally, by means of simulation software we investigate forced-airflow cooling of a VBG heated by a laser beam. Two setup configurations are considered, and a number of temperature distributions and thermal deformations are obtained for different airflow rates. Simulation results are compared to experimental data and show good mutual agreement.
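Among the candidate beam-width definitions, the second-moment (variance) width is the one most definitions are compared against, and it can be checked numerically: for an ideal Gaussian intensity profile of 1/e² radius w, the second-moment half-width equals w/2. A short sketch (grid and radius values are illustrative):

```python
import math

# Numerical check of the second-moment (variance-based) beam-width
# definition: for a Gaussian intensity profile of 1/e^2 radius W, the
# second-moment half-width sigma equals W / 2. Grid and radius are
# illustrative values.
W = 1.3e-3                                       # 1/e^2 radius, metres
xs = [i * 1e-6 for i in range(-5000, 5001)]      # 1 um grid over +/- 5 mm
I = [math.exp(-2.0 * x * x / (W * W)) for x in xs]

total = sum(I)
mean = sum(x * p for x, p in zip(xs, I)) / total             # beam centroid
sigma = math.sqrt(sum((x - mean) ** 2 * p for x, p in zip(xs, I)) / total)
# sigma comes out at W / 2 = 0.65 mm for this ideal Gaussian
```

For non-Gaussian profiles the 21 definitions diverge, which is exactly why the resulting width-divergence products differ from definition to definition.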
-
Date Issued
-
2015
-
Identifier
-
CFE0005638, ucf:50210
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005638
-
-
Title
-
SPRAY COOLING FOR LAND, SEA, AIR AND SPACE BASED APPLICATIONS, A FLUID MANAGEMENT SYSTEM FOR MULTIPLE NOZZLE SPRAY COOLING AND A GUIDE TO HIGH HEAT FLUX HEATER DESIGN.
-
Creator
-
Glassman, Brian, Chow, Louis, University of Central Florida
-
Abstract / Description
-
This thesis is divided into four distinct chapters, all linked by the topic of spray cooling. Chapter one gives a detailed categorization of future and current spray cooling applications and reviews the major advantages and disadvantages that spray cooling has over other high-heat-flux cooling techniques. Chapter two outlines the developmental goals of spray cooling, which are to increase the output of current systems and to make new technologies technically feasible. Furthermore, this chapter details the impact that land, air, sea, and space environments have on the cooling system and which technologies could be enabled in each environment with the aid of spray cooling. In particular, the heat exchanger, condenser, and radiator are analyzed in their corresponding environments. Chapter three presents an experimental investigation of a fluid management system for a large-area multiple-nozzle spray cooler. A fluid management (suction) system was used to control the liquid film thickness needed for effective heat transfer. An array of sixteen pressure-atomized spray nozzles with an embedded fluid suction system was constructed. Two surfaces were spray tested: a clear grooved Plexiglas plate used for visualization, and a bottom-heated grooved 4.5 x 4.5 cm² copper plate used to determine the heat flux. The suction system utilized an array of thin copper tubes to extract excess liquid from the cooled surface. Pure water was ejected from two spray nozzle configurations at flow rates of 0.7 L/min to 1 L/min per nozzle. The fluid management system provided fluid removal efficiencies of 98% with a 4-nozzle array and 90% with the full 16-nozzle array in the downward-spraying orientation. The corresponding heat fluxes for the 16-nozzle configuration were found with and without the aid of the fluid management system.
The fluid management system increased heat fluxes by an average of 30 W/cm² at similar values of superheat. Unfortunately, the effectiveness of this array at removing heat at full levels of suction is approximately 50% and 40% of a single nozzle at 10°C and 15°C of superheat, respectively. The heat transfer data more closely resembled convective pool boiling. It was therefore concluded that the poor heat transfer was due to flooding, which made the heat transfer mechanism mainly forced convective boiling rather than spray cooling. Finally, Chapter four gives a detailed guide to the design and construction of a high-heat-flux heater for experimental uses where accurate measurement of surface temperatures and heat fluxes is extremely important. The heater designs presented allow for different testing applications; however, an emphasis is placed on heaters designed for use with spray cooling.
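High-heat-flux heaters of the kind described in Chapter four typically infer the surface heat flux and surface temperature from thermocouples embedded along a one-dimensional conduction path. A minimal sketch of that reduction using Fourier's law (conductivity, spacings, and temperatures are illustrative, not the thesis's measured values):

```python
# One-dimensional Fourier-conduction reduction: surface heat flux and
# extrapolated surface temperature from two thermocouples embedded along a
# copper heater neck. All numbers are illustrative, not measurements.
K_CU = 390.0                   # W/(m*K), thermal conductivity of copper
DX = 5.0e-3                    # m, spacing between the two thermocouples
T_HOT, T_COLD = 160.0, 120.0   # deg C (hot junction nearer the surface)

q_flux = K_CU * (T_HOT - T_COLD) / DX         # W/m^2 through the neck
q_wcm2 = q_flux / 1.0e4                       # convert to W/cm^2

DX_SURF = 2.0e-3               # m, hot thermocouple to cooled surface
t_surface = T_HOT - q_flux * DX_SURF / K_CU   # linear extrapolation, deg C
```

With these numbers the neck carries 312 W/cm² and the surface sits at 144 °C; the accuracy of both values hinges on well-known conductivity and precise thermocouple placement, which is why the heater-design guide emphasizes them.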
-
Date Issued
-
2005
-
Identifier
-
CFE0000473, ucf:46351
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000473
-
-
Title
-
Shock Tube Investigations of Novel Combustion Environments Towards a Carbon-Neutral Future.
-
Creator
-
Barak, Samuel, Vasu Sumathi, Subith, Kapat, Jayanta, Ahmed, Kareem, University of Central Florida
-
Abstract / Description
-
Supercritical carbon dioxide (sCO2) cycles are being investigated for the future of power generation. These cycles will contribute to a carbon-neutral future to combat the effects of climate change. These direct-fired closed cycles will produce power without adding significant pollutants to the atmosphere. For these cycles to be efficient, they will need to operate at significantly higher pressures (e.g., 300 atm for the Allam Cycle) than existing systems (typically less than 40 atm). There is limited knowledge of combustion at these pressures or at high dilution in carbon dioxide. Nominal fuel choices for gas turbines include natural gas and syngas (a mixture of CO and H2). Shock tubes are used to study these problems in order to understand the fundamentals and solve various challenges. The author has conducted shock tube experiments in the sCO2 regime for various fuels, including natural gas, methane, and syngas. Pressure and light-emission time histories were measured at an axial location 2 cm from the end wall. Experiments for syngas at lower pressure utilized high-speed imaging through the end wall to investigate the effects of bifurcation. It was found that carbon dioxide created unique interactions with the shock tube compared to traditional bath gases such as argon. The experimental results were compared to predictions from leading chemical kinetic mechanisms. In general, the mechanisms can predict the experimental data for methane and other hydrocarbon fuels; however, the models overpredict for syngas mixtures. Reaction pathway analysis was performed to determine where the models need improvement. A new shock tube has been designed and built to operate at pressures up to 1000 atm for future high-pressure experiments; details of this new facility are included in this work. The experiments in this work are necessary for the mechanism development needed to design an efficient combustor to operate these cycles.
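Shock tube test conditions behind the incident shock are conventionally obtained from the measured shock speed via the ideal-gas normal-shock relations. A sketch assuming frozen chemistry and a constant ratio of specific heats (the gamma value used for a CO2-diluted mixture is an assumption, not a measured property):

```python
# Ideal-gas normal-shock relations: pressure and temperature behind the
# incident shock ("region 2") from the shock Mach number, assuming frozen
# chemistry and constant specific-heat ratio. GAMMA for a CO2-rich mixture
# is an assumed illustrative value.
GAMMA = 1.29

def post_shock(mach, t1=300.0, p1=1.0):
    """Return (p2, T2) behind a normal shock of the given Mach number."""
    g = GAMMA
    m2 = mach * mach
    p2 = p1 * (2.0 * g * m2 - (g - 1.0)) / (g + 1.0)
    t2 = t1 * (2.0 * g * m2 - (g - 1.0)) * ((g - 1.0) * m2 + 2.0) \
         / ((g + 1.0) ** 2 * m2)
    return p2, t2
```

A Mach-1 "shock" leaves the gas unchanged, while stronger shocks raise both pressure and temperature together, which is how a single driven section accesses the high-temperature, high-pressure conditions relevant to sCO2 combustors.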
-
Date Issued
-
2019
-
Identifier
-
CFE0007781, ucf:52359
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007781
-
-
Title
-
A Psychophysical Approach to Standardizing Texture Compression for Virtual Environments.
-
Creator
-
Flynn, Jeremy, Szalma, James, Fidopiastis, Cali, Jentsch, Florian, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Image compression is a technique to reduce overall data size, but its effects on human perception have not been clearly established. The purpose of this effort was to determine the most effective psychophysical method for subjective image quality assessment and to apply those findings to an objective algorithm. This algorithm was used to identify the minimum level of texture compression noticeable to a human, in order to determine whether compression-induced texture distortion impacted game-play outcomes. Four experiments tested several hypotheses. The first hypothesis evaluated which of three magnitude estimation (ME) methods (absolute ME, absolute ME plus, or ME with a standard) for image quality assessment was the most reliable. The just noticeable difference (JND) point for texture compression was determined against the Feature Similarity Index for color. The second hypothesis tested whether human participants perceived the same amount of distortion differently when textures were presented in three ways: as flat images; wrapped around a model; and wrapped around models within a virtual environment. The last set of hypotheses examined whether compression affected both subjective (immersion, technology acceptance, usability) and objective (performance) gameplay outcomes. The results were as follows: the absolute magnitude estimation method was the most reliable; no difference was observed in the JND threshold between flat textures and textures placed on models, but textures embedded within the virtual environment were more noticeable than in the other two presentation formats; there were no differences in subjective gameplay outcomes when textures were compressed below the JND thresholds; and those who played a game with uncompressed textures performed better on in-game tasks than those with compressed textures, but only on the first in-game day.
Practitioners and researchers can use these findings to guide their approaches to texture compression and experimental design.
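Detection thresholds like the JND points above are commonly located with adaptive staircase procedures; a simulated 1-up/1-down staircase, which converges to the 50% point of the psychometric function, illustrates the idea. The observer model, threshold value, and step sizes are illustrative, and the dissertation's actual procedure used magnitude estimation rather than a staircase:

```python
import math
import random

# Simulated 1-up/1-down staircase for locating a detection threshold.
# The "observer" answers through a logistic psychometric function centred
# on a hypothetical distortion level of 0.4 (all numbers illustrative).
random.seed(1)
TRUE_THRESHOLD = 0.4

def observer_detects(level):
    """Simulated observer: detection probability rises around the threshold."""
    p = 1.0 / (1.0 + math.exp(-(level - TRUE_THRESHOLD) / 0.05))
    return random.random() < p

level, step = 1.0, 0.1
prev_dir, reversals = None, []
for _ in range(200):
    # step down when the distortion is seen, up when it is missed
    direction = -1 if observer_detects(level) else +1
    if prev_dir is not None and direction != prev_dir:
        reversals.append(level)              # direction change: a reversal
        step = max(step * 0.8, 0.01)         # shrink the step while homing in
    prev_dir = direction
    level = min(max(level + direction * step, 0.0), 1.0)

estimate = sum(reversals[-8:]) / len(reversals[-8:])  # average late reversals
```

The average of the late reversal levels settles near the simulated threshold; in practice the chosen psychophysical method (here, the ME variants the dissertation compared) determines how efficiently and reliably that point is reached.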
-
Date Issued
-
2018
-
Identifier
-
CFE0007178, ucf:52250
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007178
-
-
Title
-
The Response of American Police Agencies to Digital Evidence.
-
Creator
-
Yesilyurt, Hamdi, Wan, Thomas, Potter, Roberto, Applegate, Brandon, Lang, Sheau-Dong, University of Central Florida
-
Abstract / Description
-
Little is known about the variation in digital forensics practice in the United States as adopted by large local police agencies. This study investigated how environmental constraints, contextual factors, organizational complexity, and organizational control relate to the adoption of digital forensics practice. This study integrated 3 theoretical perspectives in organizational studies to guide the analysis of the relations: institutional theory, contingency theory, and adoption-of-innovation...
Show moreLittle is known about the variation in digital forensics practice in the United States as adopted by large local police agencies. This study investigated how environmental constraints, contextual factors, organizational complexity, and organizational control relate to the adoption of digital forensics practice. This study integrated 3 theoretical perspectives in organizational studies to guide the analysis of the relations: institutional theory, contingency theory, and adoption-of-innovation theory. Institutional theory was used to analyze the impact of environmental constraints on the adoption of innovation, and contingency theory was used to examine the impacts of organizational control on the adoption of innovation. Adoption of innovation theory was employed to describe the degree to which digital forensics practice has been adopted by large municipal police agencies having 100 or more sworn police officers.The data set was assembled primarily by using Law Enforcement Management and Administrative Statistics (LEMAS) 2003 and 1999. Dr. Edward Maguire`s survey was used to obtain 1 variable. The joining up of the data set to construct the sample resulted in 345 large local police agencies. The descriptive results on the degree of adoption of digital forensics practice indicate that 37.7% of large local police agencies have dedicated personnel to address digital evidence, 32.8% of police agencies address digital evidence but do not have dedicated personnel, and only 24.3% of police agencies have a specialized unit with full-time personnel to address digital evidence. About 5% of local police agencies do nothing to address digital evidence in any circumstance. These descriptive statistics indicate that digital evidence is a matter of concern for most large local police agencies and that they respond to varying degrees to digital evidence at the organizational level. Agencies that have not adopted digital forensics practice are in the minority. 
Structural equation modeling was used to test the hypothesized relations, enabling rigorous analysis of the relations between latent constructs and several indicator variables. Environmental constraints have the largest impact on the adoption of innovation, exerting a positive influence. No statistically significant relation was found between organizational control and the adoption of digital forensics practice. Contextual factors (task scope and personnel size) positively influence the adoption of digital forensics. Structural control factors, including administrative weight and formalization, have no significant influence on the adoption of innovation. The conclusions of the study are as follows. Police agencies adopt digital forensics practice primarily in response to environmental constraints. Police agencies exposed to higher environmental constraints are more likely to adopt digital forensics practice. Because the organizational control of police agencies is not significantly related to digital forensics practice adoption, police agencies do not take their organizational control extensively into consideration when they consider adopting digital forensics practice. The positive influence of task scope and size on digital forensics practice adoption was expected. The extent of task scope and the number of personnel indicate a higher capacity for police agencies to adopt digital forensics practice. Administrative weight and formalization do not influence the adoption of digital forensics practice. Therefore, structural control and coordination are not important for large local police agencies in adopting digital forensics practice. The results of the study indicate that the adoption of digital forensics practice is based primarily on environmental constraints. Therefore, more drastic impacts on digital forensics practice should be expected from local police agencies' environments than from internal organizational factors. 
Researchers investigating the influence of various factors on the adoption of digital forensics practice should further examine environmental variables. The unexpected results concerning the impact of administrative weight and formalization should be researched with broader considerations.
-
Date Issued
-
2011
-
Identifier
-
CFE0004181, ucf:49081
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004181
-
-
Title
-
CMOS RF CIRCUITS VARIABILITY AND RELIABILITY RESILIENT DESIGN, MODELING, AND SIMULATION.
-
Creator
-
Liu, Yidong, Yuan, Jiann-Shiun, University of Central Florida
-
Abstract / Description
-
The work presents a novel voltage biasing design that makes CMOS RF circuits resilient to variability and reliability degradation. The biasing scheme provides resilience through threshold voltage (VT) adjustment, and at the same time it does not degrade the PA performance. Analytical equations are established for the sensitivity of the resilient biasing under various scenarios. The Power Amplifier (PA) and Low Noise Amplifier (LNA) are investigated case by case through modeling and experiment. PTM 65nm technology is adopted in modeling the transistors within these RF blocks. A traditional class-AB PA with the resilient design is compared to the same PA without such design in PTM 65nm technology. The results show that the biasing design helps improve the robustness of the PA in terms of linear gain, P1dB, Psat, and power added efficiency (PAE). In addition to its post-fabrication calibration capability, the design reduces the sensitivity of most PA performance metrics by 50% when subjected to threshold voltage (VT) shift and by 25% under electron mobility (μn) degradation. The impact of degradation mismatches is also investigated. It is observed that accelerated aging of the MOS transistor in the biasing circuit further reduces the sensitivity of the PA. In the study of the LNA, a 24 GHz narrow-band cascade LNA with an adaptive biasing scheme under various aging rates is compared to an LNA without such a biasing scheme. The modeling and simulation results show that the adaptive substrate biasing reduces the sensitivity of the noise figure and minimum noise figure to process variation and device aging such as threshold voltage shift and electron mobility degradation. 
Simulations at different aging rates also show that the sensitivity of the LNA is further reduced with accelerated aging of the biasing circuit. Thus, for most RF transceiver circuits, the adaptive body biasing scheme provides overall performance resilience to device-reliability-induced degradation. The tuning ability designed into the RF PA and LNA also provides post-process calibration capability.
-
Date Issued
-
2011
-
Identifier
-
CFE0003595, ucf:48861
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003595
-
-
Title
-
AN EXAMINATION OF CENTRAL ASIAN GEOPOLITICS THROUGH THE EXPECTED UTILITY MODEL: THE NEW GREAT GAME.
-
Creator
-
Stutte, Corey, Wan, Thomas, University of Central Florida
-
Abstract / Description
-
The New Great Game is a geopolitical competition between regional stakeholders over energy resources in Central Asia. The author seeks to use the expected utility voting model based on Black's median voter theorem for forecasting the New Great Game in Central Asia. To judge the external validity of the voting model, the author uses data from the Correlates of War project data set to formulate three distinct models based only on the numbers in 1992 and 1993. Capabilities and alliance data were used to develop balance-of-power positions and to compare the outcome of 100 simulations to the actual outcome in 2000, based on Correlates of War project data. This allows us to judge whether the emergence of Russia's weak advantage, as well as the continuation of the competition in the New Great Game as of 2000, could have been predicted based on what was known in 1992 and 1993. By using only one year's data to forecast the New Great Game, we are able to eliminate historical and researcher bias and judge the applicability of the model in global policy and strategic analysis.
-
Date Issued
-
2009
-
Identifier
-
CFE0002861, ucf:48088
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002861
-
-
Title
-
Broad Bandwidth, All-fiber, Thulium-doped Photonic Crystal Fiber Amplifier for Potential Use in Scaling Ultrashort Pulse Peak Powers.
-
Creator
-
Sincore, Alex, Richardson, Martin, Shah, Lawrence, Amezcua Correa, Rodrigo, University of Central Florida
-
Abstract / Description
-
Fiber-based ultrashort pulse laser sources are desirable for many applications; however, generating high peak powers in fiber lasers is primarily limited by the onset of nonlinear effects such as self-phase modulation, stimulated Raman scattering, and self-focusing. Increasing the fiber core diameter mitigates the onset of these nonlinear effects, but also allows unwanted higher-order transverse spatial modes to propagate. Both large core diameters and single-mode propagation can be simultaneously attained using photonic crystal fibers. Thulium-doped fiber lasers are attractive for high peak power ultrashort pulse systems. They offer a broad gain bandwidth, capable of amplifying sub-100 femtosecond pulses. The longer center wavelength at 2 μm theoretically enables higher peak powers relative to 1 μm systems, since nonlinear effects inversely scale with wavelength. Also, the 2 μm emission is desirable to support applications reaching further into the mid-IR. This work evaluates the performance of a novel all-fiber pump combiner that incorporates a thulium-doped photonic crystal fiber. This fully integrated amplifier is characterized and possesses a large gain bandwidth, essentially single-mode propagation, and a high degree of polarization. This innovative all-fiber, thulium-doped photonic crystal fiber amplifier has great potential for enabling high peak powers in 2 μm fiber systems; however, the current optical-to-optical efficiency is low relative to similar free-space amplifiers. Further development and device optimization will lead to higher efficiencies and improved performance.
-
Date Issued
-
2014
-
Identifier
-
CFE0005260, ucf:50611
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005260
-
-
Title
-
THE INCOMPATIBILITY OF FREEDOM OF THE WILL AND ANTHROPOLOGICAL PHYSICALISM.
-
Creator
-
Gonzalez, Ariel, Rodgers, Travis, University of Central Florida
-
Abstract / Description
-
Many contemporary naturalistic philosophers have taken it for granted that a robust theory of free will, one which would afford us an agency substantial enough to render us morally responsible for our actions, is not conceptually compatible with the philosophical theory of naturalism. I attempt to account for why it is that free will (in its most substantial form) cannot be plausibly located within a naturalistic understanding of the world. I consider the issues surrounding an acceptance of a robust theory of free will within a naturalistic framework. Timothy O'Connor's reconciliatory effort to maintain both a scientifically naturalist understanding of the human person and a full-blooded theory of agent-causal libertarian free will is considered. I conclude that Timothy O'Connor's reconciliatory model cannot be maintained, and I reference several conceptual difficulties surrounding the reconciliation of agent-causal libertarian properties with physical properties that haunt the naturalistic libertarian.
-
Date Issued
-
2014
-
Identifier
-
CFH0004628, ucf:45292
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004628
-
-
Title
-
The Tragic City: Black Rebellion and the Struggle for Freedom in Miami, 1945-1990.
-
Creator
-
Dossie, Porsha, Lester, Connie, French, Scot, Walker, Ezekiel, University of Central Florida
-
Abstract / Description
-
This thesis examines the creation of South Florida's tri-ethnic racial hierarchy during the postwar period, from 1945 to 1990. This racial hierarchy, coupled with discriminatory housing practices and police violence, created the necessary conditions for Dade County's first deadly uprising in 1968. Following the acquittal of several officers charged in the killing of an unarmed black businessman, a second uprising in 1980 culminated in three days and three nights of violent street warfare between law enforcement and black residents in Miami's northwest Liberty City neighborhood. The presence of state-sanctioned violence at the hands of police in Liberty City set the stage for the city's second uprising. Further, the oftentimes murky and ambiguous racial divide that made people of color both comrades and rivals within Miami's larger power structure resulted in an Anglo-Cuban alliance by the late 1960s and early 1970s that only worsened racial tensions, especially among the city's ethnically diverse, English-speaking black population. This thesis project uses a socio-historical framework to investigate how race and immigration, police brutality, and federal housing policy created a climate in which one of Miami's most vulnerable populations resorted to collective violence.
-
Date Issued
-
2018
-
Identifier
-
CFE0007173, ucf:52269
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007173
-
-
Title
-
Reexamining the Relationship Between Divided Government and Voter Turnout.
-
Creator
-
Beck, Heidi, Knuckey, Jonathan, Jewett, Aubrey, Lanier, Drew, University of Central Florida
-
Abstract / Description
-
This thesis reexamines the effect of divided government on voter turnout originally posited by Franklin and Hirczy de Miño (1998), which suggested that each year of exposure to divided government resulted in a cumulative negative effect on voters, leading to alienation and lower turnout. It reconsiders this argument using more recent data, given that voter turnout in U.S. presidential elections (as measured by the Voting Eligible Population) has increased since 2000, even though divided government has occurred during this period. This thesis also uses new data and methods to address concerns about the original aggregate-level research design. The research question is tested at the individual level of analysis to determine if divided government does interact with political trust to lower turnout. Previous research assumed this relationship since there is no aggregate-level proxy for political trust. By using survey data from the American National Election Studies, it is now possible to test the full theory. The aggregate-level models show that misspecifications in the research design of Franklin and Hirczy de Miño resulted in multicollinearity, and in two instances autocorrelation, which led to a failure to reject the null hypothesis. The individual-level models show that divided government interacts with low levels of political trust to increase voter turnout, falsifying the argument about the effect of divided government on turnout. Overall, the thesis suggests that the implications of an aspect of the American political system that renders it distinguishable from most other advanced-industrial democracies, divided party control of the executive and legislative branches, should be reassessed. More generally, the thesis demonstrates the importance of reevaluating hypotheses in political science with the most recent data and more robust methods in order to establish whether those original hypotheses are still supported.
-
Date Issued
-
2019
-
Identifier
-
CFE0007783, ucf:52363
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007783
-
-
Title
-
Microscopic Assessment of Transportation Emissions on Limited Access Highways.
-
Creator
-
Abou-Senna, Hatem, Radwan, Ahmed, Abdel-Aty, Mohamed, Al-Deek, Haitham, Cooper, Charles, Johnson, Mark, University of Central Florida
-
Abstract / Description
-
On-road vehicles are a major source of transportation carbon dioxide (CO2) greenhouse gas emissions in all the developed countries, and in many of the developing countries in the world. Similarly, several criteria air pollutants are associated with transportation, e.g., carbon monoxide (CO), nitrogen oxides (NOx), and particulate matter (PM). The need to accurately quantify transportation-related emissions from vehicles is essential. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The new United States Environmental Protection Agency (USEPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to develop new software, "VIMIS 1.0" (VISSIM/MOVES Integration Software), to facilitate the integration process. This research presents a microscopic examination of five key transportation parameters (traffic volume, speed, truck percentage, road grade, and temperature) on a 10-mile stretch of an Interstate 4 (I-4) test bed prototype, an urban limited access highway corridor in Orlando, Florida. The analysis was conducted utilizing VIMIS 1.0 and using an advanced custom design technique (D-Optimality and I-Optimality criteria) to identify active factors and to ensure precision in estimating the regression coefficients as well as the response variable. The analysis of the experiment identified the optimal settings of the key factors and resulted in the development of Micro-TEM (Microscopic Transportation Emissions Meta-Model). 
The main purpose of Micro-TEM is to serve as a substitute model for predicting transportation emissions on limited access highways to an acceptable degree of accuracy in lieu of running simulations using a traffic model and integrating the results in an emissions model. Furthermore, significant emission rate reductions were observed from the experiment on the modeled corridor, especially for speeds between 55 and 60 mph, while maintaining up to 80% and 90% of the freeway's capacity. However, vehicle activity characterization in terms of speed was shown to have a significant impact on the emission estimation approach. Four different approaches were further examined to capture the environmental impacts of vehicular operations on the modeled test bed prototype. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link driving schedules (LDS), and second-by-second operating mode distributions (OPMODE). This research analyzed how the various approaches affect predicted emissions of CO, NOx, PM, and CO2. The results demonstrated that obtaining accurate and comprehensive operating mode distributions on a second-by-second basis improves emission estimates. Specifically, emission rates were found to be highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, frequent braking/coasting, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Additionally, model applications and mitigation scenarios were examined on the modeled corridor to evaluate the environmental impacts in terms of vehicular emissions and, at the same time, validate the developed model, Micro-TEM. 
Mitigation scenarios included the future implementation of managed lanes (ML) along with the general use lanes (GUL) on the I-4 corridor, the currently implemented variable speed limits (VSL) scenario, as well as a hypothetical restricted truck lane (RTL) scenario. Results of the mitigation scenarios showed an overall speed improvement on the corridor, which resulted in an overall reduction in emissions and emission rates when compared to the existing condition (EX) scenario, and specifically on a link-by-link basis for the RTL scenario. The proposed emission rate estimation process can also be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
-
Date Issued
-
2012
-
Identifier
-
CFE0004777, ucf:49788
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004777
-
-
Title
-
BULLYING: OUT OF THE SCHOOL HALLS AND INTO THE WORKPLACE.
-
Creator
-
Cooney, Lucretia, Huff-Corzine, Lin, University of Central Florida
-
Abstract / Description
-
The primary purpose of this study is to identify the people at most risk of being bullied at work. While much research is being conducted on school bullying, little has been conducted on workplace bullying. Using data gathered from a 2004 study conducted by the National Opinion Research Center for the General Social Survey, which included a Quality of Work Life (QWL) module for the National Institute for Occupational Safety and Health (NIOSH), linear regressions indicated significant findings. As predicted, workers in lower-level occupations, as ranked by the prestige scoring developed at the National Opinion Research Center, are more likely to be victimized. Data also suggest that being young, Black, and relatively uneducated may contribute to being bullied in certain situations. Future research is needed to examine influences of socio-economic, legal, and other demographic factors that may predict the chance of being bullied.
-
Date Issued
-
2010
-
Identifier
-
CFE0003235, ucf:48512
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003235
-
-
Title
-
INVESTIGATION OF PS-PVD AND EB-PVD THERMAL BARRIER COATINGS OVER LIFETIME USING SYNCHROTRON X-RAY DIFFRACTION.
-
Creator
-
Northam, Matthew, Raghavan, Seetha, Ghosh, Ranajay, Vaidyanathan, Raj, University of Central Florida
-
Abstract / Description
-
Extreme operating temperatures within the turbine section of jet engines require sophisticated methods of cooling and material protection. Thermal barrier coatings (TBCs) achieve this through a ceramic coating applied to a substrate material (nickel-based superalloy). Electron-beam physical vapor deposition (EB-PVD) is the industry-standard coating used on jet engines. By tailoring the microstructure of an emerging deposition method, plasma-spray physical vapor deposition (PS-PVD), microstructures similar to those of EB-PVD coatings can be fabricated, allowing the benefits of strain tolerance to be obtained while improving coating deposition times. This work investigates the strain through the depth of uncycled and cycled samples produced with these coating techniques using synchrotron X-ray diffraction (XRD). In the TGO, room-temperature XRD measurements indicated that samples from both deposition methods showed similar in-plane compressive stresses after 300 and 600 thermal cycles. In-situ XRD measurements indicated similar high-temperature in-plane and out-of-plane stress in the TGO and no spallation after 600 thermal cycles for both coatings. Tensile in-plane residual stresses were found in the YSZ of uncycled PS-PVD samples, similar to APS coatings. PS-PVD samples showed, in most cases, higher compressive residual in-plane stress at the YSZ/TGO interface. These results provide valuable insight for optimizing the PS-PVD processing parameters to obtain strain compliance similar to that of EB-PVD. Additionally, external cooling methods used for thermal management in jet engine turbines were investigated. In this work, an additively manufactured lattice structure providing transpiration cooling holes is designed, and residual strains are measured within an AM transpiration cooling sample using XRD. Strains within the lattice structure were found to have greater variation than those of the AM solid wall. 
These results provide valuable insight into the viability of implementing an AM lattice structure in turbine blades for the use of transpiration cooling.
-
Date Issued
-
2019
-
Identifier
-
CFE0007844, ucf:52830
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007844
-
-
Title
-
Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
-
Creator
-
Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
-
Abstract / Description
-
Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW utilizes genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW as compared with traditional approaches has been limited to relatively compact circuits. This is due to the increase in complexity of the genetic algorithm with increasing circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of relatively large benchmark circuits as compared to related works. Additionally, Design Diversity (DD) techniques, which are used to aid such evolutionary refurbishment techniques, are also proposed, and the efficacy of various DD techniques is quantified and evaluated. Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs, for ensuring operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations owing to decreased feature sizes for electronic devices. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. 
At design time, vulnerable portions of the circuit identified using conventional Electronic Design Automation tools are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions that optimize performance and energy consumption. Primarily, SREL is able to mitigate the timing degradations caused by transistor aging effects in sub-micron devices by using power-gating to reduce the stress induced on active elements. As a result, fewer guardbands need to be included to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime. The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, it comes at the cost of increased reliability concerns, such as the increase in performance variations and soft errors. This has given rise to increased resiliency requirements for HPC applications in terms of ensuring functionality within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems. A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault propagation models are derived for each HPC application that can be used to estimate the number of corrupted memory locations at runtime. 
Additionally, the tradeoffs between performance and vulnerability and the causal relations between compiler optimization and application vulnerability are investigated.
-
Date Issued
-
2015
-
Identifier
-
CFE0006206, ucf:52889
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006206