Current Search: DeMara, Ronald
- Title
- A Framework for Modeling Attacker Capabilities with Deception.
- Creator
-
Hassan, Sharif, Guha, Ratan, Bassiouni, Mostafa, Chatterjee, Mainak, DeMara, Ronald, University of Central Florida
- Abstract / Description
-
In this research we built a custom experimental range using open-source emulated and custom pure honeypots designed to detect or capture attacker activity. The focus is to test the effectiveness of a deception, in terms of its ability to evade detection, against attackers of varying skill levels. The range consists of three zones accessible via virtual private networking. The first zone houses varying configurations of open-source emulated honeypots, custom-built pure honeypots, and real SSH servers. The second zone acts as a point of presence for attackers. The third zone is for administration and monitoring. Using the range, both control and participant-based experiments were conducted. We conducted control experiments to baseline and empirically explore honeypot detectability amongst other systems through adversarial testing. We executed a series of tests such as a network service sweep, enumeration scanning, and finally manual execution. We also selected participants of varying skill, each with unique tactics, techniques, and procedures, to serve as cyber attackers against the experimental range and attempt to detect the honeypots. We have concluded the experiments and performed data analysis. We measure the anticipated threat by presenting the Attacker Bias Perception Profile model. Using this model, each participant is ranked based on their overall threat classification and impact. This model is applied to the participants' results, which helps align the threat with the likelihood and impact of a honeypot being detected. The results indicate that the pure honeypots are significantly difficult to detect. Emulated honeypots are grouped into different categories based on detection outcomes and the skills of the attackers. We developed a framework abstracting the deceptive process, the interaction with system elements, the use of intelligence, and the relationship with attackers. The framework is illustrated through our experimental case studies, covering the attackers' actions, their effects on the system, and their impact on the success of the deception.
- Date Issued
- 2019
- Identifier
- CFE0007467, ucf:52659
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007467
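The record above describes ranking participants by overall threat classification and impact. The dissertation's Attacker Bias Perception Profile model is not specified here, so the following is only a generic, hypothetical likelihood-impact scoring sketch in Python; the participant names, scores, and weighting are all invented for illustration.

```python
# Hypothetical likelihood-impact ranking sketch (not the dissertation's model).
# Each participant gets a likelihood score (proxy for skill/detection ability)
# and an impact score (consequence of a honeypot being detected), both on 1-5.
participants = {
    "P1": {"likelihood": 4, "impact": 5},
    "P2": {"likelihood": 2, "impact": 3},
    "P3": {"likelihood": 5, "impact": 4},
}

def threat_score(profile):
    # Classic risk-matrix style aggregation: likelihood x impact.
    return profile["likelihood"] * profile["impact"]

ranking = sorted(participants.items(), key=lambda kv: threat_score(kv[1]), reverse=True)
for name, prof in ranking:
    print(f"{name}: score={threat_score(prof)} "
          f"(likelihood={prof['likelihood']}, impact={prof['impact']})")
```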
- Title
- Heterogeneous Reconfigurable Fabrics for In-circuit Training and Evaluation of Neuromorphic Architectures.
- Creator
-
Mohammadizand, Ramtin, DeMara, Ronald, Lin, Mingjie, Sundaram, Kalpathy, Fan, Deliang, Wu, Annie, University of Central Florida
- Abstract / Description
-
A heterogeneous device technology reconfigurable logic fabric is proposed which leverages the cooperating advantages of distinct magnetic random access memory (MRAM)-based look-up tables (LUTs) to realize sequential logic circuits, along with conventional SRAM-based LUTs to realize combinational logic paths. The resulting Hybrid Spin/Charge FPGA (HSC-FPGA) using magnetic tunnel junction (MTJ) devices within this topology demonstrates commensurate reductions in area and power consumption over fabrics having LUTs constructed with either individual technology alone. Herein, a hierarchical top-down design approach is used to develop the HSC-FPGA starting from the configurable logic block (CLB) and slice structures down to LUT circuits and the corresponding device fabrication paradigms. This facilitates a novel architectural approach to reduce leakage energy, minimize communication occurrence and energy cost by eliminating unnecessary data transfer, and support auto-tuning for resilience. Furthermore, the HSC-FPGA enables new advantages of technology co-design which trades off alternative mappings between emerging devices and transistors at runtime by allowing dynamic remapping to adaptively leverage the intrinsic computing features of each device technology. The HSC-FPGA offers a platform for fine-grained Logic-In-Memory architectures and runtime adaptive hardware. An orthogonal dimension of fabric heterogeneity is non-determinism, enabled by either low-voltage CMOS or probabilistic emerging devices. It can be realized using probabilistic devices within a reconfigurable network to blend deterministic and probabilistic computational models. Herein, the probabilistic spin logic p-bit device is considered as a fabric element comprising a crossbar-structured weighted array. The programmability of the resistive network interconnecting p-bit devices can be achieved by modifying the resistive states of the array's weighted connections. Thus, the programmable weighted array forms a CLB-scale macro co-processing element with bitstream programmability. This allows field programmability for a wide range of classification problems and recognition tasks to allow fluid mappings of probabilistic and deterministic computing approaches. In particular, a Deep Belief Network (DBN) is implemented in the field using recurrent layers of co-processing elements to form an n × m1 × m2 × ... × mi weighted array as a configurable hardware circuit with an n-input layer followed by i-1 hidden layers. As neuromorphic architectures using post-CMOS devices increase in capability and network size, the utility and benefits of reconfigurable fabrics of neuromorphic modules can be anticipated to continue to accelerate.
- Date Issued
- 2019
- Identifier
- CFE0007502, ucf:52643
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007502
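The record above describes p-bit devices interconnected through a programmable weighted array. As a purely software analogue, the sketch below samples a tiny network of binary stochastic units using the commonly cited p-bit update rule m_i = sgn(tanh(I_i) - r), with r uniform in (-1, 1); the weights, biases, and network size are arbitrary illustrative values and do not model the SHE-MTJ or HSC-FPGA hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-node symmetric weight matrix and biases (arbitrary values).
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0,  0.8],
              [-0.5, 0.8, 0.0]])
h = np.array([0.1, -0.2, 0.0])

m = rng.choice([-1.0, 1.0], size=3)   # bipolar p-bit states
samples = []
for _ in range(2000):
    for i in range(3):                # asynchronous sequential updates
        I = W[i] @ m + h[i]           # net input from the weighted array
        r = rng.uniform(-1.0, 1.0)
        m[i] = 1.0 if np.tanh(I) > r else -1.0   # p-bit update rule
    samples.append(m.copy())

print("mean state per node:", np.mean(samples, axis=0))
```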
- Title
- Security of Autonomous Systems under Physical Attacks: With application to Self-Driving Cars.
- Creator
-
Dutta, Raj, Jin, Yier, Sundaram, Kalpathy, DeMara, Ronald, Zhang, Shaojie, Zhang, Teng, University of Central Florida
- Abstract / Description
-
The drive to achieve trustworthy autonomous cyber-physical systems (CPS), which can attain goals independently in the presence of significant uncertainties and for long periods of time without any human intervention, has always been enticing. Significant progress has been made in the avenues of both software and hardware for fulfilling these objectives. However, technological challenges still exist, particularly in terms of decision making under uncertainty. In an autonomous system, uncertainties can arise from the operating environment, adversarial attacks, and from within the system. As a result of these concerns, human beings lack trust in these systems and hesitate to use them in day-to-day life. In this dissertation, we develop algorithms to enhance trust by mitigating physical attacks targeting the integrity and security of the sensing units of autonomous CPS. The sensors of these systems are responsible for gathering data of the physical processes. Lack of measures for securing their information can enable malicious attackers to cause life-threatening situations. This serves as a motivation for developing attack-resilient solutions. Among various security solutions, attention has recently been paid toward developing system-level countermeasures for CPS whose sensor measurements are corrupted by an attacker. Our methods are along this direction, as we develop one active and multiple passive algorithms to detect the attack and minimize its effect on the internal state estimates of the system. In the active approach, we leverage a challenge authentication technique for detection of two types of attacks: Denial of Service (DoS) and delay injection on active sensors of the system. Furthermore, we develop a recursive least squares estimator for recovery of the system from attacks. The majority of the dissertation focuses on designing passive approaches for sensor attacks. In the first method, we focus on a linear stochastic system with multiple sensors, where measurements are fused in a central unit to estimate the state of the CPS. By leveraging the Bayesian interpretation of the Kalman filter and combining it with a Chi-squared detector, we recursively estimate states within an error bound and detect the DoS and False Data Injection attacks. We also analyze the asymptotic performance of the estimator and provide conditions for resilience of the state estimate. Next, we propose a novel distributed estimator based on l1-norm optimization, which can recursively estimate states within an error bound without restricting the number of agents of the distributed system that can be compromised. We also extend this estimator to a vehicle platoon scenario subjected to sparse attacks. Furthermore, we analyze the resiliency and asymptotic properties of both estimators. Finally, at the end of the dissertation, we make an initial effort to formally verify the control system of the autonomous CPS using the statistical model checking method. This is done to ensure that a real-time and resource-constrained system such as a self-driving car, with controllers and security solutions, adheres to strict timing constraints.
- Date Issued
- 2018
- Identifier
- CFE0007174, ucf:52253
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007174
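The record above combines a Kalman filter with a Chi-squared detector on the innovation to flag corrupted sensor measurements. The one-dimensional sketch below illustrates that general idea only; the system model, noise levels, detection threshold, and injected bias are illustrative assumptions and do not reproduce the dissertation's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar random-walk state with a noisy sensor; all values are illustrative.
q, r = 1e-4, 1e-2          # process / measurement noise variances
x_hat, p = 0.0, 1.0        # state estimate and its variance
threshold = 6.63           # ~99th percentile of chi-squared with 1 dof

x_true = 0.0
for k in range(200):
    x_true += rng.normal(0.0, np.sqrt(q))
    z = x_true + rng.normal(0.0, np.sqrt(r))
    if 100 <= k < 120:
        z += 0.8           # injected bias emulating a false-data attack

    p_pred = p + q                     # predict
    nu = z - x_hat                     # innovation
    s = p_pred + r                     # innovation variance
    if nu**2 / s > threshold:          # chi-squared (NIS) detector
        print(f"k={k}: possible sensor attack, skipping update")
        p = p_pred
        continue
    kgain = p_pred / s                 # Kalman update
    x_hat += kgain * nu
    p = (1 - kgain) * p_pred
```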
- Title
- Hadwiger Numbers and Gallai-Ramsey Numbers of Special Graphs.
- Creator
-
Bosse, Christian, Song, Zixia, Brennan, Joseph, Zhao, Yue, DeMara, Ronald, University of Central Florida
- Abstract / Description
-
This dissertation explores two separate topics on graphs. We first study a far-reaching generalization of the Four Color Theorem. Given a graph G, we use chi(G) to denote the chromatic number; alpha(G) the independence number; and h(G) the Hadwiger number, which is the largest integer t such that the complete graph K_t can be obtained from a subgraph of G by contracting edges. Hadwiger's conjecture from 1943 states that for every graph G, h(G) is greater than or equal to chi(G). This is perhaps the most famous conjecture in Graph Theory and remains open even for graphs G with alpha(G) less than or equal to 2. Let W_5 denote the wheel on six vertices. We establish more evidence for Hadwiger's conjecture by proving that h(G) is greater than or equal to chi(G) for all graphs G such that alpha(G) is less than or equal to 2 and G does not contain W_5 as an induced subgraph. Our second topic is related to Ramsey theory, a field that has intrigued those who study combinatorics for many decades. Computing the classical Ramsey numbers is a notoriously difficult problem, leaving many basic questions unanswered even after more than 80 years. We study Ramsey numbers under Gallai-colorings. A Gallai-coloring of a complete graph is an edge-coloring such that no triangle is colored with three distinct colors. Given a graph H and an integer k at least 1, the Gallai-Ramsey number, denoted GR_k(H), is the least positive integer n such that every Gallai-coloring of K_n with at most k colors contains a monochromatic copy of H. It turns out that GR_k(H) is more well-behaved than the classical Ramsey number R_k(H), though finding exact values of GR_k(H) is far from trivial. We show that for all k at least 3, GR_k(C_{2n+1}) = n·2^k + 1 where n is 4, 5, 6 or 7, and GR_k(C_{2n+1}) is at most (n ln n)·2^k - (k+1)n + 1 for all n at least 8, where C_{2n+1} denotes a cycle on 2n+1 vertices.
- Date Issued
- 2019
- Identifier
- CFE0007603, ucf:52532
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007603
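The defining property of a Gallai-coloring in the record above is that no triangle receives three distinct colors. The brute-force checker below verifies exactly that property for an edge-coloring of a complete graph given as a dictionary; it illustrates the definition only and has nothing to do with the extremal arguments in the dissertation.

```python
from itertools import combinations

def is_gallai_coloring(n, color):
    """color maps each unordered pair {i, j} (as a frozenset) to a color.
    Returns True iff no triangle of K_n uses three distinct colors."""
    for a, b, c in combinations(range(n), 3):
        cols = {color[frozenset((a, b))],
                color[frozenset((b, c))],
                color[frozenset((a, c))]}
        if len(cols) == 3:      # rainbow triangle => not a Gallai-coloring
            return False
    return True

# Tiny example on K_4 with two colors (any 2-coloring is trivially Gallai).
n = 4
color = {frozenset(e): (sum(e) % 2) for e in combinations(range(n), 2)}
print(is_gallai_coloring(n, color))   # True
```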
- Title
- Context-Centric Affect Recognition From Paralinguistic Features of Speech.
- Creator
-
Marpaung, Andreas, Gonzalez, Avelino, DeMara, Ronald, Sukthankar, Gita, Wu, Annie, Lisetti, Christine, University of Central Florida
- Abstract / Description
-
As the field of affect recognition has progressed, many researchers have shifted from unimodal approaches to multimodal ones. In particular, the trend in the paralinguistic speech affect recognition domain has been to integrate other modalities such as facial expression, body posture, gait, and linguistic speech. Our work focuses on integrating contextual knowledge into paralinguistic speech affect recognition. We hypothesize that a framework to recognize affect through paralinguistic features of speech can improve its performance by integrating relevant contextual knowledge. This dissertation describes our research to integrate contextual knowledge into the paralinguistic affect recognition process from acoustic features of speech. We conceived, built, and tested a two-phased system called the Context-Based Paralinguistic Affect Recognition System (CxBPARS). The first phase of this system is context-free and uses an AdaBoost classifier that applies data on the acoustic pitch, jitter, shimmer, Harmonics-to-Noise Ratio (HNR), and Noise-to-Harmonics Ratio (NHR) to make an initial judgment about the emotion most likely exhibited by the human elicitor. The second phase then adds context modeling to improve upon the context-free classifications from phase I. CxBPARS was inspired by a human subject study performed as part of this work, where test subjects were asked to classify an elicitor's emotion strictly from paralinguistic sounds and were then provided with contextual information to improve their selections. CxBPARS was rigorously tested and found to, in the worst case, improve the success rate from the state-of-the-art's 42% to 53%.
- Date Issued
- 2019
- Identifier
- CFE0007836, ucf:52831
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007836
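The context-free phase described in the record above feeds five acoustic measures (pitch, jitter, shimmer, HNR, NHR) to an AdaBoost classifier. The sketch below simply wires five such feature columns into scikit-learn's AdaBoostClassifier on synthetic data, assuming scikit-learn is available; the corpus, labels, and hyperparameters are placeholders, not CxBPARS itself.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-utterance paralinguistic features:
# columns = [pitch, jitter, shimmer, HNR, NHR]; labels = emotion classes 0..3.
X = rng.normal(size=(400, 5))
y = rng.integers(0, 4, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on synthetic data:", clf.score(X_te, y_te))  # ~chance level here
```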
- Title
- Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
- Creator
-
Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
- Abstract / Description
-
Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW utilizes genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW, as compared with traditional approaches, has been limited to relatively compact circuits. This is due to the increase in complexity of the genetic algorithm with increasing circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of relatively large benchmark circuits compared to related works. Additionally, Design Diversity (DD) techniques that aid such evolutionary refurbishment are proposed, and the efficacy of various DD techniques is quantified and evaluated. Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs, for ensuring operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations owing to decreased feature sizes for electronic devices. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. At design time, vulnerable portions of the circuit identified using conventional Electronic Design Automation tools are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions that optimize performance and energy consumption. Primarily, SREL is able to mitigate the timing degradations caused by transistor aging effects in sub-micron devices by using power-gating to reduce the stress induced on active elements. As a result, fewer guardbands need to be included to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime. The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, it comes at the cost of increased reliability concerns, such as an increase in performance variations and soft errors. This has given rise to increased resiliency requirements for HPC applications in terms of ensuring functionality within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems. A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault propagation models are derived for each HPC application that can be used to estimate the number of corrupted memory locations at runtime. Additionally, the tradeoffs between performance and vulnerability and the causal relations between compiler optimization and application vulnerability are investigated.
- Date Issued
- 2015
- Identifier
- CFE0006206, ucf:52889
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006206
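The final part of the record above describes injecting transient faults and tracking how application state corrupts. A minimal software-level analogue is shown below: one bit of a floating-point value is flipped and the corruption of a downstream result is counted. The choice of kernel, array size, and bit position is arbitrary; the dissertation's compiler-level instrumentation is not reproduced.

```python
import struct

import numpy as np

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of an IEEE-754 double (emulating a radiation-induced upset)."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", value))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return flipped

rng = np.random.default_rng(2)
data = rng.normal(size=1000)

clean = np.cumsum(data)                              # stand-in "application" kernel
faulty_data = data.copy()
faulty_data[123] = flip_bit(faulty_data[123], 52)    # hit an exponent bit
faulty = np.cumsum(faulty_data)

corrupted = np.sum(~np.isclose(clean, faulty))       # corruption spreads downstream
print(f"{corrupted} of {clean.size} output elements corrupted")
```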
- Title
- The Performance and Power Impact of Using Multiple DRAM Address Mapping Schemes in Multicore Processors.
- Creator
-
Jadaa, Rami, Heinrich, Mark, DeMara, Ronald, Yuan, Jiann-Shiun, University of Central Florida
- Abstract / Description
-
Lowest-level cache misses are satisfied by the main memory through a specific address mapping scheme that is hard-coded in the memory controller. A dynamic address mapping scheme is investigated to provide higher performance and lower power consumption, along with a method to throttle memory to meet a specific power budget. Several experiments are conducted on single- and multithreaded synthetic memory traces (to study extreme cases) to validate the usability of the proposed dynamic mapping scheme over the fixed one. Results show that applications' performance varies according to the mapping scheme used, and that a dynamic mapping scheme achieves up to a 2x increase in peak bandwidth utilization and around 30% higher energy efficiency than a system using only a single fixed scheme. Moreover, the technique can be used to limit memory accesses to a subset of the memory devices by controlling data allocation at a finer granularity, providing a method to throttle main memory by allowing un-accessed devices to be put into power-down mode, hence saving power to meet a certain power budget.
- Date Issued
- 2011
- Identifier
- CFE0004121, ucf:49118
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004121
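The record above contrasts a fixed, hard-coded address mapping with a dynamic one. The sketch below decodes a physical address into (channel, rank, bank, row, column) coordinates under two example bit layouts and shows how a controller could switch between them; the field widths and orderings are invented for illustration and do not correspond to any particular memory controller.

```python
# Two illustrative address mapping schemes for a toy DRAM with
# 2 channels, 2 ranks, 8 banks, 2**14 rows, 2**10 columns (field widths assumed).
WIDTHS = {"channel": 1, "rank": 1, "bank": 3, "row": 14, "column": 10}

# Scheme A: row:rank:bank:channel:column (low-order field listed last)
# Scheme B: row:bank:rank:column:channel (spreads consecutive lines across channels)
SCHEMES = {
    "A": ["row", "rank", "bank", "channel", "column"],
    "B": ["row", "bank", "rank", "column", "channel"],
}

def decode(addr: int, scheme: str) -> dict:
    """Slice a physical address into DRAM coordinates, least-significant field first."""
    out = {}
    for field in reversed(SCHEMES[scheme]):
        width = WIDTHS[field]
        out[field] = addr & ((1 << width) - 1)
        addr >>= width
    return out

# A dynamic controller could pick the scheme per application phase:
for addr in (0x0000, 0x0400, 0x0800):
    print(hex(addr), "A:", decode(addr, "A"), "B:", decode(addr, "B"))
```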
- Title
- Normally-Off Computing Design Methodology Using Spintronics: from Devices to Architectures.
- Creator
-
Roohi, Arman, DeMara, Ronald, Abdolvand, Reza, Wang, Jun, Fan, Deliang, Del Barco, Enrique, University of Central Florida
- Abstract / Description
-
Energy-harvesting-powered computing offers intriguing and vast opportunities to dramatically transform the landscape of Internet of Things (IoT) devices and wireless sensor networks by utilizing ambient sources of light, thermal, kinetic, and electromagnetic energy to achieve battery-free computing. In order to operate within the restricted energy capacity and intermittency profile of battery-free operation, it is proposed to innovate Elastic Intermittent Computation (EIC) as a new duty-cycle-variable computing approach leveraging the non-volatility inherent in post-CMOS switching devices. The foundations of EIC will be advanced from the ground up by extending Spin Hall Effect Magnetic Tunnel Junction (SHE-MTJ) device models to realize SHE-MTJ-based Majority Gate (MG) and Polymorphic Gate (PG) logic approaches and libraries that leverage intrinsic non-volatility to realize middleware-coherent, intermittent computation without the checkpointing, micro-tasking, or software bloat and energy overheads, which is vital to IoT. Device-level EIC research concentrates on encapsulating SHE-MTJ behavior with a compact model to leverage the non-volatility of the device for intrinsic provision of intermittent computation and lifetime energy reduction. Based on this model, the circuit-level EIC contributions will entail the design, simulation, and analysis of PG-based spintronic logic which is adaptable at the gate level to support variable duty-cycle execution that is robust to brief and extended supply outages or unscheduled dropouts, and the development of spin-based research synthesis and optimization routines compatible with existing commercial toolchains. These tools will be employed to design a hybrid post-CMOS processing unit utilizing pipelining and power-gating through state-holding properties within the datapath itself, thus eliminating checkpointing and data transfer operations.
- Date Issued
- 2019
- Identifier
- CFE0007526, ucf:52619
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007526
- Title
- Coloring graphs with forbidden minors.
- Creator
-
Rolek, Martin, Song, Zixia, Brennan, Joseph, Reid, Michael, Zhao, Yue, DeMara, Ronald, University of Central Florida
- Abstract / Description
-
A graph H is a minor of a graph G if H can be obtained from a subgraph of G by contracting edges. My research is motivated by the famous Hadwiger's Conjecture from 1943, which states that every graph with no K_t-minor is (t - 1)-colorable. This conjecture has been proved true for t ≤ 6, but remains open for all t ≥ 7. For t = 7, it is not even yet known if a graph with no K_7-minor is 7-colorable. We begin by showing that every graph with no K_t-minor is (2t - 6)-colorable for t = 7, 8, 9, in the process giving a shorter and computer-free proof of the known results for t = 7, 8. We also show that this result extends to larger values of t if Mader's bound for the extremal function for K_t-minors is true. Additionally, we show that any graph with no K_8^--minor is 9-colorable, and any graph with no K_8^=-minor is 8-colorable, where K_8^- and K_8^= denote K_8 with one and two edges removed, respectively. The Kempe-chain method developed for our proofs of the above results may be of independent interest. We also use Mader's H-Wege theorem to establish some sufficient conditions for a graph to contain a K_8-minor. Another motivation for my research is a well-known conjecture of Erdős and Lovász from 1968, the Double-Critical Graph Conjecture. A connected graph G is double-critical if for all edges xy ∈ E(G), χ(G - x - y) = χ(G) - 2. Erdős and Lovász conjectured that the only double-critical t-chromatic graph is the complete graph K_t. This conjecture has been shown to be true for t ≤ 5 and remains open for t ≥ 6. It has further been shown that any non-complete, double-critical, t-chromatic graph contains K_t as a minor for t ≤ 8. We give a shorter proof of this result for t = 7, a computer-free proof for t = 8, and extend the result to show that G contains a K_9-minor for all t ≥ 9. Finally, we show that the Double-Critical Graph Conjecture is true for double-critical graphs with chromatic number t ≤ 8 if such graphs are claw-free.
- Date Issued
- 2017
- Identifier
- CFE0006649, ucf:51227
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006649
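The record above mentions a Kempe-chain method. The standard Kempe-chain operation on a properly colored graph is sketched below: within the subgraph induced by two colors, swap the colors along the connected component containing a chosen vertex. This is only the textbook operation on a toy example, not the dissertation's proofs.

```python
from collections import deque

def kempe_swap(adj, coloring, v, a, b):
    """Swap colors a and b on the Kempe chain (a/b component) containing v.
    adj: dict vertex -> set of neighbors; coloring: dict vertex -> color.
    A proper coloring stays proper after the swap."""
    new = dict(coloring)
    if new[v] not in (a, b):
        return new
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        new[u] = b if new[u] == a else a          # flip within the chain
        for w in adj[u]:
            if w not in seen and coloring[w] in (a, b):
                seen.add(w)
                queue.append(w)
    return new

# 5-cycle example: vertices 0..4 with a proper 3-coloring; swap colors 0 and 1 at vertex 0.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
print(kempe_swap(adj, coloring, 0, 0, 1))
```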
- Title
- Reactive Rejuvenation of CMOS Logic Paths using Self-activating Voltage Domains.
- Creator
-
Khoshavi Najafabadi, Navid, DeMara, Ronald, Yuan, Jiann-Shiun, Song, Zixia, University of Central Florida
- Abstract / Description
-
Aggressive CMOS technology scaling trends exacerbate the aging-related degradation of propagation delay and energy efficiency in nanoscale designs. Recently, power-gating has been utilized as an effective low-power design technique which has also been shown to alleviate some aging impacts. However, the use of MOSFETs to realize power-gated designs will also encounter aging-induced degradations in the sleep transistors themselves, which necessitates the exploration of design strategies to utilize power-gating effectively to mitigate aging. In particular, Bias Temperature Instability (BTI), which occurs during activation of power-gated voltage islands, is investigated with respect to the placement of the sleep transistor in the header or footer as well as the impact of ungated input transitions on interfacial trapping. Results indicate the effectiveness of power-gating on NBTI/PBTI phenomena and suggest a preferred sleep transistor configuration for maximizing recovery. Furthermore, the aging effect can manifest itself as timing errors on the critical speed-paths of the circuit if a large design guardband is not reserved. To mitigate BTI-induced aging in the circuit, the Reactive Rejuvenation (RR) architectural approach is proposed, which entails detection and recovery phases. The BTI impact on the performance of critical and near-critical paths is continuously examined through a lightweight logic circuit which asserts an error signal in the case of any timing violation in those paths. By observing the occurrence of timing violations in the system, the timing-sensitive portion of the circuit is recovered from BTI by switching computations to a redundant aging-critical voltage domain. The proposed technique achieves aging mitigation and reduced energy consumption as compared to a baseline circuit. Thus, significant voltage guardbands to meet the desired timing specification are avoided, resulting in energy savings during circuit operation.
- Date Issued
- 2016
- Identifier
- CFE0006339, ucf:51561
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006339
- Title
- Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics.
- Creator
-
Alawad, Mohammed, Lin, Mingjie, DeMara, Ronald, Mikhael, Wasfy, Wang, Jun, Das, Tuhin, University of Central Florida
- Abstract / Description
-
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency if the traditional "deterministic and precise" computing model still dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from an uncertain, imperfect real-world environment. As such, the traditional computing means of first-principle modeling or explicit statistical modeling will very likely be ineffective to achieve flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing - that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra - has to be reexamined, and transformative changes in the foundation of modern computing must be made. This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simplified circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded with a large ensemble of random samples. As such, local perturbations of its computing accuracy will be dissipated globally, thus becoming inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable precision systems, which provides an elegant way to trade off between computing accuracy and computing performance/hardware efficiency for many real-world applications. To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented on an FPGA based on a novel stochastic-based and scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation, but can also solve several challenges in conventional CNNs, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well-suited for a modular vision engine with the goal of performing real-time detection, recognition, and segmentation of mega-pixel images, especially those perception-based computing tasks that are inherently fault-tolerant.
- Date Issued
- 2016
- Identifier
- CFE0006828, ucf:51768
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006828
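To give a concrete feel for probability-encoded computation of the kind described in the record above, the sketch below uses classical stochastic computing: two values in [0, 1] are encoded as random bitstreams, multiplied by a bitwise AND, and decoded by averaging. This is the generic stochastic-computing primitive only, not the dissertation's Probabilistic Domain Transform pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000                       # bitstream length: accuracy scales ~ 1/sqrt(N)

def encode(p: float, n: int) -> np.ndarray:
    """Unipolar encoding: each bit is 1 with probability p."""
    return (rng.random(n) < p).astype(np.uint8)

a, b = 0.6, 0.25
sa, sb = encode(a, N), encode(b, N)

product_stream = sa & sb          # a single AND gate multiplies unipolar streams
estimate = product_stream.mean()  # decode by estimating the bit probability

print(f"exact {a*b:.4f}  stochastic estimate {estimate:.4f}")
```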
- Title
- Energy-Aware Data Movement In Non-Volatile Memory Hierarchies.
- Creator
-
Khoshavi Najafabadi, Navid, DeMara, Ronald, Yuan, Jiann-Shiun, Song, Zixia, University of Central Florida
- Abstract / Description
-
While technology scaling enables increased density for memory cells, the intrinsic high leakage power of conventional CMOS technology and the demand for reduced energy consumption inspire the use of emerging technology alternatives such as eDRAM and Non-Volatile Memory (NVM), including STT-MRAM, PCM, and RRAM. The utilization of emerging technology in Last Level Cache (LLC) designs, which occupy a significant fraction of total die area in Chip Multi Processors (CMPs), introduces new dimensions of vulnerability, energy consumption, and performance delivery. To be specific, a part of this research focuses on the eDRAM Bit Upset Vulnerability Factor (BUVF) to assess the vulnerable portion of the eDRAM refresh cycle, where the critical charge varies depending on the write voltage, storage, and bit-line capacitance. This dissertation broadens the study on vulnerability assessment of the LLC by investigating the impact of Process Variations (PV) on narrow resistive sensing margins in high-density NVM arrays, including on-chip cache and primary memory. Large-latency and power-hungry Sense Amplifiers (SAs) have been adapted to combat PV in the past. Herein, a novel approach is proposed to leverage the PV in NVM arrays using a Self-Organized Sub-bank (SOS) design. SOS engages the preferred SA alternative based on the intrinsic as-built behavior of the resistive sensing timing margin to reduce the latency and power consumption while maintaining acceptable access time. On the other hand, this dissertation investigates a novel technique to prioritize the service to 1) Extensive Read Reused Accessed (ERRA) blocks of the LLC that are silently dropped from higher levels of cache, and 2) the portion of the working set that may exhibit a distant re-reference interval in L2. In particular, we develop a lightweight Multi-level Access History Profiler to efficiently identify ERRA blocks by aggregating the LLC block addresses tagged with identical Most Significant Bits into a single entry. Experimental results indicate that the proposed technique can reduce the L2 read miss ratio by 51.7% on average across PARSEC and SPEC2006 workloads. In addition, this dissertation will broaden and apply advancements in theories of subspace recovery to pioneer computationally-aware in-situ operand reconstruction via the novel Logic In Interconnect (LI2) scheme. LI2 will be developed, validated, and refined both theoretically and experimentally to realize a radically different approach to post-Moore's Law computing by leveraging low-rank matrix features offering data reconstruction instead of fetching data from main memory, to reduce the energy/latency cost per data movement. We propose the LI2 enhancement to attain high performance delivery in the post-Moore's Law era by equipping the contemporary micro-architecture design with a customized memory controller which orchestrates the memory requests for fetching low-rank matrices to a customized Fine Grain Reconfigurable Accelerator (FGRA) for reconstruction, while the other memory requests are serviced as before. The goal of LI2 is to conquer the high latency/energy required to traverse main memory arrays in the case of an LLC miss by using in-situ construction of the requested data dealing with low-rank matrices. Thus, LI2 exchanges a high volume of data transfers for a novel lightweight reconstruction method under specific conditions using a cross-layer hardware/algorithm approach.
- Date Issued
- 2017
- Identifier
- CFE0006754, ucf:51859
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006754
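The Multi-level Access History Profiler described in the record above aggregates LLC block addresses that share the same most significant bits into a single entry. A toy software analogue of that aggregation is sketched below; the address width, tag shift, and reuse threshold are assumptions made purely for illustration.

```python
from collections import defaultdict

TAG_SHIFT = 12          # assumed: group block addresses by the bits above bit 11
REUSE_THRESHOLD = 3     # assumed: regions re-read this often are flagged

profiler = defaultdict(int)

def record_llc_read(block_addr: int) -> None:
    """Aggregate accesses whose addresses share the same most significant bits."""
    profiler[block_addr >> TAG_SHIFT] += 1

def hot_regions():
    """Regions whose aggregated read count suggests extensive read reuse."""
    return {tag: n for tag, n in profiler.items() if n >= REUSE_THRESHOLD}

# Simulated read stream: many reads land in the region tagged 0x1.
for addr in [0x1000, 0x1040, 0x1080, 0x10C0, 0x2000, 0x3000]:
    record_llc_read(addr)

print(hot_regions())   # {1: 4}
```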
- Title
- Energy-Aware Reconfigurable Logic Device Using Spin-based Storage and Carbon Nanotube Switching.
- Creator
-
Gopi Krishna, Mohan Krishna, DeMara, Ronald, Yuan, Jiann-Shiun, Del Barco, Enrique, University of Central Florida
- Abstract / Description
-
Scaling of semiconductors to the 14-nanometer range and below introduces serious design challenges, including high static power in memories and high leakage power, hindering further integration of CMOS devices. Thus, emerging devices are under intense analysis to overcome these drawbacks caused by transistor size scaling. Spintronics technology provides excellent features such as non-volatility, low read power, low read delay, and higher scalability, as well as easy integration with CMOS in comparison with SRAM memories. In addition, Carbon-Nanotube Field-Effect Transistors (CNFETs) provide superior electrical conductivity, low delay, and low power consumption in comparison with conventional CMOS technology. Thus, this thesis discusses a unique approach that amalgamates spintronic memory technology with CNFET logic drive in a reconfigurable computing architecture to realize high circuit performance. A Carbon Magnetic Look-Up Table (CM-LUT) is proposed, using a Magnetic Tunnel Junction (MTJ) spintronic device as the memory element and CNFETs to perform the logical operations that read the data stored in the aforementioned devices. The proposed circuit is radiation resilient, operates at ultra-low power and high speed, and withstands high temperature gradients, making it ideal for low-power, high-performance, battery-operated mobile applications. In addition, a hybrid drive for the LUT leverages the fabrication feasibility of CMOS and the performance of CNFETs to realize a fabrication-cost-effective design. The proposed 4-input, 1-output CM-LUT utilizes 41 CNFETs and 16 MTJs for the read operation and 35 CNFETs to perform the write operation. The results for the CM-LUT show a 38-times energy reduction and 5.8-times faster circuit operation in comparison with a CMOS-based spin-LUT.
- Date Issued
- 2016
- Identifier
- CFE0006109, ucf:51204
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006109
- Title
- Time and Space Efficient Techniques for Facial Recognition.
- Creator
-
Alrasheed, Waleed, Mikhael, Wasfy, DeMara, Ronald, Haralambous, Michael, Wei, Lei, Myers, Brent, University of Central Florida
- Abstract / Description
-
In recent years, there has been an increasing interest in face recognition. As a result, many new facial recognition techniques have been introduced. Recent developments in the field of face recognition have led to an increase in the number of available face recognition commercial products. However, face recognition techniques are currently constrained by three main factors: recognition accuracy, computational complexity, and storage requirements. The problem is that most of the current face recognition techniques succeed in improving one or two of these factors at the expense of the others. In this dissertation, four novel face recognition techniques that improve the storage and computational requirements of face recognition systems are presented and analyzed. Three of the four novel techniques, namely Quantized/truncated Transform Domain (QTD), Frequency Domain Thresholding and Quantization (FD-TQ), and Normalized Transform Domain (NTD), utilize the Two-dimensional Discrete Cosine Transform (DCT-II), which reduces the dimensionality of facial feature images, thereby reducing the computational complexity. The fourth novel technique, Normalized Histogram Intensity (NHI), is based on the pixel intensity histograms of pose subimages, which reduces both the computational complexity and the storage requirements. Various simulation experiments using MATLAB were conducted to test the proposed methods. For the purpose of benchmarking the performance of the proposed methods, the simulation experiments were performed using current state-of-the-art face recognition techniques, namely Two-Dimensional Principal Component Analysis (2DPCA), Two-Directional Two-Dimensional Principal Component Analysis ((2D)^2PCA), and Transform Domain Two-Dimensional Principal Component Analysis (TD2DPCA). The experiments were applied to the ORL, Yale, and FERET databases. The experimental results for the proposed techniques confirm that the use of any of the four novel techniques examined in this study results in a significant reduction in computational complexity and storage requirements compared to the state-of-the-art techniques without sacrificing the recognition accuracy.
- Date Issued
- 2013
- Identifier
- CFE0005297, ucf:50566
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005297
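Three of the techniques in the record above reduce dimensionality with the 2-D DCT before matching. The sketch below keeps only a small top-left block of DCT-II coefficients per image and classifies with a nearest-neighbor rule on synthetic data, assuming SciPy is available; the block size, metric, and data are illustrative, and none of the QTD/FD-TQ/NTD specifics are reproduced.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(4)
K = 8   # keep the K x K lowest-frequency DCT-II coefficients (assumed)

def features(img: np.ndarray) -> np.ndarray:
    """Truncated 2-D DCT feature vector for one grayscale image."""
    return dctn(img, norm="ortho")[:K, :K].ravel()

# Synthetic "gallery": two subjects, each stored as a noisy variant of a base image.
bases = [rng.normal(size=(64, 64)) for _ in range(2)]
gallery = [(features(b + 0.1 * rng.normal(size=b.shape)), label)
           for label, b in enumerate(bases)]

# Probe image from subject 1, matched by nearest neighbor in the reduced space.
probe = features(bases[1] + 0.1 * rng.normal(size=(64, 64)))
pred = min(gallery, key=lambda fv_lbl: np.linalg.norm(fv_lbl[0] - probe))[1]
print("predicted subject:", pred)   # expected: 1
```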
- Title
- Structural Identification through Monitoring, Modeling and Predictive Analysis under Uncertainty.
- Creator
-
Gokce, Hasan, Catbas, Fikret, Chopra, Manoj, Mackie, Kevin, Yun, Hae-Bum, DeMara, Ronald, University of Central Florida
- Abstract / Description
-
Bridges are critical components of highway networks, which provide mobility and economic vitality to a nation. Ensuring the safety and regular operation as well as accurate structural assessment of bridges is essential. Structural Identification (St-Id) can be utilized for better assessment of structures by integrating experimental and analytical technologies in support of decision-making. St-Id is defined as creating parametric or nonparametric models to characterize structural behavior based on structural health monitoring (SHM) data. In a recent study by the ASCE St-Id Committee, the St-Id framework is given in six steps, including modeling, experimentation, and ultimately decision making for estimating the performance and vulnerability of structural systems reliably through improved simulations using monitoring data. In some St-Id applications, there can be challenges and considerations related to this six-step framework. For instance, not all of the steps can be employed; thus a subset of the six steps can be adapted for some cases based on various limitations. In addition, each step has its own characteristics, challenges, and uncertainties due to considerations such as the time-varying nature of civil structures, modeling, and measurements. It is often discussed that even a calibrated model has limitations in fully representing an existing structure; therefore, a family of models may be well suited to represent the structure's response and performance in a probabilistic manner. The principal objective of this dissertation is to investigate nonparametric and parametric St-Id approaches by considering uncertainties coming from different sources to better assess the structural condition for decision making. In the first part of the dissertation, a nonparametric St-Id approach is employed without the use of an analytical model. The new methodology, which is successfully demonstrated on both laboratory and real-life structures, can identify and locate damage by tracking correlation coefficients between strain time histories, locating the damage from the generated correlation matrices of the different strain time histories. This methodology is found to be load independent, computationally efficient, easy to use (especially for handling large amounts of monitoring data), and capable of identifying the effectiveness of maintenance. In the second part, a parametric St-Id approach is introduced by developing a family of models using Monte Carlo simulations and finite element analyses to explore the uncertainty effects on performance predictions in terms of load rating and structural reliability. The family of models is developed from a parent model, which is calibrated using monitoring data. In this dissertation, the calibration is carried out using artificial neural networks (ANNs), and the approach and results are demonstrated on a laboratory structure and a real-life movable bridge, where predictive analyses are carried out for performance decrease due to deterioration, damage, and traffic increase over time. In addition, a long-span bridge is investigated using the same approach when the bridge is retrofitted. The family of models for these structures is employed to determine the component and system reliability, as well as the load rating, with a distribution that incorporates the various uncertainties that were defined and characterized. It is observed that the uncertainties play a considerable role, even when compared to calibrated model-based predictions, for reliability and load rating, especially when the structure is complex, deteriorated, aged, and subjected to variable environmental and operational conditions. It is recommended that a family-of-models approach is suitable for structures that have less redundancy and high operational importance, are deteriorated, and are performing under close capacity and demand levels.
- Date Issued
- 2012
- Identifier
- CFE0004232, ucf:48997
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004232
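The nonparametric St-Id approach in the record above tracks correlation coefficients between strain time histories and flags damage when they shift. A minimal numpy sketch of that bookkeeping is below, with synthetic strain signals and an arbitrary "damage" perturbation; it mirrors only the idea of comparing baseline and current correlation matrices, not the dissertation's methodology.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 2000)

def strain_histories(damage=0.0):
    """Synthetic strain time histories from 4 'sensors' driven by one load."""
    load = np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.normal(size=t.size)
    gains = np.array([1.0, 0.8, 0.6, 0.4])
    signals = np.outer(gains, load) + 0.02 * rng.normal(size=(4, t.size))
    signals[2] *= (1.0 - damage)                   # "damage" alters sensor 3's path
    signals[2] += damage * rng.normal(size=t.size)
    return signals

baseline = np.corrcoef(strain_histories(0.0))      # healthy correlation matrix
current = np.corrcoef(strain_histories(0.6))       # correlation matrix after damage

drop = baseline - current
i, j = np.unravel_index(np.argmax(np.abs(drop)), drop.shape)
print(f"largest correlation change {drop[i, j]:+.2f} between sensors {i+1} and {j+1}")
```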
- Title
- Study of Novel Power Semiconductor Devices for Performance and Reliability.
- Creator
-
Padmanabhan, Karthik, Yuan, Jiann-Shiun, Sundaram, Kalpathy, Atia, George, DeMara, Ronald, Chow, Lee, University of Central Florida
- Abstract / Description
-
Power semiconductor devices are crucial components in present-day power electronic systems. The performance and efficiency of the devices have a direct correlation with the power system efficiency. This dissertation will examine some of the components that are commonly used in a power system, with emphasis on their performance characteristics and reliability. In recent times, there has been a proliferation of charge-balance devices among high-voltage discrete power devices. We examine the same charge-balance concept in a fast recovery diode and a MOSFET. This is crucial for extending system performance at compact dimensions. At smaller device and system sizes, the performance trade-off between the ON and OFF states becomes all the more critical. The focus on reducing the switching losses while maintaining system reliability increases. In a conventional planar technology, the technology places a limit on the switching performance owing to the larger die sizes. Using a charge-balance structure helps achieve the improved trade-off, while working towards ultimately improving system reliability, size, and cost. Chapter 1 introduces the basic power system based on an inductive switching circuit, and the various components that determine its efficiency. Chapter 2 presents a novel Trench Fast Recovery Diode (FRD) structure with injection control. The proposed structure achieves an improved carrier profile without the need for excess lifetime control. This substantially improves the device performance, especially at extreme temperatures (-40°C to 175°C). The device maintains low leakage at high temperatures, and its Qrr and Irm do not degrade as is the usual case in heavily electron-irradiated devices. A 1600 V diode using this structure has been developed, with a low forward turn-on voltage and good reverse recovery properties. The experimental results show that the structure maintains its performance at high temperatures. In Chapter 3, we develop a termination scheme for the previously mentioned diode. A major limitation on the performance of high-voltage power semiconductors is the edge termination of the device. It is critical to maintain the breakdown voltage of the device without compromising its reliability, by controlling the surface electric field. A good termination structure is critical to the reliability of the power semiconductor device. The proposed termination uses a novel trench MOS with a buried guard ring structure to completely eliminate high surface electric fields in the silicon region of the termination. The termination scheme was applied to a 1350 V fast recovery diode and showed excellent results. It achieved 98% of the parallel-plane breakdown voltage, with low leakage and no shifts after High Temperature Reverse Bias testing due to mobile-ion contamination from the packaging mold compound. In Chapter 4, we also investigate the device physics behind a superjunction MOSFET structure for improved robustness. The biggest issue with a completely charge-balanced MOSFET is decreased robustness in an Unclamped Inductive Switching (UIS) circuit. The equally charged P and N pillars result in a flat electric field profile, with the peak carrier density closer to the P-N junction at the surface. This results in an almost negligible positive dynamic Rds-on effect in the MOSFET. By changing the charge profile of the P-column, either by increasing it completely or by implementing a graded profile with the heavier P on top, we can change the field profile and shift the carrier density deeper into the silicon, increasing the positive dynamic Rds-on effect. Simulation and experimental results are presented to support the theory and understanding. Chapter 5 summarizes all the theories presented and the contributions made by them in the field. It also seeks to highlight future work to be done in these areas.
- Date Issued
- 2016
- Identifier
- CFE0006158, ucf:51148
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006158
- Title
- Design Disjunction for Resilient Reconfigurable Hardware.
- Creator
-
Alzahrani, Ahmad, DeMara, Ronald, Yuan, Jiann-Shiun, Lin, Mingjie, Wang, Jun, Turgut, Damla, University of Central Florida
- Abstract / Description
-
Contemporary reconfigurable hardware devices have the capability to achieve the high performance, power efficiency, and adaptability required to meet a wide range of design goals. With scaling challenges facing current complementary metal oxide semiconductor (CMOS) technology, new concepts and methodologies supporting efficient adaptation to handle reliability issues are becoming increasingly prominent. Reconfigurable hardware, with its ability to realize self-organization features, is expected to play a key role in designing future dependable hardware architectures. However, the exponential increase in density and complexity of current commercial SRAM-based field-programmable gate arrays (FPGAs) has escalated the overhead associated with dynamic runtime design adaptation. Traditionally, static modular redundancy techniques are considered to surmount this limitation; however, they can incur substantial overheads in both area and power requirements. To achieve a better trade-off among performance, area, power, and reliability, this research proposes design-time approaches that enable fine selection of the redundancy level based on target reliability goals and autonomous adaptation to runtime demands. To achieve this goal, three studies were conducted. First, a graph- and set-theoretic approach, named Hypergraph-Cover Diversity (HCD), is introduced as a preemptive design technique to shift the dominant costs of resiliency to design time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resource pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform has demonstrated the potential of the proposed fault-tolerance method to achieve 37.5% area saving and up to 66% reduction in power consumption compared to the frequently-used TMR scheme while providing superior fault tolerance. Second, Design Disjunction based on non-adaptive group testing is developed to realize a low-overhead fault-tolerant system capable of handling self-testing and self-recovery using runtime partial reconfiguration. Reconfiguration is guided by resource grouping procedures which employ non-linear measurements given by the constructive property of f-disjunctness to extend runtime resilience to a large fault space and realize a favorable range of tradeoffs. Disjunct designs are created using the mosaic convergence algorithm developed such that at least one configuration in the library evades any occurrence of up to d resource faults, where d is lower-bounded by f. Experimental results for a set of MCNC and ISCAS benchmarks have demonstrated f-diagnosability at the individual slice level with an average isolation resolution of 96.4% (94.4%) for f=1 (f=2) while incurring an average critical path delay impact of only 1.49% and an area cost roughly comparable to conventional 2-MR approaches. Finally, the proposed Design Disjunction method is evaluated as a design-time method to improve timing yield in the presence of large random within-die (WID) process variations for applications with a moderately high production capacity.
- Date Issued
- 2015
- Identifier
- CFE0006250, ucf:51086
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006250
- Title
- Enhanced Hardware Security Using Charge-Based Emerging Device Technology.
- Creator
-
Bi, Yu, Yuan, Jiann-Shiun, Jin, Yier, DeMara, Ronald, Lin, Mingjie, Chow, Lee, University of Central Florida
- Abstract / Description
-
The emergence of hardware Trojans has largely reshaped the traditional view that the hardware layer can be blindly trusted. Hardware Trojans, which are often in the form of maliciously inserted circuitry, may impact the original design through data leakage or circuit malfunction. Hardware counterfeiting and IP piracy are two further serious issues, costing the US economy more than $200 billion annually. A large amount of research and experimentation has been carried out on the design of these security primitives based on the currently prevailing CMOS technology. However, the security provided by these primitives comes at the cost of large overheads, mostly in terms of area and power consumption. The development of emerging technologies provides hardware security researchers with opportunities to utilize some of the otherwise unusable properties of emerging devices in security applications. In this dissertation, we include security considerations in the overall performance measurements to fully compare the emerging devices with CMOS technology. The first approach is to leverage two emerging devices (Silicon NanoWire and Graphene SymFET) for hardware security applications. Experimental results indicate that emerging-device-based solutions can provide high-level circuit protection with relatively lower performance overhead compared to their conventional CMOS counterparts. The second topic is the construction of an energy-efficient, DPA-resilient block cipher with ultra-low-power Tunnel FETs. Current-mode logic is adopted as a circuit-level countermeasure against differential power analysis attacks, which mostly target cryptographic systems. The third investigation targets the potential security vulnerability posed by a foundry insider's attack. Split manufacturing is adopted for the protection of radio-frequency (RF) circuit designs.
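For context on the threat that current-mode logic is meant to blunt, the following Python sketch runs a toy correlation power analysis (a common form of DPA) against synthetic Hamming-weight leakage; the leakage model, noise level, and key byte are assumptions for illustration and are unrelated to the Tunnel FET cipher evaluated in the dissertation.

```python
# Minimal sketch of the attack class (correlation power analysis, a form of DPA)
# that data-independent current draw is intended to defeat. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
SECRET_KEY = 0x3C                                   # hypothetical key byte
N_TRACES = 2000

plaintexts = rng.integers(0, 256, N_TRACES)
hw = np.array([bin(v).count("1") for v in range(256)])   # Hamming-weight lookup table

# Synthetic power traces: Hamming-weight leakage of (plaintext XOR key) plus Gaussian noise.
traces = hw[plaintexts ^ SECRET_KEY] + rng.normal(0, 2.0, N_TRACES)

def cpa_best_guess(plaintexts, traces):
    """Return the key guess whose predicted leakage correlates best with the traces."""
    scores = [np.corrcoef(hw[plaintexts ^ guess], traces)[0, 1] for guess in range(256)]
    return int(np.argmax(scores))

print(hex(cpa_best_guess(plaintexts, traces)))      # expected to recover 0x3c with this many traces
```

A logic style whose current draw does not depend on the processed data removes the correlation this attack relies on, which is the motivation for the current-mode countermeasure described above.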
- Date Issued
- 2016
- Identifier
- CFE0006264, ucf:51041
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006264
- Title
- Autonomous Recovery of Reconfigurable Logic Devices using Priority Escalation of Slack.
- Creator
-
Imran, Syed Naveed, DeMara, Ronald, Mikhael, Wasfy, Lin, Mingjie, Yuan, Jiann-Shiun, Geiger, Christopher, University of Central Florida
- Abstract / Description
-
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime inputs. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
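The sketch below is a generic, hypothetical illustration of health-metric-guided fault isolation, showing why observing a single quality figure after each partial reconfiguration can localize a fault in a bounded (logarithmic) number of steps; it is not the FaDReS or PURE flow itself, and the region names, health threshold, and fault location are assumed for the demo.

```python
# Generic sketch of health-metric-guided fault isolation by repeated regrouping.
# NOT the FaDReS/PURE algorithm: it only illustrates that a single observed quality
# metric (e.g., a PSNR-like figure) can localize a faulty region without test vectors,
# using a number of reconfiguration steps logarithmic in the number of regions.

HEALTH_THRESHOLD = 30.0          # assumed PSNR-like acceptance threshold, in dB
FAULTY_REGION = "R5"             # hypothetical fault location used to drive the simulation

def health_after_demoting(regions_demoted):
    """Simulated health metric: idling the faulty region restores output quality."""
    return 32.0 if FAULTY_REGION in regions_demoted else 26.5

def isolate_fault(regions):
    """Binary search over candidate regions using only the observed health metric."""
    candidates = list(regions)
    reconfigs = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        reconfigs += 1                              # one partial-reconfiguration step
        if health_after_demoting(half) >= HEALTH_THRESHOLD:
            candidates = half                       # fault lies inside the demoted half
        else:
            candidates = candidates[len(half):]     # fault lies in the remaining half
    return candidates[0], reconfigs

region, steps = isolate_fault([f"R{i}" for i in range(8)])
print(region, steps)             # locates R5 after 3 steps for 8 candidate regions
```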
- Date Issued
- 2013
- Identifier
- CFE0005006, ucf:50005
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005006