Current Search: energy efficient
- Title
- Research on Improving Reliability, Energy Efficiency and Scalability in Distributed and Parallel File Systems.
- Creator
- Zhang, Junyao, Wang, Jun, Zhang, Shaojie, Lee, Jooheung, University of Central Florida
- Abstract / Description
- With the increasing popularity of cloud computing and "Big Data" applications, current data centers are often required to manage petabytes or exabytes of data. Storing this huge amount of data requires thousands or tens of thousands of storage nodes at a single site, which imposes three major challenges on storage system designers. (1) Reliability: node failure in these data centers is a normal occurrence rather than a rare situation, which makes data reliability a great concern. (2) Energy efficiency: a data center can consume up to 100 times more energy than a standard office building, and more than 10% of that consumption can be attributed to storage systems; reducing the energy consumption of the storage system is therefore key to reducing the overall consumption of the data center. (3) Scalability: with the continuously increasing size of data, maintaining the scalability of the storage system is essential; expansion should complete efficiently and without limitations on the total number of storage nodes or on performance.
This thesis proposes three ways to improve these three key features of current large-scale storage systems. Firstly, we define the problem of "reverse lookup": finding the list of objects (blocks) stored on a failed node. As the first step of failure recovery, this process directly determines the recovery/reconstruction time. Existing solutions perform reverse lookup by metadata traversal or by reversing the data-distribution function, which is either time-consuming or expensive, whereas a deterministic block placement can achieve fast and efficient reverse lookup. However, existing deterministic placement solutions are designed for centralized, small-scale storage architectures such as RAID; lacking scalability, they cannot be applied directly to large-scale storage systems. We propose Group-Shifted Declustering (G-SD), a deterministic data layout for multi-way replication. G-SD addresses the scalability issue of our previous Shifted Declustering layout and supports fast and efficient reverse lookup.
Secondly, we pose the question: how should an energy-efficient storage system balance performance, energy, and recovery in degradation mode? While extensive research has traded performance for energy efficiency in normal mode, the system enters degradation mode when a node fails and reconstruction is initiated. Reconstruction requires a number of disks to be spun up and consumes a substantial amount of I/O bandwidth, which compromises both energy efficiency and performance. We find that current energy-proportional solutions cannot answer this question accurately because they do not consider the I/O bandwidth contention between recovery and foreground work. This thesis presents PERP, a mathematical model that minimizes the energy consumption of a storage system with respect to performance and recovery; PERP provides the number of active nodes and the recovery bandwidth to assign at each time frame.
Thirdly, current distributed file systems such as the Google File System (GFS) and the Hadoop Distributed File System (HDFS) employ a pseudo-random method for replica distribution and a centralized lookup table (block map) to record all replica locations. This lookup table requires a large amount of memory and consumes considerable CPU/network resources on the metadata server. With the booming size of "Big Data", the metadata server becomes a scalability and performance bottleneck. While current approaches such as HDFS Federation attempt to "horizontally" extend scalability by allowing multiple metadata servers, we believe a more promising option is to "vertically" scale up each metadata server. We propose Deister, a novel block management scheme built on a deterministic declustering distribution method, Intersected Shifted Declustering (ISD), so that both replica distribution and location lookup are achieved without a centralized lookup table.
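The reverse-lookup idea lends itself to a compact illustration. The sketch below assumes a simplified shifted-declustering-style placement function invented for this example; it is not the exact G-SD layout. Because every replica location is computed arithmetically from the block id, the blocks hosted on a failed node can be recovered by re-evaluating the placement function instead of traversing a centralized block map.

```python
# Minimal sketch of deterministic placement and reverse lookup.
# The placement function is a simplified shifted-declustering-style layout
# for illustration only; the thesis's G-SD layout differs in detail and
# admits a closed-form inverse rather than the linear scan shown below.

def place(block_id: int, num_nodes: int, replicas: int = 3) -> list:
    """Map a block to `replicas` distinct nodes purely by arithmetic."""
    base = block_id % num_nodes
    # Vary the stride from row to row so replicas spread across nodes.
    stride = (block_id // num_nodes) % (num_nodes - 1) + 1
    return [(base + r * stride) % num_nodes for r in range(replicas)]

def reverse_lookup(failed_node: int, num_blocks: int, num_nodes: int) -> list:
    """Find all blocks hosted on a failed node with no metadata traversal:
    just re-run the placement function."""
    return [b for b in range(num_blocks)
            if failed_node in place(b, num_nodes)]

if __name__ == "__main__":
    # Every block's replica set is reproducible, so recovery can begin
    # immediately after a failure is detected.
    print(reverse_lookup(failed_node=4, num_blocks=20, num_nodes=7))
```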
- Date Issued
- 2015
- Identifier
- CFE0006238, ucf:51082
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006238
- Title
- Optimization of Ocean Thermal Energy Conversion Power Plants.
- Creator
- Rizea, Steven, Ilie, Marcel, Bai, Yuanli, Vasu Sumathi, Subith, University of Central Florida
- Abstract / Description
- A proprietary Ocean Thermal Energy Conversion (OTEC) modeling tool, the Makai OTEC Thermodynamic and Economic Model (MOTEM), is leveraged to evaluate the accuracy of finite-time thermodynamic OTEC optimization methods. MOTEM is a full OTEC system simulator capable of evaluating the effects of variation in heat exchanger operating temperatures and seawater flow rates. The evaluation is based on a comparison of the net power output of an OTEC plant with a fixed configuration. Selected optimization methods from the literature are shown to produce between 93% and 99% of the maximum possible power, depending on the choice of heat exchanger performance curves. OTEC optimization is found to depend on the performance characteristics of the evaporator and condenser used in the plant. Optimization algorithms in the literature do not take heat exchanger performance variation into account, which causes a discrepancy between their predictions and those calculated with MOTEM. A new characteristic metric of OTEC optimization is identified: the ratio of the evaporator and condenser overall heat transfer coefficients. This ratio is constant across all plant configurations in which the seawater flow rate is optimized for a given pair of evaporator and condenser operating temperatures. Its existence implies that the ideal heat exchanger operating temperatures could be computed from the ratio of the heat exchanger performance curves; additional research is recommended.
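The optimization being evaluated can be made concrete with a toy endoreversible (Curzon-Ahlborn-style) model, which is emphatically not MOTEM: heat enters the working fluid at rate UA_e(T_w - T_e), is rejected at UA_c(T_k - T_c), and the cycle converts with Carnot efficiency between the evaporating temperature T_e and the condensing temperature T_k. All parameter values below are invented, and pumping power is omitted.

```python
# Toy finite-time OTEC model (illustrative only; not MOTEM).
# Warm/cold seawater at T_w/T_c; the working fluid evaporates at T_e and
# condenses at T_k. An entropy balance fixes T_k once T_e is chosen.
from scipy.optimize import minimize_scalar

T_w, T_c = 298.0, 278.0     # assumed seawater temperatures (K)
UA_e, UA_c = 2.0e6, 2.0e6   # assumed heat-exchanger conductances (W/K)

def power(T_e: float) -> float:
    q_in = UA_e * (T_w - T_e)          # heat into the cycle (W)
    k = q_in / T_e                     # entropy inflow rate (W/K)
    if UA_c <= k:                      # condenser cannot reject this load
        return 0.0
    T_k = UA_c * T_c / (UA_c - k)      # from q_out = q_in * T_k / T_e
    if T_k >= T_e:
        return 0.0
    return q_in * (1.0 - T_k / T_e)    # endoreversible (Carnot) output

res = minimize_scalar(lambda t: -power(t),
                      bounds=(T_c + 0.1, T_w - 0.1), method="bounded")
print(f"optimal T_e = {res.x:.2f} K, output = {-res.fun / 1e3:.1f} kW")
```

Sweeping UA_e/UA_c in this toy model shifts the optimal temperature split, which is consistent with the abstract's observation that the heat-transfer-coefficient ratio characterizes the optimum.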
- Date Issued
- 2012
- Identifier
- CFE0004430, ucf:49343
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004430
- Title
- Developing new power management and High-Reliability Schemes in Data-Intensive Environment.
- Creator
- Wang, Ruijun, Wang, Jun, Jin, Yier, DeMara, Ronald, Zhang, Shaojie, Ni, Liqiang, University of Central Florida
- Abstract / Description
- With the increasing popularity of data-intensive applications as well as large-scale computing and storage systems, current data centers and supercomputers often deal with extremely large data sets. To store and process this huge amount of data reliably and energy-efficiently, three major challenges should be taken into consideration by system designers. Firstly, power conservation: multicore processors (CMPs) have become mainstream in the current processor market because of the tremendous improvement in transistor density and advances in semiconductor technology. However, the increasing number of transistors on a single die or chip reveals super-linear growth in power consumption [4]. Thus, how to balance system performance and power saving is a critical issue that needs to be solved effectively. Secondly, system reliability: reliability is a critical metric in the design and development of replication-based big-data storage systems such as the Hadoop File System (HDFS). In a system with thousands of machines and storage devices, even infrequent failures become likely. In the Google File System, the annual disk failure rate is 2.88%, which means one would expect to see 8,760 disk failures in a year. Unfortunately, given an increasing number of node failures, how often a cluster starts losing data when being scaled out is not well investigated. Thirdly, energy efficiency: the fast processing speeds of the current generation of supercomputers provide great convenience to scientists dealing with extremely large data sets. The next generation of "exascale" supercomputers could provide accurate simulation results for the automobile industry, the aerospace industry, and even nuclear fusion reactors for the very first time. However, the energy cost of supercomputing is extremely high, with a total electricity bill of 9 million dollars per year. Thus, conserving energy and increasing the energy efficiency of supercomputers have become critical in recent years.
This dissertation proposes new solutions to address the above three key challenges for current large-scale storage and computing systems. Firstly, we propose a novel power management scheme called MAR (model-free, adaptive, rule-based) for multiprocessor systems to minimize CPU power consumption subject to performance constraints. By introducing a new I/O wait status, MAR is able to accurately describe the relationship between core frequencies, performance, and power consumption. Moreover, we adopt a model-free control method to filter the I/O wait status out of the traditional CPU busy/idle model in order to achieve fast responsiveness to bursts and take full advantage of power saving. Our extensive experiments on a physical testbed demonstrate that, for SPEC benchmarks and data-intensive (TPC-C) benchmarks, an MAR prototype achieves 95.8-97.8% accuracy relative to the ideal power-saving strategy calculated offline. Compared with baseline solutions, MAR saves 12.3-16.1% more power while maintaining a comparable performance loss of about 0.78-1.08%. In addition, simulation results indicate that our design achieves 3.35-14.2% more power-saving efficiency and 4.2-10.7% less performance loss under various CMP configurations compared with baseline approaches such as LAST, Relax, PID, and MPC.
Secondly, we create a new reliability model that incorporates the probability of replica loss to investigate the system reliability of multi-way declustering data layouts and analyze their potential for parallel recovery. Our comprehensive simulation results in Matlab and SHARPE show that the shifted declustering data layout outperforms the random declustering layout in a multi-way replication scale-out architecture, in terms of data loss probability and system reliability, by up to 63% and 85% respectively. Our study of both 5-year and 10-year system reliability under various recovery bandwidth settings shows that the shifted declustering layout surpasses the two baseline approaches in both cases, consuming up to 79% and 87% less recovery bandwidth than the copyset layout, and 4.8% and 10.2% less recovery bandwidth than the random layout.
Thirdly, we develop a power-aware job scheduler that applies a rule-based control method and takes real-world power and speedup profiles into account to improve power efficiency while adhering to predetermined power constraints. Intensive simulation results show that our proposed method achieves the maximum utilization of computing resources compared to baseline scheduling algorithms while keeping the energy cost under the threshold. Moreover, by introducing a Power Performance Factor (PPF) based on the real-world power and speedup profiles, we are able to increase power efficiency by up to 75%.
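To illustrate the flavor of MAR's rule-based control, here is a minimal sketch with invented P-states and thresholds; the actual MAR controller is model-free and tuned from measured behavior, so treat this only as a reading aid for the idea of filtering I/O wait out of the busy signal.

```python
# Hedged sketch of a rule-based DVFS governor in the spirit of MAR.
# The frequency list, thresholds, and rules are hypothetical.

FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # assumed available P-states

def pick_frequency(busy: float, iowait: float, cur: float) -> float:
    """Choose the next core frequency from busy and I/O-wait fractions.

    Filtering iowait out of the busy signal keeps the governor from
    boosting the clock while the core is merely stalled on storage:
    raising frequency there burns power without improving performance.
    """
    compute = busy - iowait                  # time actually spent computing
    if compute > 0.85:                       # compute-bound: step up
        higher = [f for f in FREQS_GHZ if f > cur]
        return min(higher) if higher else cur
    if compute < 0.40 or iowait > 0.50:      # idle or I/O-bound: step down
        lower = [f for f in FREQS_GHZ if f < cur]
        return max(lower) if lower else cur
    return cur                               # within the comfort band

# Example: heavy I/O wait keeps the clock low despite high "busy" time.
print(pick_frequency(busy=0.9, iowait=0.6, cur=2.4))  # -> 2.0
```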
- Date Issued
- 2016
- Identifier
- CFE0006704, ucf:51907
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006704
- Title
- RESOURCE BANKING: AN ENERGY-EFFICIENT, RUN-TIME ADAPTIVE PROCESSOR DESIGN TECHNIQUE.
- Creator
- Staples, Jacob, Heinrich, Mark, University of Central Florida
- Abstract / Description
- From the earliest and simplest scalar computation engines to modern superscalar out-of-order processors, the evolution of computational machinery during the past century has largely been driven by a single goal: performance. In today's world of cheap, billion-plus-transistor processors and an exploding market in mobile computing, a design landscape has emerged where energy efficiency, arguably more than any other single metric, determines the viability of a processor for a given application. The historical emphasis on performance has left modern processors bloated and over-provisioned for everyday tasks, in the hope that some performance improvement will be observed during computationally intensive periods. This work explores an energy-efficient processor design technique that ensures even a highly over-provisioned out-of-order processor has only as many of its computational resources active as it requires for efficient computation at any given time. Specifically, it examines the feasibility of a dynamically banked register file and reorder buffer with variable banking policies that enable unused rename registers or reorder-buffer entries to be voltage-gated (turned off) during execution to save power. The impact of bank placement, turn-off and turn-on policies, and rail-stabilization latencies is explored for high-performance desktop and server designs as well as low-power mobile processors.
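A hedged sketch of the banking policy loop follows; the bank size, thresholds, and wake-up latency are invented numbers, and the thesis evaluates much richer placement and turn-on/turn-off policies than this.

```python
# Hedged sketch of run-time resource banking: rename-register banks are
# voltage-gated (turned off) when occupancy falls and woken ahead of
# demand. Bank size, thresholds, and wake latency are invented numbers.

BANK_SIZE = 16          # assumed registers per bank
WAKE_LATENCY = 3        # assumed cycles for rail stabilization

class BankedRegFile:
    def __init__(self, num_banks: int):
        self.on = [True] + [False] * (num_banks - 1)  # bank 0 always on
        self.waking = {}                              # bank -> cycles left

    def tick(self, regs_in_use: int) -> None:
        """Called once per cycle with current rename-register demand."""
        # Finish any in-progress wake-ups (rail stabilization).
        for b in list(self.waking):
            self.waking[b] -= 1
            if self.waking[b] == 0:
                self.on[b] = True
                del self.waking[b]
        capacity = sum(self.on) * BANK_SIZE
        # Wake another bank before demand reaches current capacity.
        if regs_in_use > capacity - BANK_SIZE // 4:
            for b, is_on in enumerate(self.on):
                if not is_on and b not in self.waking:
                    self.waking[b] = WAKE_LATENCY
                    break
        # Gate the highest bank off once a whole bank sits empty.
        elif regs_in_use < capacity - BANK_SIZE and sum(self.on) > 1:
            self.on[max(i for i, v in enumerate(self.on) if v)] = False

rf = BankedRegFile(num_banks=4)
for demand in [10, 14, 20, 20, 40, 12, 8]:
    rf.tick(demand)
    print(demand, rf.on)
```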
- Date Issued
- 2011
- Identifier
- CFE0003991, ucf:48675
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003991
- Title
- GETTING TO NET ZERO ENERGY BUILDINGS: A HOLISTIC TECHNO-ECOLOGICAL MODELING APPROACH.
- Creator
- Alirezaei, Mehdi, Tatari, Omer, Oloufa, Amr, Nam, Boo Hyun, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
- Buildings in the United States are responsible for more than 40% of primary energy use, 70% of electricity usage, and about 39% of CO2 emissions, more than any other sector, including transportation and industry. This energy consumption is expected to grow, mainly due to increasing construction of new buildings. Rising energy prices, together with energy dependence, limited resources, and climate change, have made the current situation even worse. An Energy-Efficient (EE) building can reduce heating and cooling loads significantly compared with a code-compliant building. Furthermore, integrating renewable energy sources into the building's energy portfolio can drive the building's grid reliance further down. Buildings that are able to passively save and actively produce energy are called Net Zero Energy Buildings (NZEB). Despite new energy-efficient technologies, reaching NZEB remains challenging due to the high first cost of super-efficient measures and renewable energy sources, as well as the integration of newly generated on-site electricity into the grid. Achieving NZEB without looking at the building's surrounding environment may result in sub-optimal solutions. Currently, 95% of American households own a car, and with the help of newly introduced Vehicle-to-Home (V2H) technologies, the building, vehicle, renewable energy sources, and ecological environment can work together as a techno-ecological system to fulfill the requirements of an NZEB ecosystem.
Because of the great flexibility of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs) in interacting with the power grid, they will play a significant role in the future of the power system. At large scale, an organized fleet of EVs can serve as reliable and flexible power storage for a set of building blocks; at smaller scale, individual EV owners can use their own vehicles as a source of power alongside other sources. V2H technologies can utilize idle EV battery capacity as electricity storage to mitigate fluctuations in renewable electric power supply, to provide electricity to the building during peak times, and to help supply electricity during emergencies and power outages. V2H is regarded as the key to successfully integrating renewables while maintaining the integrity of the grid: the stored power in the EV battery is depleted when demand is high, and the battery is recharged when demand is low, using electricity provided by the grid or renewables. Government incentives can play an important role in deploying this technology by offsetting its high first cost. According to the Energy Information Administration (EIA), U.S. residential utility customers consume an average of 29.95 kWh of electricity per household per day. With current technology, EV batteries can store up to 30 kWh of electricity. As a result, even for a code-compliant house, a family could use the EV battery as a source of energy for one normal day of operation. For an energy-efficient home, there could even be a surplus of energy that could be transferred to the grid.
In summary, achieving NZEB faces various obstacles, and removing these barriers requires a more holistic view of a greater system and environment in which a building interacts with on-site renewable energy sources, EVs, and its surrounding ecological environment. This dissertation aims to utilize V2H technology to reach NZEB by developing two new models in two phases: a macro-based Excel model (NZEB-VBA) and an agent-based model (NZEB-ABM). Using these two models, homeowners can calculate the savings from implementing the technologies described above, which provides motivation to move toward greener buildings. In the first phase, an optimization analysis selects the best design alternatives for an energy-efficient building under the relevant economic and environmental constraints. Next, solar photovoltaic sources supply the building's remaining energy demand to minimize grid reliance. Finally, V2H technology is coupled with the renewable energy source as a substitute for power from the grid. The whole algorithm runs in the Visual Basic environment. The second phase focuses on the dynamic interaction of the system's components. Although the general procedure is the same, the modeling takes place in a different environment: showing the status of different parts of the system at any specific time, changing parameter values and observing the results, and investigating each parameter's impact on overall system behavior are among the advantages of the agent-based model, and real-time data can greatly enhance its capabilities. The results indicate that, with energy-efficient design features and a properly developed algorithm for drawing electricity from the EV and solar energy, the electricity required from the power grid can be reduced by 59% compared to a standard energy-efficient building, and by as much as 90% compared to a typical code-compliant building, cutting the electricity cost by a factor of 1.55 relative to the conventional method of drawing grid electricity. These savings can compensate for the installation costs of solar panels and the other technologies necessary for a Net Zero Energy Building. In the last phase of the study, a regional analysis investigates the effect of different weather conditions, traffic situations, and driving behaviors on the system.
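The household arithmetic above (a roughly 30 kWh daily load against a roughly 30 kWh EV battery) suggests a simple dispatch rule. The sketch below is a hypothetical hourly energy balance, not either of the thesis's models (NZEB-VBA or NZEB-ABM): serve load from PV first, then from the EV battery, and only then from the grid, with surplus PV charging the battery.

```python
# Hypothetical hourly V2H dispatch (not the NZEB-VBA/NZEB-ABM models).
# Priority: PV -> EV battery -> grid. Surplus PV charges the battery.
BATTERY_KWH = 30.0       # usable EV capacity (the abstract cites ~30 kWh)

def dispatch(pv, load, soc):
    """One hour of operation; returns (grid_import, new_state_of_charge)."""
    net = load - pv
    if net <= 0:                              # PV surplus: charge battery
        soc = min(BATTERY_KWH, soc - net)     # -net is the surplus
        return 0.0, soc
    draw = min(net, soc)                      # battery covers what it can
    return net - draw, soc - draw             # remainder comes from the grid

# Toy day: flat 1.25 kWh/h load (~29.95 kWh/day, the EIA figure), midday PV.
pv_profile = [0] * 7 + [1, 2, 3, 4, 4, 4, 3, 2, 1] + [0] * 8
soc, grid_total = 20.0, 0.0                   # assumed morning charge
for hour, pv in enumerate(pv_profile):
    g, soc = dispatch(pv, 1.25, soc)
    grid_total += g
print(f"grid import: {grid_total:.1f} kWh, battery left: {soc:.1f} kWh")
```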
- Date Issued
- 2016
- Identifier
- CFE0006830, ucf:51797
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006830
- Title
- Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures.
- Creator
- Alghareb, Faris, DeMara, Ronald, Lin, Mingjie, Zou, Changchun, Jha, Sumit Kumar, Song, Zixia, University of Central Florida
- Abstract / Description
- The revolution in chip manufacturing processes spanning five decades has proliferated high-performance and energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually augmented the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improving system reliability. Herein, we first propose optimized soft-error-resilient designs to improve the robustness of sub-micron computing systems; the proposed approaches deliver energy efficiency and tolerate double/multiple errors simultaneously while incurring acceptable speed degradation compared to prior work. Secondly, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems is investigated to highlight the approach that realizes favorable attributes, such as reduced critical-datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV latching circuits that achieve intriguing features, such as low energy consumption, high computing performance, and superior soft-error tolerance (i.e., the ability to concurrently tolerate Multiple-Node Upsets (MNU)), to potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives cooperate to increase the energy efficiency and soft-error resilience of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable and energy-efficient CMOS logic and spintronic memory architectures with deeply scaled devices operating at low voltages.
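As a reference point for the redundancy-based mitigation surveyed above, the sketch below shows the classic baseline, triple modular redundancy with a bitwise majority voter; the dissertation's contributions go beyond this baseline (e.g., double/multiple-error tolerance at lower energy), and none of this code comes from it.

```python
# Minimal triple-modular-redundancy (TMR) baseline: three copies of a
# combinational function vote bitwise, masking any single upset copy.
# This is the classic reference point, not the dissertation's designs.

def majority3(a: int, b: int, c: int) -> int:
    """Bitwise majority of three words: each output bit follows 2-of-3."""
    return (a & b) | (b & c) | (a & c)

def tmr(fn, x: int, upset_mask: int = 0, victim: int = 0) -> int:
    """Run fn three times; optionally flip bits in one copy (an SEU)."""
    outs = [fn(x), fn(x), fn(x)]
    outs[victim] ^= upset_mask          # inject a single-event upset
    return majority3(*outs)

parity_shift = lambda x: (x << 1) ^ (x >> 3)  # stand-in combinational logic
clean = parity_shift(0b1011)
assert tmr(parity_shift, 0b1011, upset_mask=0b0100, victim=1) == clean
print("single upset masked:", bin(clean))
```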
- Date Issued
- 2019
- Identifier
- CFE0007884, ucf:52765
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007884
- Title
- ESTABLISHING DEGRADATION RATES AND SERVICE LIFETIME OF PHOTOVOLTAIC SYSTEMS.
- Creator
- Leyte-Vidal, Albert, Hickman, James, University of Central Florida
- Abstract / Description
- As fossil fuel sources continue to diminish, oil prices continue to increase, and global warming and CO2 emissions keep impacting the environment, it has become necessary to shift energy consumption and generation onto a different path. Solar energy has proven to be one of the most promising sources of renewable energy because it is environmentally friendly, available anywhere in the world, and cost-competitive. For photovoltaic (PV) system engineers, designing a PV system is not an easy task: research demonstrates that different PV technologies behave differently under given conditions, so energy production varies not only with the capacity of the system but also with the type of module. For years, researchers have also studied how these different technologies perform over long periods of exposure in the field. In this study, data collected by the Florida Solar Energy Center over periods of more than four years was analyzed using two techniques, widely accepted by researchers and industry, to evaluate the long-term performance of five systems. The performance-ratio analysis normalizes system capacity and enables the comparison of performance between multiple systems. In the PVUSA regression analysis, regression coefficients are calculated that correspond to the effects of irradiance, wind speed, and ambient temperature; these coefficients are then used to calculate power at a predetermined set of conditions. This study allows manufacturers to address the difficulties found in system lifetime when their modules are installed in the field. It also supports the further development and improvement of the different PV technologies already commercially available.
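The PVUSA method is compact enough to sketch. In its standard form, measured power is regressed as P = I(a1 + a2 I + a3 WS + a4 Ta), and the fitted coefficients are evaluated at PVUSA Test Conditions (1000 W/m2 irradiance, 1 m/s wind speed, 20 degC ambient); repeating the fit for each year of data and comparing the resulting ratings yields a degradation rate. The data below is synthetic, standing in for field measurements.

```python
# PVUSA regression sketch: fit P = I*(a1 + a2*I + a3*WS + a4*Ta) by least
# squares, then rate the system at PVUSA Test Conditions (PTC).
# The "measurements" below are synthetic stand-ins for field data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
I = rng.uniform(400, 1000, n)          # plane-of-array irradiance (W/m^2)
WS = rng.uniform(0, 6, n)              # wind speed (m/s)
Ta = rng.uniform(15, 35, n)            # ambient temperature (deg C)
true = (0.0042, -4e-7, 1.5e-5, -2e-5)  # invented module coefficients
P = I * (true[0] + true[1] * I + true[2] * WS + true[3] * Ta)
P += rng.normal(0, 0.02, n)            # measurement noise (kW)

# Least-squares design matrix: columns I, I^2, I*WS, I*Ta.
X = np.column_stack([I, I**2, I * WS, I * Ta])
a1, a2, a3, a4 = np.linalg.lstsq(X, P, rcond=None)[0]

ptc_I, ptc_WS, ptc_Ta = 1000.0, 1.0, 20.0
p_rated = ptc_I * (a1 + a2 * ptc_I + a3 * ptc_WS + a4 * ptc_Ta)
print(f"PTC rating: {p_rated:.3f} kW")
# Fitting each year separately and regressing the yearly PTC ratings
# against time gives the degradation rate (%/year).
```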
- Date Issued
- 2010
- Identifier
- CFE0003326, ucf:48483
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003326
- Title
- Energy Efficient and Secure Wireless Sensor Networks Design.
- Creator
- Attiah, Afraa, Zou, Changchun, Chatterjee, Mainak, Wang, Jun, Yuksel, Murat, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
- Wireless Sensor Networks (WSNs) are emerging technologies that have the ability to sense, process, communicate, and transmit information to a destination, and they are expected to have a significant impact on the efficiency of many applications in various fields. Resource constraints, such as limited battery power, are the greatest challenge in WSN design, as they affect the lifetime and performance of the network. An energy-efficient, secure, and trustworthy system is vital when a WSN involves highly sensitive information. Thus, it is critical to design mechanisms that are energy efficient and secure while maintaining the desired level of quality of service. Inspired by these challenges, this dissertation is dedicated to exploiting optimization and game-theoretic approaches to handle several important issues in WSN communication, including energy efficiency, latency, congestion, dynamic traffic load, and security. We present several novel mechanisms to improve the security and energy efficiency of WSNs. Two new schemes are proposed for the network-layer stack: (a) enhancing energy efficiency through optimized sleep intervals that take the underlying dynamic traffic load into account, and (b) a routing protocol that handles wasted energy, congestion, and clustering. We also propose efficient routing and energy-efficient clustering algorithms based on optimization and game theory. Furthermore, we propose a dynamic game-theoretic framework (i.e., hyper defense) to analyze the interactions between attacker and defender as a non-cooperative security game that considers resource limitations. All the proposed schemes are validated by extensive experimental analyses obtained from simulations depicting a variety of WSN situations, chosen to represent real-world scenarios as realistically as possible. The results show that the proposed schemes achieve high performance, for example in network lifetime, compared with state-of-the-art schemes.
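As a flavor of the traffic-adaptive sleep-interval idea (the actual scheme is derived by optimization in the dissertation; the doubling rule below is an invented stand-in), a node can stretch its sleep interval while the channel is quiet and snap back when packets appear:

```python
# Invented stand-in for traffic-adaptive duty cycling in a WSN node:
# stretch the sleep interval while the channel is quiet, snap back on
# traffic. The dissertation derives its intervals via optimization.

MIN_SLEEP, MAX_SLEEP = 0.1, 5.0   # assumed bounds (seconds)

def next_sleep(current: float, packets_seen: int) -> float:
    if packets_seen > 0:
        return MIN_SLEEP                   # traffic: stay responsive
    return min(MAX_SLEEP, current * 2.0)   # quiet: lower the duty cycle

interval, awake_energy = MIN_SLEEP, 0.0
traffic = [3, 0, 0, 0, 0, 2, 0, 0]         # packets per wake-up (toy trace)
for pkts in traffic:
    awake_energy += 1.0 / (1.0 + interval) # toy proxy: radio-on fraction
    interval = next_sleep(interval, pkts)
print(f"final interval {interval:.1f}s, relative energy {awake_energy:.2f}")
```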
- Date Issued
- 2018
- Identifier
- CFE0006971, ucf:51672
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006971
- Title
- Towards Energy-Efficient and Reliable Computing: From Highly-Scaled CMOS Devices to Resistive Memories.
- Creator
- Salehi Mobarakeh, Soheil, DeMara, Ronald, Fan, Deliang, Turgut, Damla, University of Central Florida
- Abstract / Description
- The continuous increase in transistor density following Moore's Law has led to highly scaled Complementary Metal-Oxide Semiconductor (CMOS) technologies. These transistor-based process technologies offer improved density as well as a reduction in nominal supply voltage. An analysis of different aspects of 45nm and 15nm technologies, such as power consumption and cell area, is performed on an IEEE 754 single-precision floating-point unit implementation to compare the two. Based on the results, the 15nm technology offers 4 times lower energy and a 3-fold smaller footprint. New challenges also arise, such as the relative proportion of leakage power in standby mode, which can be addressed by post-CMOS technologies.
Spin-Transfer Torque Random Access Memory (STT-MRAM) has been explored as a post-CMOS technology for embedded and data-storage applications seeking non-volatility, near-zero standby energy, and high density. Toward attaining these objectives in practical implementations, various techniques to mitigate the specific reliability challenges associated with STT-MRAM elements are surveyed, classified, and assessed herein. The cost and suitability metrics assessed include the area of nanomagnetic and CMOS components per bit, access time and complexity, Sense Margin (SM), and energy or power consumption costs versus resiliency benefits. To further improve the Process Variation (PV) immunity of Sense Amplifiers (SAs), a new SA, called the Adaptive Sense Amplifier (ASA), is introduced. ASA achieves low Bit Error Rate (BER) and low Energy-Delay Product (EDP) by combining the properties of two commonly used SAs: the Pre-Charge Sense Amplifier (PCSA) and the Separated Pre-Charge Sense Amplifier (SPCSA). ASA can operate in either PCSA or SPCSA mode based on circuit requirements such as energy efficiency or reliability. ASA is then utilized in a novel approach that leverages PV in Non-Volatile Memory (NVM) arrays: the Self-Organized Sub-bank (SOS) design. SOS engages the preferred SA alternative based on the intrinsic as-built behavior of the resistive sensing timing margin, reducing latency and power consumption while maintaining acceptable access time.
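The SOS idea amounts to a per-sub-bank selection table built during characterization. The sketch below uses invented margin values and an invented threshold, and it arbitrarily treats PCSA as the mode to engage when the as-built margin allows; consult the dissertation for the real selection criterion.

```python
# Hedged sketch of Self-Organized Sub-bank (SOS) style SA selection:
# profile each sub-bank's as-built sense margin once, then engage the
# energy-efficient SA mode (here assumed to be "PCSA") only where the
# margin allows. Margin values and the threshold are invented.

MARGIN_THRESHOLD_MV = 55.0   # assumed minimum margin for the fast mode

def build_sa_map(measured_margins_mv):
    """Map sub-bank index -> SA mode, decided from characterization data."""
    return {bank: ("PCSA" if margin >= MARGIN_THRESHOLD_MV else "SPCSA")
            for bank, margin in enumerate(measured_margins_mv)}

def read(bank: int, sa_map: dict) -> str:
    # A real array would steer the sense path; we just report the mode.
    return f"sub-bank {bank}: sensing with {sa_map[bank]}"

# Process variation makes margins differ across one die (invented data).
sa_map = build_sa_map([62.0, 48.5, 71.3, 53.9])
for b in range(4):
    print(read(b, sa_map))
```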
- Date Issued
- 2016
- Identifier
- CFE0006493, ucf:51400
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006493
- Title
- Design and Implementation of PV-Firming and Optimization Algorithms For Three-Port Microinverters.
- Creator
- Alharbi, Mahmood, Batarseh, Issa, Haralambous, Michael, Mikhael, Wasfy, Yuan, Jiann-Shiun, Kutkut, Nasser, University of Central Florida
- Abstract / Description
- With increasing demand for electricity, ever-greater awareness of environmental issues, and rolling blackouts, the role of renewable energy generation is growing. This dissertation proposes the design and implementation of PV-firming and optimization algorithms for three-port microinverters.
Novel strategies are proposed in Chapters 3 and 4 for harvesting stable solar power in spite of intermittent solar irradiance. PV firming is implemented using a panel-level three-port grid-tied PV microinverter system instead of the traditional high-power energy storage and management system at the utility scale. The microinverter system consists of a flyback converter and an H-bridge inverter/rectifier, with a battery connected to the DC link. The key to these strategies lies in using static and dynamic algorithms to generate a smooth PV reference power. The outcomes are applied to various control methods to charge/discharge the battery so that a stable power generation profile is obtained. In addition, frequency-based optimization for the inverter stage is presented.
One of the design parameters of grid-tied single-phase H-bridge sinusoidal pulse-width modulation (SPWM) microinverters is the switching frequency. Its selection is a tradeoff between improving power quality by reducing total harmonic distortion (THD) and improving efficiency by reducing switching loss. In Chapter 5, two algorithms are proposed for optimizing both the power quality and the efficiency of the microinverter, using a frequency-tracking technique that requires no hardware modification. The first algorithm tracks the optimal switching frequency for maximum efficiency at a given THD value. The second maximizes the power quality of the H-bridge microinverter by tracking the switching frequency that corresponds to the minimum THD.
Real-time PV intermittency and usable-capacity data were evaluated and then further analyzed in MATLAB/Simulink to validate the PV-firming control. The proposed PV-firming and optimization algorithms were experimentally verified and the results evaluated. Finally, Chapter 6 provides a summary of key conclusions and future work to optimize the presented topology and algorithms.
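A minimal way to picture PV firming is a moving-average reference with battery compensation. The sketch below is an illustrative stand-in rather than the static/dynamic algorithms of Chapters 3 and 4: the DC-link battery absorbs or supplies the difference between raw PV power and the smoothed reference, so the grid sees the smooth profile.

```python
# Illustrative PV-firming loop (not the thesis's Chapter 3/4 algorithms):
# a moving average of recent PV power forms the smooth grid reference,
# and the DC-link battery absorbs/supplies the difference.
from collections import deque

WINDOW = 5                     # assumed smoothing window (samples)

def firm(pv_samples, batt_kwh=2.0, batt_max=4.0, dt_h=1 / 60):
    """Yield (grid_power, battery_state) per sample of raw PV power (kW)."""
    hist = deque(maxlen=WINDOW)
    for pv in pv_samples:
        hist.append(pv)
        ref = sum(hist) / len(hist)        # smooth reference power
        delta = pv - ref                   # surplus (+) or deficit (-)
        # Clamp to what the battery can actually absorb or supply.
        headroom = (batt_max - batt_kwh) / dt_h
        available = batt_kwh / dt_h
        delta = min(max(delta, -available), headroom)
        batt_kwh += delta * dt_h
        yield pv - delta, batt_kwh         # firmed output to the grid

cloud_passage = [3.0, 3.1, 0.6, 0.5, 2.9, 3.0, 3.1]  # toy kW trace
for grid, soc in firm(cloud_passage):
    print(f"grid {grid:.2f} kW, battery {soc:.2f} kWh")
```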
- Date Issued
- 2018
- Identifier
- CFE0007305, ucf:52166
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007305
- Title
- Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
- Creator
- Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
- Abstract / Description
- Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW uses genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW, as compared with traditional approaches, has been limited to relatively compact circuits, because the complexity of the genetic algorithm grows with circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of relatively large benchmark circuits compared to related works. Additionally, Design Diversity (DD) techniques that aid such evolutionary refurbishment are proposed, and the efficacy of various DD techniques is quantified and evaluated.
Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs to ensure operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations owing to decreased feature sizes. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. At design time, vulnerable portions of the circuit, identified using conventional Electronic Design Automation tools, are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions, optimizing performance and energy consumption. Primarily, SREL mitigates the timing degradation caused by transistor-aging effects in sub-micron devices by using power-gating to reduce the stress induced on active elements. As a result, fewer guardbands are needed to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime.
The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High-Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, this comes at the cost of increased reliability concerns, such as greater performance variation and more soft errors, which raises the resiliency requirements for HPC applications: they must remain functional within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems.
A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault-propagation models are derived for each HPC application; these can be used to estimate the number of corrupted memory locations at runtime. Additionally, the tradeoffs between performance and vulnerability, and the causal relations between compiler optimization and application vulnerability, are investigated.
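A toy version of such a fault-injection campaign fits in a few lines. Everything here (the stencil kernel, the injection site, the corruption metric) is an invented stand-in for the dissertation's compiler-instrumented methodology: flip one bit of application state mid-run, then count how many memory locations diverge from a golden run.

```python
# Toy fault-injection campaign (invented stand-in for the dissertation's
# compiler-instrumented runs): flip one bit of state mid-computation and
# measure how many memory locations end up corrupted.
import random
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a float64's in-memory representation (an SEU)."""
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

def stencil_run(state, steps, inject_at=None):
    """Simple 1-D averaging stencil; optionally inject a fault mid-run."""
    state = list(state)
    for t in range(steps):
        if inject_at is not None and t == inject_at:
            idx = random.randrange(len(state))
            state[idx] = flip_bit(state[idx], random.randrange(64))
        state = [(state[i - 1] + state[i] + state[(i + 1) % len(state)]) / 3
                 for i in range(len(state))]
    return state

random.seed(7)
golden = stencil_run([float(i) for i in range(64)], steps=20)
faulty = stencil_run([float(i) for i in range(64)], steps=20, inject_at=10)
corrupted = sum(g != f for g, f in zip(golden, faulty))
print(f"{corrupted}/64 locations corrupted after 10 post-fault steps")
```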
- Date Issued
- 2015
- Identifier
- CFE0006206, ucf:52889
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006206