-
-
Title
-
FIELD IMPLEMENTATION OF POLYACRYLAMIDE FOR RUNOFF FROM CONSTRUCTION SITES.
-
Creator
-
Chowdhury, Rafiqul, Chopra, Manoj, University of Central Florida
-
Abstract / Description
-
Polyacrylamide (PAM) is often used as part of a treatment train for the treatment of stormwater to reduce its turbidity. This study investigated the application of PAM within various treatment systems for a construction site environment. The general concept is to apply hydraulic principles when placing PAM blocks within an open channel in order to yield high mixing energies, leading to high turbidity removal efficiency. The first part of the study observed energy variation using a hydraulic flume for three dissimilar configurations. The flume was ultimately used to determine which configuration would be most beneficial when transposed into field-scale conditions. Three different configurations were tested in the flume: the Jump configuration, the Dispersion configuration, and the Staggered configuration. The field-scale testing served both to corroborate the findings from the controlled hydraulic flume and to characterize the elements introduced in the field when attempting to reduce the turbidity of stormwater. The Dispersion configuration proved to be the most effective at removing turbidity and exhibited greater energy used for mixing within the open channel. Consequently, an analysis aid was developed, based on calculations from the results of this study, to better serve the sediment control industry when implementing PAM blocks within a treatment system. Recommendations are made for modifications and future applications of the research conducted. This approach has great potential for expansion; continued research on this topic can address key elements such as the solubility of the PAM, the toxicity of the configuration in the field, and additional configurations that may yield more advantageous energy throughout the open channel.
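The mixing-energy argument above rests on standard open-channel hydraulics. As a rough illustration (not the study's actual calculations), the specific energy E = y + v²/(2g) of the flow can be compared upstream and downstream of a hydraulic jump; the drop between the two is the energy available for mixing PAM into the flow. All numbers and function names below are invented for the sketch:

```python
# Sketch: specific energy of open-channel flow upstream/downstream of a
# hydraulic jump, the kind of quantity compared across PAM block layouts.
# All numbers here are illustrative, not from the study.

G = 9.81  # gravitational acceleration, m/s^2

def specific_energy(depth_m, flow_m3s, width_m):
    """E = y + v^2 / (2g) for a rectangular channel of the given width."""
    velocity = flow_m3s / (width_m * depth_m)
    return depth_m + velocity ** 2 / (2 * G)

def froude(depth_m, flow_m3s, width_m):
    """Froude number; > 1 means supercritical flow (a jump can form)."""
    velocity = flow_m3s / (width_m * depth_m)
    return velocity / (G * depth_m) ** 0.5

# Energy dissipated across a jump ~ difference in specific energy.
e_up = specific_energy(0.05, 0.02, 0.30)    # shallow, fast upstream flow
e_down = specific_energy(0.12, 0.02, 0.30)  # deeper, slower downstream flow
print(f"upstream E = {e_up:.3f} m, downstream E = {e_down:.3f} m")
```

The upstream flow in this example is supercritical (Froude number above 1), so a jump, and hence energy dissipation for mixing, is possible.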
-
Date Issued
-
2011
-
Identifier
-
CFE0004017, ucf:49158
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004017
-
-
Title
-
Correctness and Progress Verification of Non-Blocking Programs.
-
Creator
-
Peterson, Christina, Dechev, Damian, Leavens, Gary, Bassiouni, Mostafa, Cash, Mason, University of Central Florida
-
Abstract / Description
-
The progression of multi-core processors has inspired the development of concurrency libraries that guarantee safety and liveness properties of multiprocessor applications. The difficulty of reasoning about safety and liveness properties in a concurrent environment has led to the development of tools to verify that a concurrent data structure meets a correctness condition or progress guarantee. However, these tools possess shortcomings regarding the ability to verify a composition of data structure operations. Additionally, verification techniques for transactional memory evaluate correctness based on low-level read/write histories, which is not applicable to transactional data structures that use high-level semantic conflict detection. In my dissertation, I present tools for checking the correctness of multiprocessor programs that overcome the limitations of previous correctness verification techniques. Correctness Condition Specification (CCSpec) is the first tool that automatically checks the correctness of a composition of concurrent multi-container operations performed in a non-atomic manner. Transactional Correctness tool for Abstract Data Types (TxC-ADT) is the first tool that can check the correctness of transactional data structures. TxC-ADT elevates the standard definitions of transactional correctness to be in terms of an abstract data type, an essential aspect for checking the correctness of transactions that synchronize only on high-level semantic conflicts. Many practical concurrent data structures, transactional data structures, and algorithms that facilitate non-blocking programming incorporate helping schemes to ensure that an operation comprising multiple atomic steps is completed according to the progress guarantee. The helping scheme introduces additional interference by the active threads in the system to achieve the designed progress guarantee. Previous progress verification techniques do not accommodate loops whose termination depends on complex behaviors of the interfering threads, making these approaches unsuitable. My dissertation presents the first progress verification technique for non-blocking algorithms that depend on descriptor-based helping mechanisms.
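The correctness conditions such tools check can be pictured with a naive brute-force test: a concurrent history is acceptable if some sequential ordering of its operations, replayed against a sequential specification, reproduces every observed return value. The sketch below illustrates only that core idea on a toy stack history; CCSpec and TxC-ADT add real-time ordering constraints and far more efficient search, and the operation names here are hypothetical:

```python
# Naive sequential-consistency-style check: accept a history if SOME
# permutation of its operations, replayed against a sequential stack
# specification, matches every observed return value. Exponential in the
# history length; real checkers prune aggressively.
from itertools import permutations

def check_history(history):
    """history: list of (op, arg, observed_result) tuples for a shared stack."""
    for order in permutations(history):
        stack, ok = [], True
        for op, arg, observed in order:
            if op == "push":
                stack.append(arg)
                result = None
            else:  # "pop"; returns None on an empty stack
                result = stack.pop() if stack else None
            if result != observed:
                ok = False
                break
        if ok:
            return True   # a witness sequential order exists
    return False

# Two threads raced: both pushes happened, then both pops.
assert check_history([("push", 1, None), ("push", 2, None),
                      ("pop", None, 2), ("pop", None, 1)])
# Impossible history: a value was popped that was never pushed.
assert not check_history([("pop", None, 7), ("push", 1, None)])
```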
-
Date Issued
-
2019
-
Identifier
-
CFE0007705, ucf:52433
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007705
-
-
Title
-
The Design, Implementation, and Refinement of Wait-Free Algorithms and Containers.
-
Creator
-
Feldman, Steven, Dechev, Damian, Heinrich, Mark, Orooji, Ali, Mucciolo, Eduardo, University of Central Florida
-
Abstract / Description
-
My research has been on the development of concurrent algorithms for shared memory systems that provide guarantees of progress. Research into such algorithms is important to developers implementing applications on mission-critical and time-sensitive systems. These guarantees of progress provide safety properties and freedom from many hazards, such as deadlock, livelock, and thread starvation. In addition to the safety concerns, the fine-grained synchronization used in implementing these algorithms promises to provide scalable performance in massively parallel systems. My research has resulted in the development of wait-free versions of the stack, hash map, ring buffer, and vector, and a multi-word compare-and-swap algorithm. Through this experience, I have learned and developed new techniques and methodologies for implementing non-blocking and wait-free algorithms. I have worked with and refined existing techniques to improve their practicality and applicability. In the creation of the aforementioned algorithms, I have developed an association model for use with descriptor-based operations. This model, originally developed for the multi-word compare-and-swap algorithm, has been applied to the design of the vector and ring buffer algorithms. To unify these algorithms and techniques, I have released Tervel, a wait-free library of common algorithms and containers. This library includes a framework that simplifies and improves the design of non-blocking algorithms. I have reimplemented several algorithms using this framework, and the resulting implementations exhibit less code duplication and fewer perceivable states. When reimplementing algorithms, I have adapted their Application Programming Interface (API) specifications to remove the ambiguity and non-deterministic behavior found when using a sequential API in a concurrent environment. To improve the performance of my algorithm implementations, I extended OVIS's Lightweight Distributed Metric Service (LDMS) data collection and transport system to support performance monitoring using the perf_event and PAPI libraries. These libraries have provided me with deeper insights into the behavior of my algorithms, and I was able to use these insights to improve the design and performance of my algorithms.
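The descriptor-based operations mentioned above follow a common pattern: an in-progress multi-step operation publishes a descriptor object in the shared location, and any thread that encounters the descriptor completes (helps) the operation before proceeding, so the operation finishes even if its initiator stalls. The following is a single-threaded caricature of that pattern with invented names (`WriteDescriptor`, `Cell`), not Tervel's actual API; real implementations install and remove descriptors with hardware compare-and-swap rather than plain assignment:

```python
# Toy simulation of descriptor-based helping: a pending write is announced
# by placing a descriptor in the slot; any reader that sees the descriptor
# finishes the write before returning a value. Single-threaded, for
# illustration only.

class WriteDescriptor:
    def __init__(self, new_value):
        self.new_value = new_value
        self.done = False

class Cell:
    def __init__(self, value):
        self.slot = value

    def begin_write(self, new_value):
        # Announce the operation (in real code: a CAS installing the
        # descriptor atomically).
        self.slot = WriteDescriptor(new_value)

    def _help(self, desc):
        # Complete someone else's announced write (in real code: a CAS
        # swapping the descriptor for the final value).
        if not desc.done:
            desc.done = True
            self.slot = desc.new_value

    def read(self):
        v = self.slot
        if isinstance(v, WriteDescriptor):
            self._help(v)          # helping step
            return self.slot
        return v

cell = Cell(0)
cell.begin_write(42)       # initiator announces, then "stalls"
assert cell.read() == 42   # a different reader helped finish the write
```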
-
Date Issued
-
2015
-
Identifier
-
CFE0005946, ucf:50813
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005946
-
-
Title
-
Practical Dynamic Transactional Data Structures.
-
Creator
-
Laborde, Pierre, Dechev, Damian, Leavens, Gary, Turgut, Damla, Mucciolo, Eduardo, University of Central Florida
-
Abstract / Description
-
Multicore programming presents the challenge of synchronizing multiple threads. Traditionally, mutual exclusion locks are used to limit access to a shared resource to a single thread at a time. Whether this lock is applied to an entire data structure or only a single element, the pitfalls of lock-based programming persist. Deadlock, livelock, starvation, and priority inversion are some of the hazards of lock-based programming that can be avoided by using non-blocking techniques. Non-blocking data structures allow scalable and thread-safe access to shared data by guaranteeing, at least, system-wide progress. In this work, we present the first wait-free hash map, which allows a large number of threads to concurrently insert, get, and remove information. Wait-freedom means that all threads make progress in a finite amount of time, an attribute that can be critical in real-time environments. We use only atomic operations that are provided by the hardware; therefore, our hash map can be utilized by a variety of data-intensive applications, including those within the domains of embedded systems and supercomputers. The challenges of providing this guarantee make the design and implementation of wait-free objects difficult. As such, there are few wait-free data structures described in the literature; in particular, there are no wait-free hash maps. It often becomes necessary to sacrifice performance in order to achieve wait-freedom. However, our experimental evaluation shows that our hash map design is, on average, 7 times faster than a traditional blocking design. Our solution outperforms the best available alternative non-blocking designs in a large majority of cases, typically by a factor of 15 or higher. The main drawback of non-blocking data structures is that only one linearizable operation can be executed by each thread at any one time. To overcome this limitation, we present a framework for developing dynamic transactional data containers. Transactional containers are those that execute a sequence of operations atomically and in such a way that concurrent transactions appear to take effect in some sequential order. We take an existing algorithm that transforms non-blocking sets into static transactional versions (LFTT), and we modify it to support maps. We implement a non-blocking transactional hash map using this new approach. We continue to build on LFTT by implementing a lock-free vector, using a methodology that allows LFTT to be compatible with non-linked data structures. A static transaction requires all operands and operations to be specified at compile time, and no code may be executed between transactions. These limitations render static transactions impractical for most use cases. We modify LFTT to support dynamic transactions, and we enhance it with additional features. Dynamic transactions allow operands to be specified at runtime rather than compile time, and threads can execute code between the data structure operations of a transaction. We build a framework for transforming non-blocking containers into dynamic transactional data structures, called Dynamic Transactional Transformation (DTT), and provide a library of novel transactional containers. Our library provides the wait-free progress guarantee and supports transactions among multiple data structures, whereas previous work on data structure transactions has been limited to operating on a single container. Our approach is 3 times faster than software transactional memory, and its performance matches that of its lock-free transactional counterpart.
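The non-blocking style described above typically rests on a compare-and-swap (CAS) retry loop: read the shared state, build a new version, and attempt to install it atomically, retrying on interference. Below is a generic lock-free stack sketch of that loop, not the dissertation's wait-free hash map; since Python exposes no hardware CAS, `AtomicRef` simulates one with a lock purely for illustration (in C++ this role would be played by `std::atomic` compare-exchange):

```python
# Classic lock-free (Treiber-style) stack built on a simulated CAS.
import threading

class AtomicRef:
    """Stand-in for a hardware atomic reference; the lock models a single
    atomic instruction, it is NOT how real non-blocking code works."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()
    def get(self):
        return self._value
    def compare_and_set(self, expected, new):
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item, nxt):
        self.item, self.next = item, nxt

class LockFreeStack:
    def __init__(self):
        self.head = AtomicRef(None)
    def push(self, item):
        while True:                 # retry loop: lock-free, not wait-free
            old = self.head.get()
            if self.head.compare_and_set(old, Node(item, old)):
                return
    def pop(self):
        while True:
            old = self.head.get()
            if old is None:
                return None         # empty stack
            if self.head.compare_and_set(old, old.next):
                return old.item

s = LockFreeStack()
s.push(1); s.push(2)
assert s.pop() == 2 and s.pop() == 1 and s.pop() is None
```

A failed `compare_and_set` means another thread changed the head in between; the loop simply re-reads and retries, which is what distinguishes lock-free retry from lock-based blocking.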
-
Date Issued
-
2018
-
Identifier
-
CFE0007215, ucf:52212
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007215
-
-
Title
-
SYNTHESIS AND APPLICATIONS OF RING OPENING METATHESIS POLYMERIZATION BASED FUNCTIONAL BLOCK COPOLYMERS.
-
Creator
-
Biswas, Sanchita, Belfield, Kevin, University of Central Florida
-
Abstract / Description
-
Ring-opening metathesis polymerization (ROMP) is established as one of the most efficient controlled living polymerization methods, with various applications in polymer science and technology. The research presented in this dissertation addresses several applications of multifunctional, well-defined norbornene-based block copolymers synthesized by ROMP using ruthenium-based Grubbs catalysts. These novel block copolymers were applied to stabilize maghemite nanoparticles, creating superparamagnetic polymeric nanocomposites. The J-aggregation properties of porphyrin dyes were improved via self-assembly with a customized norbornene polymer. Novel multimodal copolymer probes were synthesized for two-photon fluorescence integrin-targeted bioimaging. Chapter 1 presents a brief overview of ROMP, along with ruthenium metal catalysts and selected applications of the polymers related to this research. Superparamagnetic maghemite nanoparticles are important in biotechnology fields such as enhanced magnetic resonance imaging (MRI), magnetically controlled drug delivery, and biomimetics. However, cluster formation and the eventual loss of nano-dimensions are major obstacles for these materials. Chapter 2 presents a solution to this problem through nanoparticles stabilized in a polymer matrix. The synthesis and characterization of novel diblock copolymers, consisting of epoxy pendant anchoring groups to chelate maghemite nanoparticles and steric stabilizing groups, as well as the generation of nanocomposites and their characterization, including surface morphologies and magnetic properties, are discussed in Chapter 2. In Chapter 3, further improvement of the nanocomposites by ligand modification, and the synthesis of pyrazole-templated diblock copolymers and their impact on stabilizing the maghemite nanocomposite, are presented.
Additionally, organic-soluble magnetic nanocomposites with high magnetizations were encapsulated in an amphiphilic copolymer and dispersed in water to assess their water stability by TEM. To gain a preliminary measure of the biocompatibility of the micelle-encapsulated polymeric magnetic nanocomposites, cell viability was determined. In Chapter 4, the aggregation behaviors of two porphyrin-based dyes were investigated. A new amphiphilic homopolymer containing secondary amine moieties was synthesized and characterized. At low pH, the polymer became water soluble and initiated stable J-aggregation of the porphyrin; spectroscopic data supported the aggregation behavior. Two-photon fluorescence microscopy (2PFM) has become a powerful technique for non-invasive bioimaging and the potential diagnosis and treatment of a number of diseases via excitation in the near-infrared (NIR) region. The fluorescence emission upon two-photon absorption (2PA) depends quadratically on the intensity of the excitation light (compared to the linear dependence of one-photon absorption), offering several advantages for biological applications over conventional one-photon absorption (1PA): high 3D spatial resolution confined near the focal point, along with less photodamage and less interference from biological tissues at longer wavelengths (~700-900 nm). Hence, efficient 2PA fluorophores conjugated with specific targeting moieties provide even better bioimaging probes for examining desired cellular processes or areas of interest. The αVβ3 integrin adhesive protein plays a significant role in regulating angiogenesis and is over-expressed in uncontrolled neovascularization during tumor growth, invasion, and metastasis. Cyclic RGD peptides are well-known antagonists of αVβ3 integrin which suppress the angiogenesis process, thus preventing tumor growth.
In Chapter 5, the synthesis, photophysical studies, and bioimaging are reported for a versatile norbornene-based block copolymer multifunctional scaffold containing biocompatible (PEG), two-photon fluorescent (fluorenyl), and targeting (cyclic RGD peptide) moieties. This water-soluble polymeric multifunctional probe with negligible cytotoxicity exhibited much stronger fluorescence and high localization in U87MG cells (which overexpress the integrin) compared to control MCF7 cells. Norbornene-based polymers and copolymers thus show remarkable versatility for the creation of advanced functional magnetic, photonic, and biophotonic materials.
-
Date Issued
-
2010
-
Identifier
-
CFE0003065, ucf:48296
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003065
-
-
Title
-
Cascaded Digital Refinement for Intrinsic Evolvable Hardware.
-
Creator
-
Thangavel, Vignesh, DeMara, Ronald, Sundaram, Kalpathy, Song, Zixia, University of Central Florida
-
Abstract / Description
-
Intrinsic evolution of reconfigurable hardware is sought to solve computational problems using the intrinsic processing behavior of System-on-Chip (SoC) platforms. SoC devices combine the capabilities of analog and digital embedded components within a reconfigurable fabric under software control. A new technique is developed for these fabrics that leverages the digital resources' enhanced accuracy and signal refinement capability to improve the circuit performance of the analog resources, which provide low-power processing and high computation rates. In particular, Differential Digital Correction (DDC) is developed, utilizing an error metric computed from the evolved analog circuit to reconfigure the digital fabric, thereby enhancing the precision of analog computations. The approach developed herein, Cascaded Digital Refinement (CaDR), explores a multi-level strategy of utilizing DDC to refine the intrinsic evolution of analog computational circuits in order to construct building blocks known as Constituent Functional Blocks (CFBs). The CFBs are developed in a cascaded sequence, followed by digital evolution of higher-level control of these CFBs to build the final solution for the larger circuit at hand. One such platform, the Cypress PSoC-5LP, was utilized to realize solutions to ordinary differential equations by first evolving various powers of the independent variable, followed by their combinations, to emulate mathematical series-based solutions over the desired range of values. This is shown to enhance accuracy and precision while incurring lower computational energy and time overheads. The fitness function for each CFB being evolved is different from the fitness function defined for the overall problem.
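The staged strategy above, evolving each constituent functional block against its own fitness function before composing the blocks, can be caricatured in software. The sketch below hill-climbs a polynomial coefficient vector toward a target function, standing in for one CFB; the actual work evolves reconfigurable analog/digital hardware on the PSoC, and every name and parameter here is invented for illustration:

```python
# Caricature of per-block intrinsic evolution: one "CFB" is a polynomial
# coefficient vector hill-climbed against its OWN fitness function
# (sum of squared errors against a target function over a range).
import random

def evolve_cfb(target, xs, n_coeffs=3, steps=2000, seed=1):
    rng = random.Random(seed)

    def fitness(c):  # lower is better
        return sum((sum(ci * x**i for i, ci in enumerate(c)) - target(x)) ** 2
                   for x in xs)

    best = [0.0] * n_coeffs
    best_f = fitness(best)
    for _ in range(steps):
        cand = [ci + rng.gauss(0, 0.1) for ci in best]  # random mutation
        f = fitness(cand)
        if f < best_f:              # accept only improvements
            best, best_f = cand, f
    return best, best_f

# Evolve one block to emulate x**2 on [0, 1], as a series term might be.
coeffs, err = evolve_cfb(lambda x: x * x, [i / 10 for i in range(11)])
# SSE at all-zero coefficients is ~2.53; hill-climbing lands well below.
print(f"residual SSE = {err:.3f}")
```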
-
Date Issued
-
2015
-
Identifier
-
CFE0005723, ucf:50123
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005723
-
-
Title
-
A Comparative Study of the Effect of Block Scheduling and Traditional Scheduling on Student Achievement for the Florida Algebra 1 End-of-Course Examination.
-
Creator
-
Underwood, Arthur, Murray, Kenneth, Murray, Barbara, Baldwin, Lee, Hutchinson, Cynthia, University of Central Florida
-
Abstract / Description
-
The focus of this research was the effect of school schedules on student achievement for ninth-grade students in a Florida school district. Data were collected from two central Florida high schools for the 2011-2012 and 2012-2013 school years. Five one-way analyses of covariance (ANCOVA) were performed to ascertain whether there was any interaction between school schedule and student achievement. Examined were the interactions (a) between schedule and schools, (b) schedule and male students, (c) schedule and female students, (d) schedule and Black students, and (e) schedule and Hispanic students. The independent variable, school schedule, consisted of two levels: traditional schedule and A/B block schedule. The dependent variable was the spring Algebra 1 End-of-Course Examination (EOC), and the covariate was the Florida Comprehensive Assessment Test (FCAT) Mathematics eighth-grade developmental scale score. School schedule was not significantly related to students' spring Algebra 1 EOC scores, F(1,788), p = .932. School schedule was not significantly related to male students' spring Algebra 1 EOC scores, F(1,392), p = .698. School schedule was not significantly related to female students' spring Algebra 1 EOC scores, F(1,393), p = .579. School schedule was not significantly related to Black students' spring Algebra 1 EOC scores, F(1,186), p = .545. School schedule was not significantly related to Hispanic students' spring Algebra 1 EOC scores, F(1,184), p = .700.
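The ANCOVA reported above reduces to a model comparison: regress the outcome on the covariate alone and on the covariate plus a group indicator, and form an F statistic from the drop in residual error. The sketch below implements that comparison from scratch on made-up numbers; the study's actual data and analysis software are not reproduced here:

```python
# ANCOVA as nested-model comparison, in pure Python.
# F = ((SSE_reduced - SSE_full) / 1) / (SSE_full / (n - 3)) on F(1, n - 3).

def solve(a, b):
    """Gauss-Jordan elimination with partial pivoting for small a x = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def sse(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    b = solve(xtx, xty)
    return sum((yi - sum(bi * xi for bi, xi in zip(b, r))) ** 2
               for r, yi in zip(X, y))

# Made-up data: covariate (prior score), group (0 = traditional,
# 1 = block), outcome with a real group effect of +3 plus a small
# alternating disturbance so the fit is not exact.
cov = [1, 2, 3, 4, 1, 2, 3, 4]
grp = [0, 0, 0, 0, 1, 1, 1, 1]
y = [2 + 0.5 * c + 3 * g + (0.1 if i % 2 == 0 else -0.1)
     for i, (c, g) in enumerate(zip(cov, grp))]

full = [[1.0, c, g] for c, g in zip(cov, grp)]      # covariate + group
reduced = [[1.0, c] for c in cov]                   # covariate only
sse_f, sse_r = sse(full, y), sse(reduced, y)
f_stat = (sse_r - sse_f) / (sse_f / (len(y) - 3))
print(f"F = {f_stat:.1f}")  # large F here: group matters in this fake data
```

A study result like F(1,788), p = .932 corresponds to the opposite outcome: the group term explains essentially none of the covariate-adjusted variance.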
-
Date Issued
-
2014
-
Identifier
-
CFE0005433, ucf:50406
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005433
-
-
Title
-
Recognition of Complex Events in Open-source Web-scale Videos: Features, Intermediate Representations and their Temporal Interactions.
-
Creator
-
Bhattacharya, Subhabrata, Shah, Mubarak, Guha, Ratan, Laviola II, Joseph, Sukthankar, Rahul, Moore, Brian, University of Central Florida
-
Abstract / Description
-
Recognition of complex events in consumer-uploaded Internet videos, captured under real-world settings, has emerged as a challenging area of research across both the computer vision and multimedia communities. In this dissertation, we present a systematic decomposition of complex events into hierarchical components, make an in-depth analysis of how existing research caters to various levels of this hierarchy, and identify three key stages where we make novel contributions, keeping complex events in focus. These are: (a) Extraction of novel semi-global features: first, we introduce a Lie-algebra-based representation of the dominant camera motion present while capturing videos and show how this can be used as a complementary feature for video analysis. Second, we propose compact clip-level descriptors of a video based on the covariance of appearance and motion features, which we further use in a sparse coding framework to recognize realistic actions and gestures. (b) Construction of intermediate representations: we propose an efficient probabilistic representation from low-level features computed from videos, based on Maximum Likelihood Estimates, which demonstrates state-of-the-art performance in large-scale visual concept detection. Finally, (c) Modeling temporal interactions between intermediate concepts: using block Hankel matrices and harmonic analysis of slowly evolving Linear Dynamical Systems, we propose two new discriminative feature spaces for complex event recognition and demonstrate significantly improved recognition rates over previously proposed approaches.
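The block Hankel matrices mentioned in contribution (c) are built by stacking time-shifted copies of per-frame feature vectors, so that the matrix's low-rank structure reflects the underlying linear dynamical system. A minimal construction, in pure Python with toy 2-D "concept scores" rather than real detector outputs:

```python
# Build a block Hankel matrix from a sequence of feature vectors:
# block H[i][j] = frames[i + j], so anti-diagonal blocks repeat.

def block_hankel(frames, rows):
    """frames: list of equal-length feature vectors; rows: block rows.
    Returns a plain list-of-lists matrix with `rows * dim` scalar rows."""
    cols = len(frames) - rows + 1
    assert cols >= 1, "need at least `rows` frames"
    dim = len(frames[0])
    # Block-row i contributes `dim` scalar rows; column j holds frames[i+j].
    return [[frames[i + j][k] for j in range(cols)]
            for i in range(rows) for k in range(dim)]

# Four frames of 2-D concept scores, stacked with 2 block rows.
frames = [[1, 0], [2, 1], [3, 2], [4, 3]]
H = block_hankel(frames, rows=2)
# H is (2 blocks x 2 dims) rows by 3 columns; shifted copies line up:
assert H == [[1, 2, 3],
             [0, 1, 2],
             [2, 3, 4],
             [1, 2, 3]]
```

In the dissertation's setting the scalar entries would be intermediate concept scores, and the rank/harmonic structure of H (not computed here) is what feeds the discriminative feature spaces.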
-
Date Issued
-
2013
-
Identifier
-
CFE0004817, ucf:49724
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004817
-
-
Title
-
DESIGN AND CHARACTERIZATION OF NOVEL DEVICES FOR NEW GENERATION OF ELECTROSTATIC DISCHARGE (ESD) PROTECTION STRUCTURES.
-
Creator
-
Salcedo, Javier, Liou, Juin, University of Central Florida
-
Abstract / Description
-
The technology evolution and complexity of new circuit applications involve emerging reliability problems and ever greater sensitivity of integrated circuits (ICs) to electrostatic discharge (ESD)-induced damage. Regardless of the aggressive evolution in downscaling and the subsequent improvement in application performance, ICs must still comply with minimum standards of ESD robustness in order to be commercially viable. Although the topic of ESD has received attention industry-wide, the design of robust protection structures and circuits remains challenging because ESD failure mechanisms continue to become more acute and design windows less flexible. The sensitivity of smaller devices, along with a limited understanding of the ESD phenomena and the resulting empirical approach to solving the problem, has yielded time-consuming, costly, and unpredictable design procedures. As turnaround design cycles in new technologies continue to decrease, the traditional trial-and-error design strategy is no longer acceptable, and better analysis capabilities and a systematic design approach are essential to accomplish the increasingly difficult task of adequate ESD protection-circuit design. This dissertation presents a comprehensive design methodology for implementing custom on-chip ESD protection structures in different commercial technologies. First, the ESD topic in the semiconductor industry is reviewed, along with ESD standards and schemes commonly used to provide ESD protection in ICs. The general ESD protection approaches are illustrated and discussed using different types of protection components and the concept of the ESD design window. The problem of implementing and assessing ESD protection structures is addressed next, starting from a general discussion of two design methods. The first ESD design method follows an experimental approach, in which design requirements are obtained via fabrication, testing, and failure analysis.
The second method consists of technology computer-aided design (TCAD)-assisted ESD protection design. This method incorporates numerical simulations at different stages of the ESD design process, and thus results in a more predictable and systematic ESD development strategy. Physical models considered in the device simulation are discussed and subsequently utilized in the different ESD designs in this study. The implementation of new custom ESD protection devices, and a further integration strategy based on the concept of the high-holding, low-voltage-trigger silicon controlled rectifier (SCR) (HH-LVTSCR), is demonstrated for implementing ESD solutions in commercial low-voltage digital and mixed-signal applications developed using complementary metal oxide semiconductor (CMOS) and bipolar CMOS (BiCMOS) technologies. The ESD protection concept proposed in this study is also successfully incorporated into a tailored ESD protection solution for an emerging CMOS-based embedded microelectromechanical systems (MEMS) sensor system-on-a-chip (SoC) technology. Circuit applications that are required to operate at relatively large input/output (I/O) voltages, above/below the VDD/VSS core circuit power supply, introduce further complications in the development and integration of ESD protection solutions. In these applications, the I/O operating voltage can extend over one order of magnitude beyond the safe operating voltage established in advanced technologies, while the IC is also required to comply with stringent ESD robustness requirements. A practical TCAD methodology based on process and device simulation is demonstrated for assessing the device physics and subsequently designing and implementing custom P1N1-P2N2 and coupled P1N1-P2N2//N2P3-N3P1 silicon controlled rectifier (SCR)-type devices for ESD protection in different circuit applications, including those operating at I/O voltages considerably above/below the VDD/VSS.
Results from the TCAD simulations are compared with measurements and used to develop technology- and circuit-adapted protection structures capable of blocking large voltages and providing versatile dual-polarity symmetric/asymmetric S-type current-voltage characteristics for high ESD protection. The design guidelines introduced in this dissertation are used to optimize and extend the ESD protection capability of existing CMOS/BiCMOS technologies by implementing smaller and more robust single- or dual-polarity ESD protection structures within the flexibility provided by the specific fabrication process. The ESD design methodologies and the characteristics of the developed protection devices are demonstrated via ESD measurements obtained from fabricated stand-alone devices and on-chip ESD protections. The superior ESD protection performance of the devices developed in this study is also successfully verified in IC applications where standard ESD protection approaches are not suitable to meet stringent area constraints and performance requirements.
-
Date Issued
-
2006
-
Identifier
-
CFE0001213, ucf:46942
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001213