Current Search: processing » Turgut, Damla
- Title
- Applied Advanced Error Control Coding for General Purpose Representation and Association Machine Systems.
- Creator
-
Dai, Bowen, Wei, Lei, Lin, Mingjie, Rahnavard, Nazanin, Turgut, Damla, Sun, Qiyu, University of Central Florida
- Abstract / Description
-
The General-Purpose Representation and Association Machine (GPRAM) is proposed to focus on computation in terms of variation and flexibility rather than precision and speed. A GPRAM system uses a vague representation and has no predefined tasks. Drawing on lessons learned from error control coding, neuroscience, and the human visual system, we investigate several types of error control codes, including Hamming codes and Low-Density Parity-Check (LDPC) codes, and extend them in different directions.

In error control codes, the XOR logic gate alone is used to connect different nodes. Inspired by biological systems and Turbo codes, we propose and study non-linear codes with expanded operations, such as codes that include AND and OR gates, which raises the problem of mismatched prior probabilities. The discussion of the critical challenges in designing codes and iterative decoding for non-equiprobable symbols may pave the way for a more comprehensive understanding of bio-signal processing. The limitation of the XOR operation in iterative decoding with non-equiprobable symbols is described; it can potentially be resolved by applying a quasi-XOR operation and an intermediate transformation layer. However, codes constructed for non-equiprobable symbols with the former approach do not perform satisfactorily in terms of error correction capability. Probabilistic messages for the sum-product algorithm using XOR, AND, and OR operations with non-equiprobable symbols are then computed. The primary motivation for constructing these codes is to establish the GPRAM system rather than to conduct error control coding per se: the GPRAM system is built by applying various operations over a substantially over-complete basis, and it can continuously achieve better and simpler approximations for complex tasks.

Approaches to decoding LDPC codes with non-equiprobable binary symbols are discussed in light of the aforementioned prior-probability mismatch problem. The traditional Tanner graph must be modified because messages passed from check nodes to information bits differ from those passed to parity-check bits; in other words, message passing in the two directions is identical in a conventional Tanner graph, whereas the forward and backward messages differ in our case. A method of optimizing the signal constellation so as to maximize the channel mutual information is also described.

Finally, a simple Image Processing Unit (IPU) structure is proposed for the GPRAM system, with images as its input. The IPU consists of a randomly constructed LDPC code, an iterative decoder, a switch, and a scaling and decision device. The quality of the input images is severely degraded in order to mimic the visual information variability (VIV) experienced in human visual systems. The IPU is capable of (a) reliably recognizing digits in images of extremely poor quality; (b) achieving hyper-acuity performance comparable to the human visual system; and (c) significantly improving the recognition rate using a randomly constructed LDPC code that is not specifically optimized for the tasks.
- Date Issued
- 2016
- Identifier
- CFE0006449, ucf:51413
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006449
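
The abstract above mentions computing probabilistic sum-product messages for XOR checks with non-equiprobable symbols. The following is a minimal, illustrative Python sketch of the standard extrinsic message for an XOR check node; it is not the dissertation's code, and the function name and example priors are assumptions made here for illustration.

```python
# Toy sketch (not from the dissertation): the extrinsic message a sum-product
# decoder sends from an XOR check node, allowing non-equiprobable inputs,
# i.e. P(bit = 1) need not be 0.5.

def xor_check_message(p_others):
    """Extrinsic probability that the target bit equals 1, given that the XOR
    of the target bit and the other bits on the check must be 0.

    p_others: iterable of P(bit_i = 1) for every other bit on the check.
    Uses the identity  P(parity of others = 1) = (1 - prod(1 - 2*p_i)) / 2.
    """
    prod = 1.0
    for p in p_others:
        prod *= (1.0 - 2.0 * p)
    return (1.0 - prod) / 2.0

# Equiprobable inputs give an uninformative 0.5; skewed priors do not.
print(xor_check_message([0.5, 0.5]))   # 0.5
print(xor_check_message([0.9, 0.2]))   # ~0.74
```

With equiprobable inputs the message stays at 0.5; skewed priors shift it, which is the prior-probability mismatch the abstract discusses for AND/OR extensions.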
- Title
- Mathematical and Computational Methods for Freeform Optical Shape Description.
- Creator
-
Kaya, Ilhan, Foroosh, Hassan, Rolland, Jannick, Turgut, Damla, Thompson, Kevin, Ilegbusi, Olusegun, University of Central Florida
- Abstract / Description
-
Slow-servo single-point diamond turning, as well as advances in computer-controlled small-lap polishing, enables the fabrication of freeform optics, specifically optical surfaces for imaging applications that are not rotationally symmetric. Freeform optical elements will have a profound importance in the future of optical technology. Orthogonal polynomials added onto conic sections have been used extensively to describe optical surface shapes. The optical testing industry has chosen to represent the departure of a wavefront under test from a reference sphere in terms of orthogonal φ-polynomials, specifically Zernike polynomials. Various other forms of polynomials may be considered for describing freeform optical surfaces, however, both in optical design and in support of fabrication; more recently, radial basis functions have also been investigated for optical shape description. In the application of orthogonal φ-polynomials to freeform optical shape description there are important limitations, such as the number of terms required, as well as edge-ringing and ill-conditioning when representing the surface with the accuracy demanded by the most stringent optics applications.

The first part of this dissertation focuses on describing freeform optical surfaces with φ-polynomials and shows their limitations when higher orders are included, together with possible remedies. We show that one possible remedy is to use edge-clustered fitting grids. Given the different grid types, we furthermore compare the efficacy of different families of φ-polynomials, namely Zernike and gradient-orthogonal Q-polynomials.

In the second part of this thesis, a local, efficient, and accurate hybrid method is developed to greatly reduce the number of polynomial terms required to achieve a level of accuracy in freeform shape description that was shown under prior art to require thousands of terms, including many higher-order terms. This comes at the expense of multiple sub-apertures, and as such the computational methods may leverage parallel processing. The new method combines the assets of both radial basis functions and orthogonal φ-polynomials for freeform shape description and, owing to its locality and stitching principles, is applicable across any aperture shape.

Finally, to understand the possible advantages of parallel computing for optical surface description, the benefits of making effective use of the computational power offered by many-core platforms for computing φ-polynomials are investigated. The φ-polynomials, specifically Zernike and gradient-orthogonal Q-polynomials, are implemented with a set of recurrence-based parallel algorithms on Graphics Processing Units (GPUs). The results show that more than an order of magnitude speedup over a sequential implementation is possible in the computation of φ-polynomials when the recurrence-based parallel algorithms are adopted.
- Date Issued
- 2013
- Identifier
- CFE0005012, ucf:49993
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005012
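
The abstract above centers on φ-polynomials, particularly Zernike polynomials, and their recurrence-based evaluation on GPUs. As a small reference point, here is a plain Python sketch of the closed-form Zernike radial polynomial R_n^|m|(ρ); it is not the recurrence-based GPU implementation described in the dissertation, and the function name is hypothetical.

```python
# Illustrative closed-form Zernike radial polynomial (direct factorial sum),
# not the recurrence-based parallel algorithm from the dissertation.
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^|m|(rho); requires n >= |m| and n - |m| even."""
    m = abs(m)
    if n < m or (n - m) % 2 != 0:
        raise ValueError("require n >= |m| and n - |m| even")
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

# R_2^0(rho) = 2*rho**2 - 1 (the defocus term)
print(zernike_radial(2, 0, 0.5))   # -0.5
```

The direct sum is adequate for low orders; the ill-conditioning and cost at high orders are exactly what motivate the recurrence-based GPU evaluation discussed in the abstract.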
- Title
- Towards Energy-Efficient and Reliable Computing: From Highly-Scaled CMOS Devices to Resistive Memories.
- Creator
-
Salehi Mobarakeh, Soheil, DeMara, Ronald, Fan, Deliang, Turgut, Damla, University of Central Florida
- Abstract / Description
-
The continuous increase in transistor density based on Moore's Law has led to highly scaled Complementary Metal-Oxide Semiconductor (CMOS) technologies. These transistor-based process technologies offer improved density as well as a reduction in nominal supply voltage. An analysis of different aspects of the 45nm and 15nm technologies, such as power consumption and cell area, is carried out on an IEEE 754 single-precision floating-point unit implementation to compare the two. Based on the results, the 15nm technology offers four times lower energy and a threefold smaller footprint. New challenges also arise, such as the relative proportion of leakage power in standby mode, which can be addressed by post-CMOS technologies.

Spin-Transfer Torque Random Access Memory (STT-MRAM) has been explored as a post-CMOS technology for embedded and data-storage applications seeking non-volatility, near-zero standby energy, and high density. Towards attaining these objectives in practical implementations, various techniques to mitigate the specific reliability challenges associated with STT-MRAM elements are surveyed, classified, and assessed herein. The cost and suitability metrics assessed include the area of nanomagnetic and CMOS components per bit, access time and complexity, Sense Margin (SM), and energy or power consumption costs versus resiliency benefits.

To further improve the Process Variation (PV) immunity of the Sense Amplifiers (SAs), a new SA called the Adaptive Sense Amplifier (ASA) is introduced. ASA achieves a low Bit Error Rate (BER) and low Energy-Delay Product (EDP) by combining the properties of two commonly used SAs, the Pre-Charge Sense Amplifier (PCSA) and the Separated Pre-Charge Sense Amplifier (SPCSA). ASA can operate in either PCSA or SPCSA mode based on the requirements of the circuit, such as energy efficiency or reliability. ASA is then utilized in a novel approach that actually leverages the PV in Non-Volatile Memory (NVM) arrays using a Self-Organized Sub-bank (SOS) design. SOS engages the preferred SA alternative based on the intrinsic as-built behavior of the resistive sensing timing margin to reduce latency and power consumption while maintaining acceptable access time.
- Date Issued
- 2016
- Identifier
- CFE0006493, ucf:51400
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006493
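
The abstract above describes the Self-Organized Sub-bank (SOS) design, in which each sub-bank engages its preferred sense-amplifier alternative based on its as-built sensing margin. The toy Python sketch below only illustrates that selection idea; the field names, threshold, and mode-assignment rule are assumptions made here, not the circuit-level criteria used in the dissertation.

```python
# Toy illustration of the SOS idea: assign each sub-bank whichever Adaptive
# Sense Amplifier (ASA) mode suits its measured as-built margin. The threshold
# and data model are hypothetical placeholders for this sketch.
from dataclasses import dataclass

@dataclass
class SubBank:
    bank_id: int
    sense_margin_mv: float   # measured as-built sense margin for this sub-bank

def assign_sa_mode(banks, margin_threshold_mv=80.0):
    """Map each sub-bank to 'PCSA' (favoring speed/energy) when its margin is
    comfortable, or 'SPCSA' (favoring PV tolerance) when the margin is tight."""
    return {
        b.bank_id: ("PCSA" if b.sense_margin_mv >= margin_threshold_mv else "SPCSA")
        for b in banks
    }

banks = [SubBank(0, 95.0), SubBank(1, 62.0), SubBank(2, 81.5)]
print(assign_sa_mode(banks))   # {0: 'PCSA', 1: 'SPCSA', 2: 'PCSA'}
```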
- Title
- Design Disjunction for Resilient Reconfigurable Hardware.
- Creator
-
Alzahrani, Ahmad, DeMara, Ronald, Yuan, Jiann-Shiun, Lin, Mingjie, Wang, Jun, Turgut, Damla, University of Central Florida
- Abstract / Description
-
Contemporary reconfigurable hardware devices have the capability to achieve the high performance, power efficiency, and adaptability required to meet a wide range of design goals. With the scaling challenges facing current complementary metal-oxide semiconductor (CMOS) technology, new concepts and methodologies supporting efficient adaptation to handle reliability issues are becoming increasingly prominent. Reconfigurable hardware, with its ability to realize self-organization features, is expected to play a key role in the design of future dependable hardware architectures. However, the exponential increase in density and complexity of current commercial SRAM-based field-programmable gate arrays (FPGAs) has escalated the overhead associated with dynamic runtime design adaptation. Traditionally, static modular redundancy techniques are considered to surmount this limitation; however, they can incur substantial overheads in both area and power requirements. To achieve a better trade-off among performance, area, power, and reliability, this research proposes design-time approaches that enable fine selection of the redundancy level based on target reliability goals and autonomous adaptation to runtime demands. To achieve this goal, three studies were conducted.

First, a graph- and set-theoretic approach named Hypergraph-Cover Diversity (HCD) is introduced as a preemptive design technique that shifts the dominant costs of resiliency to design time. In particular, union-free hypergraphs are exploited to partition the pool of reconfigurable resources into highly separable subsets, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform demonstrates the potential of the proposed fault-tolerance (FT) method to achieve a 37.5% area saving and up to a 66% reduction in power consumption compared with the frequently used TMR scheme, while providing superior fault tolerance.

Second, Design Disjunction, based on non-adaptive group testing, is developed to realize a low-overhead fault-tolerant system capable of self-testing and self-recovery using runtime partial reconfiguration. Reconfiguration is guided by resource-grouping procedures which employ non-linear measurements given by the constructive property of f-disjunctness to extend runtime resilience to a large fault space and realize a favorable range of trade-offs. Disjunct designs are created using the mosaic convergence algorithm developed herein, such that at least one configuration in the library evades any occurrence of up to d resource faults, where d is lower-bounded by f. Experimental results for a set of MCNC and ISCAS benchmarks demonstrate f-diagnosability at the individual slice level with an average isolation resolution of 96.4% (94.4%) for f=1 (f=2), while incurring an average critical-path delay impact of only 1.49% and an area cost roughly comparable to conventional 2-MR approaches.

Finally, the proposed Design Disjunction method is evaluated as a design-time method to improve timing yield in the presence of large random within-die (WID) process variations for applications with a moderately high production capacity.
- Date Issued
- 2015
- Identifier
- CFE0006250, ucf:51086
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006250
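
The abstract above relies on disjunct designs from non-adaptive group testing, where at least one configuration evades any occurrence of up to d resource faults. Below is a hedged brute-force Python check of d-disjunctness for a small 0/1 matrix; the mapping of rows and columns to resources and configurations, and the function name, are illustrative assumptions, and this is not the dissertation's mosaic convergence algorithm.

```python
# Toy brute-force test of d-disjunctness, the group-testing property that
# Design Disjunction builds on. How rows/columns map to resources and
# configurations here is an assumption made for illustration only.
from itertools import combinations

def is_d_disjunct(columns, d):
    """A 0/1 matrix (given column-wise) is d-disjunct if no column is covered
    by the union (bitwise OR) of any d other columns: for every column c and
    every choice of d others, c has a row equal to 1 where all d others are 0."""
    n = len(columns)
    rows = len(columns[0])
    for j in range(n):
        others = [i for i in range(n) if i != j]
        for subset in combinations(others, d):
            union = [any(columns[i][r] for i in subset) for r in range(rows)]
            # column j must "stick out" of the union in at least one row
            if not any(columns[j][r] and not union[r] for r in range(rows)):
                return False
    return True

# The 3x3 identity matrix is 1-disjunct; adding an all-ones column breaks it.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(is_d_disjunct(identity, 1))                # True
print(is_d_disjunct(identity + [[1, 1, 1]], 1))  # False
```

The brute force is exponential in d and is only meant to make the definition concrete; constructive methods such as the dissertation's mosaic convergence procedure are needed at realistic scales.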
- Title
- Exploring Techniques for Providing Privacy in Location-Based Services Nearest Neighbor Query.
- Creator
-
Asanya, John-Charles, Guha, Ratan, Turgut, Damla, Bassiouni, Mostafa, Zou, Changchun, Mohapatra, Ram, University of Central Florida
- Abstract / Description
-
Increasing numbers of people are subscribing to location-based services, but as the popularity grows, so do the privacy concerns. A variety of research exists to address these privacy concerns, with each technique addressing a different model by which location-based services respond to subscribers. In this work, we present ideas to address privacy concerns for the two main models, namely the snapshot nearest neighbor query model and the continuous nearest neighbor query model.

First, we address the snapshot nearest neighbor query model, in which the location-based service's response represents a snapshot at a point in time. In this model, we introduce a novel idea based on the concept of an open set in a topological space, where points belong to a subset called the neighborhood of a point. We extend this concept to provide anonymity to real objects, where each object belongs to a disjoint neighborhood such that each neighborhood contains a single object. To help identify the objects, we implement a database which dynamically scales in direct proportion to the size of the neighborhood. To retrieve information secretly and allow the database to expose only the requested information, private information retrieval protocols are executed twice on the data. Our study of the implementation shows that the concept of a single-object neighborhood is able to efficiently scale the database with the objects in the area.

The size of the database grows with the size of the grid and the objects covered by the location-based services. Typically, creating neighborhoods, computing distances between objects in the area, and running private information retrieval protocols cause the CPU to respond slowly as the database grows. In order to handle a large number of objects, we explore kernels and parallel computing on the GPU, and we develop a GPU parallel implementation of the snapshot query. In our experiment, we exploit parameter tuning. The results show that, with parameter tuning and the parallel computing power of the GPU, we are able to significantly reduce the response time as the number of objects increases. To determine the response time of an application without knowledge of the intricacies of the GPU architecture, we extend our analysis to predict GPU execution time: we develop the run-time equation for an operation, extrapolate the run time for a problem set based on that equation, and then provide a model to predict GPU response time.

As an alternative, the snapshot nearest neighbor query privacy problem can be addressed using secure hardware computing, which can eliminate the need to protect the rest of the sub-system and minimize resource usage and network transmission time. In this approach, a secure coprocessor is used to provide privacy: all information is processed inside the coprocessor to deny adversaries access to any private information. To obfuscate the access pattern to external memory locations, we use an oblivious random access memory methodology to access the server. Experimental evaluation shows that using a secure coprocessor reduces resource usage and query response time as the size of the coverage area and the number of objects increase.

Second, we address privacy concerns in the continuous nearest neighbor query model, where location-based services automatically respond to a change in the object's location. In this model, we present solutions for two different types, known as moving query static object and moving query moving object. For the solutions, we propose plane partitioning using a Voronoi diagram and a continuous fractal space-filling curve using Hilbert curve ordering to create a continuous nearest neighbor relationship between the points of interest along a path. Specifically, the space-filling curve maps multi-dimensional objects to one dimension, assigning values to the objects based on proximity. To prevent subscribers from issuing a query each time there is a change in location, and to reduce the response time, we introduce the concepts of transition and update time to indicate where and when the nearest neighbor changes. We also introduce a database that dynamically scales with the number of objects in a path to help obscure and relate objects. By executing the private information retrieval protocol twice on the data, the user secretly retrieves the requested information from the database. The results of our experiment show that using plane partitioning and a fractal space-filling curve to create nearest neighbor relationships with transition times between objects reduces the total response time.
- Date Issued
- 2015
- Identifier
- CFE0005757, ucf:50098
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005757
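
The abstract above uses Hilbert curve ordering to map multi-dimensional points of interest onto one dimension while preserving proximity. The sketch below is the well-known iterative (x, y)-to-Hilbert-index conversion for an n×n grid (n a power of two), given as a minimal Python illustration; it is not the dissertation's implementation, and the function names are assumptions.

```python
# Standard Hilbert curve mapping: (x, y) cell of an n-by-n grid -> 1-D index.
# Nearby cells tend to receive nearby indices, which is the proximity property
# the continuous nearest neighbor scheme in the abstract exploits.

def _rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so the sub-curve is oriented consistently."""
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Return the Hilbert curve index of cell (x, y); n must be a power of two."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d

print(xy2d(4, 1, 0))   # 1  (second cell along the curve on a 4x4 grid)
print(xy2d(4, 2, 0))   # 14
```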
- Title
- UTILIZING EDGE IN IOT AND VIDEO STREAMING APPLICATIONS TO REDUCE BOTTLENECKS IN INTERNET TRAFFIC.
- Creator
-
Akpinar, Kutalmis, Hua, Kien, Zou, Changchun, Turgut, Damla, Wang, Jun, University of Central Florida
- Abstract / Description
-
There is a large surge of data over the Internet due to the increasing demand for multimedia content. According to a recent study, it is estimated that 80% of Internet traffic will be video by 2022. At the same time, the number of IoT devices on the Internet will be double the human population. While IoT infrastructure standards are still nonexistent, enterprise solutions tend to encourage cloud-based approaches, causing an additional surge of data over the Internet. This study proposes solutions to bring video traffic and IoT computation back to the edges of the network, so that costly Internet infrastructure upgrades are not necessary.

An efficient way to prevent this surge for IoT is to push application-specific computation to the edge of the network, close to where the data is generated, so that large volumes of data can be eliminated before being delivered to the cloud. In this study, an event query language and processing environment is provided to process events from various devices. The query processing environment brings application developers, sensor infrastructure providers, and end users together. It uses boolean events as the streaming and processing units, which addresses device heterogeneity and pushes data-intensive tasks to the edge of the network.

The second focus of the study is Video-on-Demand (VoD) applications. A characteristic of VoD traffic is its high redundancy: due to the demand for popular content, the same video traffic flows through an Internet Service Provider's network as overlapping but separate streams. In previous studies on redundancy elimination, overlapping streams are merged at the link level by receiving each packet only for the first stream and re-using it for the subsequent duplicate streams. In this study, we significantly improve these techniques by introducing a merger-aware routing method.

Our final focus is increasing the utilization of Content Delivery Network (CDN) servers at the edge of the network to reduce long-distance traffic. The proposed system uses Software-Defined Networking (SDN) to route adaptive video streaming clients to the best available CDN servers in terms of network availability. While performing this network assistance, the system does not reveal the video request information to the network provider, thus enabling privacy protection for encrypted streams. Request routing is performed at the segment level for adaptive streaming, which makes it possible to re-route the client to the best available CDN without interruption if network conditions change during the stream.
- Date Issued
- 2019
- Identifier
- CFE0007882, ucf:52774
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007882
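
The abstract above describes an edge-side query environment that treats boolean events as the streaming and processing unit, so that raw data can be discarded before reaching the cloud. The following toy Python sketch illustrates that filtering idea under assumptions made here: the nested and/or query format, the field names, and the forward callback are all hypothetical and are not the dissertation's event query language.

```python
# Toy edge-side boolean-event filter: sensors emit boolean events, a composite
# AND/OR query is evaluated at the edge, and only events that satisfy it are
# forwarded upstream, so raw streams never cross the Internet.

def evaluate(query, event):
    """Evaluate a nested ('and' | 'or', [terms]) query; a plain string term is
    looked up as a boolean field of the event dict (missing fields are False)."""
    if isinstance(query, str):
        return bool(event.get(query, False))
    op, terms = query
    results = (evaluate(t, event) for t in terms)
    return all(results) if op == "and" else any(results)

def edge_filter(query, events, forward):
    """Forward an event to the cloud only when the composite condition holds."""
    for ev in events:
        if evaluate(query, ev):
            forward(ev)

query = ("and", ["motion_detected", ("or", ["door_open", "window_open"])])
events = [
    {"motion_detected": True, "door_open": False, "window_open": False},
    {"motion_detected": True, "door_open": True,  "window_open": False},
]
edge_filter(query, events, forward=print)   # only the second event is sent
```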