Current Search: Parallel processing
- Title
- MODIFICATIONS TO THE FUZZY-ARTMAP ALGORITHM FOR DISTRIBUTED LEARNING IN LARGE DATA SETS.
- Creator
-
Castro, Jose R, Georgiopoulos, Michael, University of Central Florida
- Abstract / Description
-
The Fuzzy-ARTMAP (FAM) algorithm is one of the premier neural network architectures for classification problems. FAM can learn online and is usually faster than other neural network approaches. Nevertheless, the learning time of FAM can slow down considerably when the size of the training set increases into the hundreds of thousands. We apply data partitioning and network partitioning to the FAM algorithm in sequential and parallel settings to achieve better convergence time and to efficiently train with large databases (hundreds of thousands of patterns). Our parallelization is implemented on a Beowulf cluster of workstations. Two data partitioning approaches and two network partitioning approaches are developed. Extensive testing of all the approaches is done on three large datasets (half a million data points each). One of them is the Forest Covertype database from Blackard, and the other two are artificially generated Gaussian data with different percentages of overlap between classes. Speedups in the data partitioning approach reached the order of hundreds without having to invest in parallel computation. Speedups in the network partitioning approach are close to linear on a cluster of workstations. Both methods allowed us to reduce the computation time of training the neural network on large databases from days to minutes. We prove formally that the workload balance of our network partitioning approaches will never be worse than an acceptable bound, and we also demonstrate the correctness of these parallelization variants of FAM.
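A minimal sketch of the data-partitioning flow described in this abstract, assuming a toy stand-in learner (per-class centroids) in place of actual Fuzzy-ARTMAP training; the function names and the merge step are illustrative and are not the dissertation's algorithms:

```python
# Sketch of data-partitioned training: split the training set across
# workers, train independently on each chunk, merge the learned categories.
# NOTE: fam_train is a hypothetical stand-in (per-class centroids), not FAM.
from multiprocessing import Pool
import numpy as np

def fam_train(chunk):
    """Stand-in learner: one 'category' (a centroid) per class in the chunk."""
    X, y = chunk
    return [(label, X[y == label].mean(axis=0)) for label in np.unique(y)]

def partitioned_train(X, y, n_workers=4):
    # Split the training set into n_workers roughly equal chunks.
    chunks = [(X[i::n_workers], y[i::n_workers]) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial = pool.map(fam_train, chunks)
    # Merge step: take the union of the categories learned on each partition.
    return [cat for cats in partial for cat in cats]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 4))
    y = (X[:, 0] > 0).astype(int)
    print(len(partitioned_train(X, y)), "categories learned")
```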
- Date Issued
- 2004
- Identifier
- CFE0000065, ucf:46092
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000065
- Title
- Improved Interpolation in SPH in Cases of Less Smooth Flow.
- Creator
-
Brun, Oddny, Wiegand, Rudolf, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
We introduced a method presented in Information Field Theory (IFT) [Abramovich et al., 2007] to improve interpolation in Smoothed Particle Hydrodynamics (SPH) in cases of less smooth flow. The method makes use of wavelet theory combined with B-splines for interpolation. The idea is to identify any jumps a function may have and then reconstruct the smoother segments between the jumps. The results of our work demonstrated a superior capability, in a particularly challenging SPH application, to better conserve jumps and more accurately interpolate the smoother segments of the function. The results also demonstrated increased computational efficiency with limited loss of accuracy, as the number of multiplications and the execution time were reduced. Similar benefits were observed for functions with spikes analyzed by the same method. Lesser but similar effects were also demonstrated for real-life data sets of a less smooth nature. SPH is widely used in the modeling and simulation of the flow of matter. SPH presents advantages compared to grid-based methods in terms of both computational efficiency and accuracy, in particular when dealing with less smooth flow. The results we achieved through our research are an improvement to the model in cases of less smooth flow, in particular flow with jumps and spikes. Up until now, such improvements have been sought through modifications to the models' physical equations and/or kernel functions, and have only partially been able to address the issue. This research, by introducing wavelet theory and IFT to a field of science that, to our knowledge, does not currently utilize these methods, lays the groundwork for future research ideas to benefit SPH. Among those ideas are further development of criteria for wavelet selection, the use of smoothing splines for SPH interpolation, and the incorporation of Bayesian field theory. Improving the method's accuracy, stability, and efficiency under more challenging conditions, such as flow with jumps and spikes, will benefit applications in a wide range of sciences. In medicine alone, such improvements will further increase real-time diagnostic, treatment, and training opportunities, because jumps and spikes are often the characteristics of significant physiological and anatomical conditions such as pulsatile blood flow, peristaltic intestinal contractions, and the appearance of organs' edges in imaging.
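A toy sketch of the "find the jumps, then fit the smooth segments" idea described in this abstract, with a finite-difference threshold standing in for the wavelet-based jump detector and SciPy's B-spline fit standing in for the segment reconstruction; none of this reproduces the IFT method itself:

```python
# Detect candidate jumps, then fit an independent B-spline per smooth segment.
import numpy as np
from scipy.interpolate import make_interp_spline

def segmented_spline(x, y, jump_factor=5.0, k=3):
    dy = np.abs(np.diff(y))
    # Stand-in jump detector: unusually large first differences.
    cuts = np.where(dy > jump_factor * np.median(dy))[0] + 1
    splines = []
    for seg in np.split(np.arange(len(x)), cuts):
        if len(seg) > k:  # need more than k points for a degree-k spline
            splines.append((x[seg[0]], x[seg[-1]],
                            make_interp_spline(x[seg], y[seg], k=k)))
    return splines  # list of (x_lo, x_hi, spline), one per smooth segment

x = np.linspace(0.0, 2.0, 400)
y = np.sin(4 * x) + (x > 1.0) * 2.0           # smooth signal with one jump
for lo, hi, s in segmented_spline(x, y):
    print(f"segment [{lo:.2f}, {hi:.2f}], midpoint value {s((lo + hi) / 2):.3f}")
```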
- Date Issued
- 2016
- Identifier
- CFE0006446, ucf:51451
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006446
- Title
- Mathematical and Computational Methods for Freeform Optical Shape Description.
- Creator
-
Kaya, Ilhan, Foroosh, Hassan, Rolland, Jannick, Turgut, Damla, Thompson, Kevin, Ilegbusi, Olusegun, University of Central Florida
- Abstract / Description
-
Slow-servo single-point diamond turning, as well as advances in computer-controlled small lap polishing, enables the fabrication of freeform optics, specifically, optical surfaces for imaging applications that are not rotationally symmetric. Freeform optical elements will have a profound importance in the future of optical technology. Orthogonal polynomials added onto conic sections have been extensively used to describe optical surface shapes. The optical testing industry has chosen to represent the departure of a wavefront under test from a reference sphere in terms of orthogonal φ-polynomials, specifically Zernike polynomials. Various forms of polynomials for describing freeform optical surfaces may be considered, however, both in optical design and in support of fabrication. More recently, radial basis functions were also investigated for optical shape description. In the application of orthogonal φ-polynomials to optical freeform shape description, there are important limitations, such as the number of terms required, as well as edge-ringing and ill-conditioning in representing the surface with the accuracy demanded by the most stringent optics applications. The first part of this dissertation focuses upon describing freeform optical surfaces with φ-polynomials and shows their limitations when including higher orders, together with possible remedies. We show that a possible remedy is to use edge-clustered fitting grids. Given different grid types, we furthermore compared the efficacy of using different types of φ-polynomials, namely Zernike and gradient orthogonal Q-polynomials. In the second part of this dissertation, a local, efficient, and accurate hybrid method is developed in order to greatly reduce the number of polynomial terms required to achieve the higher levels of accuracy in freeform shape description that were shown to require thousands of terms, including many higher-order terms, under prior art. This comes at the expense of multiple sub-apertures, and as such the computational methods may leverage parallel processing. This new method combines the assets of both radial basis functions and orthogonal φ-polynomials for freeform shape description and is uniquely applicable across any aperture shape due to its locality and stitching principles. Finally, in order to comprehend the possible advantages of parallel computing for optical surface descriptions, the benefits of making effective use of the impressive computational power offered by multi-core platforms for the computation of φ-polynomials are investigated. The φ-polynomials, specifically Zernike and gradient orthogonal Q-polynomials, are implemented with a set of recurrence-based parallel algorithms on Graphics Processing Units (GPUs). The results show that more than an order of magnitude speedup is possible in the computation of φ-polynomials over a sequential implementation if the recurrence-based parallel algorithms are adopted.
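For reference, the Zernike radial polynomial R_n^m can be evaluated from its standard closed-form sum, as sketched below. The dissertation's contribution replaces this factorial form, which becomes ill-conditioned at high orders, with recurrence relations parallelized on GPUs; the recurrences themselves are not reproduced here.

```python
# Closed-form evaluation of the Zernike radial polynomial R_n^m(rho).
from math import factorial

def zernike_radial(n, m, rho):
    m = abs(m)
    if (n - m) % 2:            # R_n^m vanishes when n - m is odd
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k)
           * factorial((n + m) // 2 - k)
           * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# Example: R_4^2(rho) = 4*rho^4 - 3*rho^2, so R_4^2(0.7) = -0.5096.
print(zernike_radial(4, 2, 0.7))
```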
- Date Issued
- 2013
- Identifier
- CFE0005012, ucf:49993
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005012
- Title
- SOURCE REPRESENTATION AND FRAMING IN CHILDHOOD IMMUNIZATION COMMUNICATION.
- Creator
-
Raneri, April, Matusitz, Jonathan, University of Central Florida
- Abstract / Description
-
Research has indicated a strong interest in knowing who is being represented, and how information is being represented, in communication about childhood immunization. This study uses a two-part analysis to look at source representation and framing in childhood immunization communication. In a quantitative analysis, articles from the New York Times and USA Today were examined for their source representation, their use of fear appeals through the Extended Parallel Processing Model (EPPM), and their use of frames through the application of Prospect Theory. A qualitative semiotic analysis was conducted on 36 images that appeared on www.yahoo.com and www.google.com to find common themes in who is being represented and how information is being portrayed through the images. Results found a high prevalence of representation of the Centers for Disease Control and Prevention, other governmental agencies, and views from health/medical professionals in both the articles and the images.
- Date Issued
- 2010
- Identifier
- CFE0003016, ucf:48343
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003016
- Title
- Simulation, Analysis, and Optimization of Heterogeneous CPU-GPU Systems.
- Creator
-
Giles, Christopher, Heinrich, Mark, Ewetz, Rickard, Lin, Mingjie, Pattanaik, Sumanta, Flitsiyan, Elena, University of Central Florida
- Abstract / Description
-
With the computing industry's recent adoption of the Heterogeneous System Architecture (HSA) standard, we have seen a rapid change in heterogeneous CPU-GPU processor designs. State-of-the-art heterogeneous CPU-GPU processors tightly integrate multicore CPUs and multi-compute-unit GPUs together on a single die. This brings the MIMD processing capabilities of the CPU and the SIMD processing capabilities of the GPU together into a single cohesive package, with new HSA features comprising better programmability, coherency between the CPU and GPU, a shared Last Level Cache (LLC), and shared virtual memory address spaces. These advancements can potentially bring marked gains in heterogeneous processor performance and have piqued the interest of researchers who wish to unlock these potential performance gains. Therefore, in this dissertation I explore the heterogeneous CPU-GPU processor and application design space with the goal of answering interesting research questions, such as: (1) what are the architectural design trade-offs in heterogeneous CPU-GPU processors, and (2) how do we best maximize heterogeneous CPU-GPU application performance on a given system? To enable my exploration of the heterogeneous CPU-GPU design space, I introduce a novel discrete event-driven simulation library called KnightSim and a novel computer architectural simulator called M2S-CGM. M2S-CGM includes all of the simulation elements necessary to simulate coherent execution between a CPU and GPU with a shared LLC and shared virtual memory address spaces. I then utilize M2S-CGM to conduct three architectural studies. First, I study the architectural effects of shared LLC and CPU-GPU coherence on the overall performance of non-collaborative GPU-only applications. Second, I profile and analyze a set of collaborative CPU-GPU applications to determine how to best optimize them for maximum collaborative performance. Third, I study the impact of four key architectural parameters on collaborative CPU-GPU performance: GPU compute unit coalesce size, GPU-to-memory-controller bandwidth, GPU frequency, and system-wide switching fabric latency.
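For orientation, a generic discrete event-driven simulation core of the kind such simulators build on is sketched below; the class, function, and event names are illustrative and do not reflect KnightSim's or M2S-CGM's actual APIs.

```python
# Generic discrete event-driven simulation core: a clock plus a time-ordered
# event queue; callbacks advance simulated time by scheduling future events.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0
        self._queue = []   # min-heap of (time, seq, callback)
        self._seq = 0      # tie-breaker so the heap never compares callbacks

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)

def cpu_request(sim):
    print(f"t={sim.now}: CPU issues a memory request")
    sim.schedule(100, llc_response)          # model a 100-cycle LLC hit

def llc_response(sim):
    print(f"t={sim.now}: shared LLC responds")

sim = Simulator()
sim.schedule(0, cpu_request)
sim.run(until=1000)
```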
- Date Issued
- 2019
- Identifier
- CFE0007807, ucf:52346
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007807
- Title
- Improvement of Data-Intensive Applications Running on Cloud Computing Clusters.
- Creator
-
Ibrahim, Ibrahim, Bassiouni, Mostafa, Lin, Mingjie, Zhou, Qun, Ewetz, Rickard, Garibay, Ivan, University of Central Florida
- Abstract / Description
-
MapReduce, designed by Google, is widely used as the most popular distributed programming model in cloud environments. Hadoop, an open-source implementation of MapReduce, is a data management framework on large clusters of commodity machines for handling data-intensive applications. Many prominent enterprises, including Facebook, Twitter, and Adobe, have been using Hadoop for their data-intensive processing needs. Task stragglers in MapReduce jobs dramatically impede job execution on massive datasets in cloud computing systems. This impedance is due to the uneven distribution of input data and computation load among cluster nodes, heterogeneous data nodes, data skew in the reduce phase, resource contention situations, and network configurations. All of these factors can cause delays, failures, and violations of job completion time. One of the key issues that can significantly affect the performance of cloud computing is the balancing of computation load among cluster nodes. Replica placement in the Hadoop distributed file system plays a significant role in data availability and the balanced utilization of clusters. Under the current replica placement policy (RPP) of the Hadoop distributed file system (HDFS), the replicas of data blocks cannot be evenly distributed across the cluster's nodes, so the current HDFS must rely on a load balancing utility to balance the distribution of replicas, which results in extra overhead in time and resources. This dissertation addresses the data load balancing problem and presents an innovative replica placement policy for HDFS that can balance the data load evenly among the cluster's nodes. The heterogeneity of cluster nodes exacerbates the issue of computational load balancing; therefore, another replica placement algorithm is proposed in this dissertation for heterogeneous cluster environments. The timing of identifying a straggler map task is very important for straggler mitigation in data-intensive cloud computing. To mitigate straggler map tasks, the Present progress and Feedback based Speculative Execution (PFSE) algorithm is proposed in this dissertation. PFSE is a new straggler identification scheme that identifies straggler map tasks based on feedback information received from completed tasks, in addition to the progress of the currently running task. Straggler reduce tasks aggravate violations of MapReduce job completion time; a straggler reduce task is typically the result of bad data partitioning during the reduce phase. The Hash partitioner employed by Hadoop may cause intermediate data skew, which results in straggler reduce tasks. In this dissertation, a new partitioning scheme, named the Balanced Data Clusters Partitioner (BDCP), is proposed to mitigate straggler reduce tasks. BDCP is based on sampling of the input data and feedback information about the currently processing task. BDCP can assist in straggler mitigation during the reduce phase and minimize job completion time in MapReduce jobs. The results of extensive experiments corroborate that the algorithms and policies proposed in this dissertation can improve the performance of data-intensive applications running on cloud platforms.
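A toy sketch in the spirit of the sampling-based skew mitigation described in this abstract (this is not BDCP itself): estimate key frequencies from a sample of the intermediate data, then assign heavy keys greedily to the least-loaded reducer instead of hashing them blindly.

```python
# Build a key-to-reducer map from sampled key frequencies so that the
# estimated reduce-phase load is balanced across reducers.
import heapq
from collections import Counter

def build_partition_map(sampled_keys, n_reducers):
    freq = Counter(sampled_keys)
    loads = [(0, r) for r in range(n_reducers)]   # min-heap of (load, reducer)
    heapq.heapify(loads)
    mapping = {}
    for key, count in freq.most_common():         # heaviest keys first
        load, r = heapq.heappop(loads)
        mapping[key] = r
        heapq.heappush(loads, (load + count, r))
    return mapping  # keys absent from the sample can fall back to hashing

sample = ["a"] * 60 + ["b"] * 25 + ["c"] * 10 + ["d"] * 5
print(build_partition_map(sample, 2))   # {'a': 0, 'b': 1, 'c': 1, 'd': 1}
```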
- Date Issued
- 2019
- Identifier
- CFE0007818, ucf:52804
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007818