Title
-
EFFECT OF A HUMAN-TEACHER VS. A ROBOT-TEACHER ON HUMAN LEARNING: A PILOT STUDY.
-
Creator
-
Smith, Melissa, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
Studies about the dynamics of human-robot interactions have increased within the past decade as robots become more integrated into the daily lives of humans. However, much of the research into learning and robotics has focused on methods that would allow robots to learn from humans, and very little has been done on how and what, if possible, humans could learn from programmed robots. A between-subjects experiment was conducted in which two groups were compared: a group in which participants learned a simple pick-and-place block task via video of a human teacher, and a group in which participants learned the same pick-and-place block task via video of a robotic teacher. After being taught the task, the participants performed a 15-minute distractor task and were then timed in their reconstruction of the block configuration. An exit survey asking about their level of comfort learning from robot and computer entities was given upon completion. Results showed no significant difference in the rebuild scores of the two groups, but a marginally significant difference in their rebuild times. Exit survey results, research implications, and future work are discussed.
-
Date Issued
-
2011
-
Identifier
-
CFH0004068, ucf:44809
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004068
-
-
Title
-
ATTRIBUTIONS OF BLAME IN A HUMAN-ROBOT INTERACTION SCENARIO.
-
Creator
-
Scholcover, Federico, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
This thesis worked toward answering the following question: Where, if at all, do the beliefs and behaviors associated with interacting with a nonhuman agent deviate from how we treat a human? This was done by exploring the interrelated fields of Human-Computer and Human-Robot Interaction in the literature review, viewing them through the theoretical lens of anthropomorphism. A study examined how 104 participants would attribute blame in a robotic surgery scenario, as detailed in a vignette. A majority of results were statistically non-significant; however, some results emerged that may imply a diffusion of responsibility in human-robot collaboration scenarios.
-
Date Issued
-
2014
-
Identifier
-
CFH0004587, ucf:45224
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004587
-
-
Title
-
MESHLESS HEMODYNAMICS MODELING AND EVOLUTIONARY SHAPE OPTIMIZATION OF BYPASS GRAFTS ANASTOMOSES.
-
Creator
-
El Zahab, Zaher, Kassab, Alain, University of Central Florida
-
Abstract / Description
-
Objectives: The main objective of this dissertation is to establish a formal shape optimization procedure for a given bypass graft end-to-side distal anastomosis (ETSDA). The motivation behind this dissertation is that most previous ETSDA shape optimization research cited in the literature relied on direct optimization approaches that do not guarantee accurate optimization results. Three different ETSDA models are considered herein: the conventional, the Miller cuff, and the hood models. Materials and Methods: The ETSDA shape optimization is driven by three computational components: a localized collocation meshless method (LCMM) solver, an automated geometry pre-processor, and a genetic-algorithm-based optimizer. The LCMM solver makes it very convenient to set up an autonomous optimization mechanism for the ETSDA models. The task of the automated pre-processor is to randomly distribute solution points in the ETSDA geometries. The task of the optimizer is to adjust the ETSDA geometries so as to mitigate abnormal hemodynamics parameters. Results: The results reported in this dissertation entail the stabilization and validation of the LCMM solver in addition to the shape optimization of the considered ETSDA models. The LCMM stabilization results consist of validating a custom-designed upwinding scheme on different one-dimensional and two-dimensional test cases. The LCMM validation is done for incompressible steady and unsteady flow applications in the ETSDA models. The ETSDA shape optimization includes single-objective optimization results in steady flow situations and bi-objective optimization results in pulsatile flow situations. Conclusions: The LCMM solver provides verifiably accurate resolution of hemodynamics and is demonstrated to be third-order accurate in a comparison to a benchmark analytical solution of the Navier-Stokes equations. The genetic-algorithm-based shape optimization approach proved to be very effective for the conventional and Miller cuff ETSDA models. The shape optimization results for those two models definitely suggest that the graft caliber should be maximized, whereas the anastomotic angle and the cuff height (in the Miller cuff model) should be chosen as a compromise between the spatial and temporal gradients of wall shear stress. The shape optimization of the hood ETSDA model did not prove to be advantageous; however, it could be meaningful with the inclusion of the suture line cut length as an optimization parameter.
-
Date Issued
-
2008
-
Identifier
-
CFE0002165, ucf:47927
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002165
-
-
Title
-
AN ARCHITECTURE FOR HIGH-PERFORMANCE PRIVACY-PRESERVING AND DISTRIBUTED DATA MINING.
-
Creator
-
Secretan, James, Georgiopoulos, Michael, University of Central Florida
-
Abstract / Description
-
This dissertation discusses the development of an architecture and associated techniques to support Privacy Preserving and Distributed Data Mining. The field of Distributed Data Mining (DDM) attempts to solve the challenges inherent in coordinating data mining tasks with databases that are geographically distributed, through the application of parallel algorithms and grid computing concepts. The closely related field of Privacy Preserving Data Mining (PPDM) adds the dimension of privacy to the problem, seeking ways for organizations to collaborate in mining their databases collectively while preserving the privacy of their records. Developing data mining algorithms for DDM and PPDM environments can be difficult, and there is little software to support it. In addition, because these tasks can be computationally demanding, taking hours or even days to complete, organizations should be able to take advantage of high-performance and parallel computing to accelerate them. Unfortunately, no existing framework provides all of these services easily for a developer. In this dissertation, such a framework, called APHID (Architecture for Private, High-performance Integrated Data mining), is developed to support the creation and execution of DDM and PPDM applications. The architecture allows users to flexibly and seamlessly integrate cluster and grid resources into their DDM and PPDM applications. The architecture is scalable and is split into highly decoupled services to ensure flexibility and extensibility. This dissertation first develops a comprehensive example algorithm, a privacy-preserving Probabilistic Neural Network (PNN), which serves as a basis for analysis of the difficulties of DDM/PPDM development. The privacy-preserving PNN is the first such PNN in the literature, and provides not only a practical algorithm ready for use in privacy-preserving applications, but also a template for other data-intensive algorithms and a starting point for analyzing APHID's architectural needs. After analyzing the difficulties in the PNN algorithm's development, as well as the shortcomings of the systems surveyed, this dissertation presents the first concrete programming model joining high-performance computing resources with a privacy-preserving data mining process. Unlike many existing PPDM development models, the platform of services is language independent, allowing layers and algorithms to be implemented in popular languages (Java, C++, Python, etc.). An implementation of a PPDM algorithm is developed in Java utilizing the new framework. Performance results are presented, showing that APHID can enable highly simplified PPDM development while speeding up resource-intensive parts of the algorithm.
-
Date Issued
-
2009
-
Identifier
-
CFE0002853, ucf:48076
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002853
-
-
Title
-
MULTIMEDIA COMPUTER-BASED TRAINING AND LEARNING: THE ROLE OF REFERENTIAL CONNECTIONS IN SUPPORTING COGNITIVE LEARNING OUTCOMES.
-
Creator
-
Scielzo, Sandro, Jentsch, Florian, University of Central Florida
-
Abstract / Description
-
Multimedia theory has generated a number of principles and guidelines to support computer-based training (CBT) design. However, the cognitive processes responsible for learning, from which these principles and guidelines stem, are only indirectly derived by focusing on differences in cognitive learning outcomes. Unfortunately, the effects that cognitive processes have on learning are based on the assumption that cognitive learning outcomes are indicative of certain cognitive processes. Such circular reasoning is what prompted this dissertation. Specifically, this dissertation looked at the notion of referential connections, a prevalent cognitive process thought to support knowledge acquisition in a multimedia CBT environment. Referential connections, and the related cognitive mechanisms supporting them, are responsible for creating associations between verbal and visual information; as a result, their impact on multimedia learning is theorized to be far-reaching. Therefore, one of the main goals of this dissertation was to address the issue of indirectly assessing cognitive processes by directly measuring referential connections, to (a) verify the presence of referential connections and (b) measure the extent to which referential connections affect cognitive learning outcomes. To achieve this goal, a complete review of the prevalent multimedia theories was brought forth. The most important factors thought to influence referential connections were extracted and cataloged into variables that were manipulated, fixed, covaried, or randomized to empirically examine the link between referential connections and learning. Specifically, this dissertation manipulated referential connections by varying the temporal presentation of modalities and the color coding of instructional material. Manipulating the temporal presentation of modalities was achieved by presenting modalities either simultaneously or sequentially. Color coding manipulations capitalized on pre-attentive highlighting and pairing of elements (i.e., pairing text with corresponding visuals). As such, the computer-based training varied color coding on three levels: absence of color coding, color coding without pairing text and corresponding visual aids, and color coding that also paired text and corresponding visual aids. The modalities employed in the experiment were written text and static visual aids, and the computer-based training taught the principles of flight to naïve participants. Furthermore, verbal and spatial aptitudes were used as covariates, as they have consistently been shown to affect learning. Overall, the manipulations were hypothesized to differentially affect referential connections and thereby alter cognitive learning outcomes. Specifically, training with simultaneously presented modalities was hypothesized to be superior, in terms of referential connections and learning performance, to a successive presentation, and color coding modalities with pairing of verbal and visual correspondents was hypothesized to be superior to other forms of color coding. Finally, it was also hypothesized that referential connections would positively correlate with cognitive learning outcomes and, indeed, mediate the effects of temporal contiguity and color coding on learning. A total of 96 participants were randomly assigned to one of six experimental groups and trained on the principles of flight. The key construct of referential connections was successfully measured with three methods. Cognitive learning outcomes were captured by a traditional declarative test and by two integrative (i.e., knowledge application) tests. Results showed that the two multimedia manipulations impacted cognitive learning outcomes and did so through corresponding changes in the related referential connections (i.e., through mediation). Specifically, as predicted, referential connections mediated the impact of both temporal contiguity and color coding on lower- and higher-level cognitive learning outcomes. Theoretical and practical implications of the results are discussed in relation to computer-based training design principles and guidelines. Specifically, theoretical implications focus on the contribution that referential connections make to multimedia learning theory, and practical implications are brought forth in terms of instructional design issues. Future research considerations are described as they relate to further exploring the role of referential connections within multimedia CBT paradigms.
-
Date Issued
-
2008
-
Identifier
-
CFE0002224, ucf:47899
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002224
-
-
Title
-
A STUDY OF DIGITAL COMMUNICATION TOOLS USED IN ONLINE HIGH SCHOOL COURSES.
-
Creator
-
Putney, Nathan, Gunter, Glenda, University of Central Florida
-
Abstract / Description
-
The purpose of this study was to determine the degree to which selected communication tools used by teachers who teach online are positively perceived by their students as improving feelings of self-efficacy and motivation, and which tools may be perceived as significantly more effective than others. Students from the Florida Virtual School, a leader in online course delivery for grades 6-12, were surveyed to find their perceptions about how their teachers' use of email, instant messaging, chat, the telephone, the discussion area, the whiteboard, and assignment feedback affected their motivation and success in an online high school course. Correlations were computed to discover whether there were any significant relationships between variables relating to teacher interaction and motivation. In addition, distributions of student responses to survey questions about digital communication tools and demographics were examined. It was found that there is a high degree of correlation between the frequency of teachers' use of digital communication tools and students' perception of their level of motivation. It was also found that the digital communication tools most frequently used by teachers in communicating with their students were email, the telephone, and assignment feedback, and that students found these same tools the most helpful in their learning. In addition, no significant demographic differences were found in students' perception of teachers' use of tools to enhance learning and motivation in their courses, except in the number of previous online courses taken. These findings can help direct online high school teachers in their selection of digital tools used to communicate with their students.
-
Date Issued
-
2008
-
Identifier
-
CFE0002333, ucf:47784
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002333
-
-
Title
-
TRAIT AROUSABILITY AND ITS IMPACT ON ADAPTIVE MULTIMEDIA TRAINING.
-
Creator
-
Schatz, Sae, Bowers, Clint, University of Central Florida
-
Abstract / Description
-
Today's best intelligent, adaptive, multimedia trainers have shown excellent performance; however, their results still fall far short of what good human tutors can achieve. The overarching thesis of this paper is that future intelligent, adaptive systems will be improved by taking into account relevant, consistent, and meaningful individual differences. Specifically, responding to individual differences among trainees will (a) form more accurate individual baselines within a training system, and (b) better inform system responses (so that they interpret and respond to observable data more appropriately). One variable to consider is trait arousability, which describes individual differences in sensitivity to stimuli. Individuals' arousability interacts with the arousal inherent to a task/environment to create a person's arousal state. An individual's arousal state affects his/her attentional capacity, working memory function, and depth of processing. In this paper, two studies are presented. The purpose of the first study was to evaluate existing subjective measures of trait arousability and then develop a new measure by factor analyzing existing apparatus. From this well-populated (N = 622) study, a new, reliable (α = .91) 35-item scale was developed. This scale includes two factors, negative emotionality and orienting sensitivity, which had been previously theorized but not yet so reliably measured. The purposes of the second study were to (a) validate the measure developed in the first investigation and (b) demonstrate the applied value of the arousability construct in the context of training. Results from the second study (N = 45) demonstrated significant main effects, but the interaction effects were inconclusive. They neither clearly confirm nor invalidate the hypotheses, but they do raise further questions.
-
Date Issued
-
2008
-
Identifier
-
CFE0002351, ucf:47815
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002351
-
-
Title
-
Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures.
-
Creator
-
Alghareb, Faris, DeMara, Ronald, Lin, Mingjie, Zou, Changchun, Jha, Sumit Kumar, Song, Zixia, University of Central Florida
-
Abstract / Description
-
The revolution in chip manufacturing processes spanning five decades has proliferated high-performance and energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually augmented the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improve system reliability. Herein, first, we propose optimized soft-error resilience designs to improve the robustness of sub-micron computing systems. The proposed approaches deliver energy efficiency and tolerate double/multiple errors simultaneously while incurring acceptable speed degradation compared to prior work. Second, the impact of Process Variation (PV) in the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems is investigated to highlight the approach that realizes favorable attributes, such as reduced critical datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV latching circuits that achieve intriguing features, such as low energy consumption, high computing performance, and superior soft-error tolerance, i.e., the ability to concurrently tolerate Multiple Node Upsets (MNU), to potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives cooperate to increase the energy efficiency and soft-error mitigation resiliency of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable and energy-efficient CMOS logic and spintronic memory architectures with deeply scaled devices operating at low voltages.
-
Date Issued
-
2019
-
Identifier
-
CFE0007884, ucf:52765
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007884
-
-
Title
-
AUTONOMOUS ROBOTIC GRASPING IN UNSTRUCTURED ENVIRONMENTS.
-
Creator
-
Jabalameli, Amirhossein, Behal, Aman, Haralambous, Michael, Pourmohammadi Fallah, Yaser, Boloni, Ladislau, Xu, Yunjun, University of Central Florida
-
Abstract / Description
-
A crucial problem in robotics is interacting with known or novel objects in unstructured environments. While the convergence of a multitude of research advances is required to address this problem, our goal is to describe a framework that employs the robot's visual perception to identify and execute an appropriate grasp to pick and place novel objects. Analytical approaches search for solutions through kinematic and dynamic formulations. Data-driven methods, on the other hand, retrieve grasps according to prior knowledge of either the target object, human experience, or information obtained from acquired data. In this dissertation, we propose a framework based on the supporting principle that potential contact regions for a stable grasp can be found by searching for (i) sharp discontinuities and (ii) regions of locally maximal principal curvature in the depth map. In addition to suggestions from empirical evidence, we discuss this principle by applying the concepts of force closure and wrench convexes. The key point is that no prior knowledge of objects is utilized in the grasp planning process; however, the obtained results show that the approach is capable of dealing successfully with objects of different shapes and sizes. We believe that the proposed work is novel because describing the visible portion of objects by the aforementioned edges appearing in the depth map facilitates the process of grasp set-point extraction in the same way as image processing methods focused on small 2D image areas, rather than clustering and analyzing huge sets of 3D point-cloud coordinates. In fact, this approach dispenses with reconstruction of objects. These features result in low computational costs and make it possible to run the proposed algorithm in real time. Finally, the performance of the approach is successfully validated by applying it to scenes with both single and multiple objects, in both simulation and real-world experimental setups.
-
Date Issued
-
2019
-
Identifier
-
CFE0007892, ucf:52757
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007892
-
-
Title
-
Visionary Ophthalmics: Confluence of Computer Vision and Deep Learning for Ophthalmology.
-
Creator
-
Morley, Dustin, Foroosh, Hassan, Bagci, Ulas, Gong, Boqing, Mohapatra, Ram, University of Central Florida
-
Abstract / Description
-
Ophthalmology is a medical field ripe with opportunities for meaningful application of computer vision algorithms. The field utilizes data from multiple disparate imaging techniques, ranging from conventional cameras to tomography, comprising a diverse set of computer vision challenges. Computer vision has a rich history of techniques that can adequately meet many of these challenges. However, the field has undergone something of a revolution in recent times as deep learning techniques have sprung into the forefront following advances in GPU hardware. This development raises important questions regarding how best to leverage insights from both modern deep learning approaches and more classical computer vision approaches for a given problem. In this dissertation, we tackle challenging computer vision problems in ophthalmology using methods from all across this spectrum. Perhaps our most significant work is a highly successful iris registration algorithm for use in laser eye surgery. This algorithm relies on matching features extracted from the structure tensor and a Gabor wavelet, a classically driven approach that does not utilize modern machine learning. However, drawing on insight from the deep learning revolution, we demonstrate successful application of backpropagation to optimize the registration significantly faster than the alternative of relying on finite differences. Toward the other end of the spectrum, we also present a novel framework for improving RANSAC segmentation algorithms by utilizing a convolutional neural network (CNN) trained on a RANSAC-based loss function. Finally, we apply state-of-the-art deep learning methods to the problem of pathological fluid detection in optical coherence tomography images of the human retina, using a novel retina-specific data augmentation technique to greatly expand the data set. Altogether, our work demonstrates the benefits of a holistic view of computer vision, one that leverages deep learning and associated insights without neglecting techniques and insights from the previous era.
-
Date Issued
-
2018
-
Identifier
-
CFE0007058, ucf:52001
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007058
-
-
Title
-
Enhancing Cognitive Algorithms for Optimal Performance of Adaptive Networks.
-
Creator
-
Lugo-Cordero, Hector, Guha, Ratan, Wu, Annie, Stanley, Kenneth, University of Central Florida
-
Abstract / Description
-
This research proposes to enhance Evolutionary Algorithms in order to obtain optimal and adaptive network configurations. Due to their rich technologies, low cost, and range of applications, we consider Heterogeneous Wireless Mesh Networks. In particular, we evaluate the domains of Network Deployment, Smart Grids/Homes, and Intrusion Detection Systems. With an adaptive network as one of the goals, we consider a robust, noise-tolerant methodology that can quickly react to changes in the environment. Furthermore, the diversity of the performance objectives considered (e.g., power, coverage, anonymity) makes the objective function non-continuous and therefore non-differentiable. For these reasons, we enhance the Particle Swarm Optimization (PSO) algorithm with elements that aid in exploring for better configurations, obtaining both optimal and sub-optimal configurations. According to the results, the enhanced PSO promotes population diversity, leading to more unique optimal configurations for adapting to dynamic environments. The gradual complexification process yielded simpler optimal solutions than those obtained via trial and error without the enhancements.
Configurations obtained by the modified PSO are further tuned in real time when the environment changes. Such tuning occurs with a Fuzzy Logic Controller (FLC), which models human decision making by monitoring certain events in the algorithm; examples of such events include the diversity and quality of solutions in the environment. The FLC is able to adapt the enhanced PSO to changes in the environment, causing more exploration or exploitation as needed.
By adding a Probabilistic Neural Network (PNN) classifier, the enhanced PSO is also used as a filter to aid intrusion detection classification. This approach reduces misclassifications by consulting neighbors in the case of ambiguous samples. The performance of ambiguous votes under PSO filtering shows an improvement in classification, causing the simple classifier to outperform commonly used classifiers.
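The diversity-driven adaptation described in this abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the objective, swarm sizes, coefficients, and diversity threshold are invented, and a simple threshold rule stands in for the Fuzzy Logic Controller. This is not the dissertation's actual algorithm.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimal PSO with a diversity-triggered exploration boost (illustrative)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        # Swarm diversity: mean distance of particles from the swarm centroid.
        centroid = [sum(p[d] for p in pos) / n_particles for d in range(dim)]
        diversity = sum(
            sum((p[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5 for p in pos
        ) / n_particles
        # FLC stand-in: low diversity -> raise inertia to force more exploration.
        w = 0.9 if diversity < 0.5 else 0.4
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy non-smooth objective (non-differentiable, like the mixed network metrics).
sphere_abs = lambda x: sum(abs(v) for v in x)
best, val = pso(sphere_abs, dim=3)
```

The diversity signal here plays the role the abstract assigns to the FLC inputs: when the swarm collapses, the rule pushes exploration; otherwise it favors exploitation.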
-
Date Issued
-
2018
-
Identifier
-
CFE0007046, ucf:52003
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007046
-
-
Title
-
Catalyst Design and Mechanism Study with Computational Method for Small Molecule Activation.
-
Creator
-
Liu, Muqiong, Zou, Shengli, Harper, James, Dixon, Donovan, Chen, Gang, Feng, Xiaofeng, University of Central Florida
-
Abstract / Description
-
Computational chemistry is a branch of modern chemistry that uses computers to solve chemical problems. Its foundation is the Schrödinger equation; to solve it, researchers have developed many methods based on the Born-Oppenheimer approximation, such as the Hartree-Fock and DFT methods. Computational chemistry is now widely used in reaction mechanism studies and the design of new chemicals.
In the first project, described in Chapter 3, we designed phosphine oxide modified Ag3, Au3, and Cu3 nanocluster catalysts with the DFT method. We found that these catalysts were able to catalyze the activation of H2 by cleaving the H-H bond asymmetrically. The activated catalyst-2H complex can be further used as a reducing agent to hydrogenate a CO molecule to afford HCHO. The mechanism study of these catalysts showed that electron transfer from the electron-rich metal clusters to the O atom on the phosphine oxide ligand is the major driving force for H2 activation. In addition, different substituent groups on the phosphine oxide ligand were tested; both the H affinity of the metal and the substituent groups on the ligand can affect the activation energy.
The second project, described in Chapter 4, is the modelling of a catalyst with DFT. We chose the borane/NHC frustrated Lewis pair (FLP) catalyzed methane activation reaction as an example to establish a relationship between activation energy and the catalysts' physical properties. After performing the simulations, we further confirmed the well-accepted theory that electron transfer is the main driving force of catalysis. Furthermore, we were able to establish, for each borane, a linear relationship between the activation energy and the geometric mean of the HOMO/LUMO energy gap (ΔEMO). Based on that, we introduced the formation energy of the borane/NHC complex (ΔEF) and successfully established a generalized relationship between Ea and the geometric mean of ΔEMO and ΔEF. This model can be used to predict the reactivity of catalysts.
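The final step, fitting a linear relationship between activation energy and a combined descriptor, can be sketched with ordinary least squares. The descriptor values and the exactly linear "activation energies" below are synthetic placeholders, not data or results from the dissertation.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, with no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (gap, formation-energy) pairs; the descriptor is their
# geometric mean, mirroring the combined ΔEMO / ΔEF relationship.
descriptors = [(1.2, 0.8), (1.5, 1.1), (1.9, 1.3), (2.4, 1.6)]
xs = [(gap * ef) ** 0.5 for gap, ef in descriptors]
ys = [5.0 * x + 2.0 for x in xs]   # synthetic, exactly linear for the demo
slope, intercept = fit_line(xs, ys)
```

Given a fitted line, predicting the reactivity of a new catalyst reduces to computing its descriptor and evaluating `slope * x + intercept`.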
-
Date Issued
-
2018
-
Identifier
-
CFE0007343, ucf:52112
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007343
-
-
Title
-
NONDESTRUCTIVE TESTING METHODS AIDED VIA NUMERICAL COMPUTATION MODELS FOR VARIOUS CRITICAL AEROSPACE AND POWER GENERATION SYSTEMS.
-
Creator
-
Warren, Peter, Ghosh, Ranajay, Raghavan, Seetha, Gou, Jihua, University of Central Florida
-
Abstract / Description
-
A current critical necessity for all industries that utilize equipment operating in high-temperature and extreme environments is the ability to collect and analyze data via nondestructive testing (NDT) methods. Operational conditions and material health must be constantly monitored if components are to be implemented precisely to increase the overall performance and efficiency of the process. Currently, in both aerospace and power generation systems, many methods are being employed to gather the necessary properties and parameters of a given system. This work focuses primarily on two of these NDT methods, with the ultimate goal of contributing not only to the methods themselves but also to the role of numerical computation in increasing the resolution of a given technique. Numerical computation can contribute knowledge about the governing mechanics of these NDT methods, many of which are currently used in industry. An increase in the accuracy of the data gathered from NDT methods will ultimately lead to an increase in the operational efficiency of a given system.
The first method to be analyzed is a nondestructive emission technique widely referred to as acoustic ultrasonic thermography. This work investigates the mechanism of heat generation in acoustic thermography using a combination of numerical computational analysis and physical experimentation. Many of the challenges typical of this type of system are addressed, the principal ones being the crack detection threshold, signature quality, and the effect of defect interactions. Experiments and finite element based numerical simulations are employed in order to evaluate the proposed method, as well as to draw conclusions on its viability for future extension and integration with other digital technologies for health monitoring. A method to determine the magnitude of the different sources of heat generation during an acoustic excitation is also achieved in this work. Defects formed through industrial operation as well as defects formed through artificial manufacturing methods were analyzed and compared.
The second method is photoluminescence piezospectroscopy (PLPS) for composite materials. The composite studied in this work has one host material that does not illuminate or have photoluminescence properties; the second material provides the luminescence properties, as well as additional overall strength to the composite. Understanding load transfer between the reinforcements and matrix materials that constitute these composites holds the key to elucidating their mechanical properties and consequent behavior in operation. Finite element simulations of loading effects on representative embedded alumina particles in a matrix were investigated and compared with experimental results. The alumina particles were doped with chromium in order to achieve luminescence capability and thereby take advantage of the piezospectroscopic measurement technique. Mechanical loading effects on alumina nanoparticle composites can be captured with photo-stimulated luminescent spectroscopy, where spectral shifts from the particles are monitored with load. The resulting piezospectroscopic (PS) coefficients are then used to calculate load transfer between the matrix and particle. The results from the simulation and experiments are in general agreement that the load transferred increases with particle volume fraction, due to contact stresses that are dominant at higher volume fractions. Results from this work present a combination of analytical and experimental insight into the effect of particle volume fraction on load transfer in ceramic composites, which can serve to determine properties and eventually optimize parameters such as particle shape, size, and dispersion that govern the design of these composites prior to manufacture and testing.
-
Date Issued
-
2018
-
Identifier
-
CFE0007262, ucf:52203
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007262
-
-
Title
-
Bridging the Gap between Application and Solid-State-Drives.
-
Creator
-
Zhou, Jian, Wang, Jun, Lin, Mingjie, Fan, Deliang, Ewetz, Rickard, Qi, GuoJun, University of Central Florida
-
Abstract / Description
-
Data storage is one of the important and often critical parts of a computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase change memory (PCM), magnetic RAM (STT-RAM), and the memristor, have emerged recently, and many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not show simple locality. To provide optimal storage we need accurate predictions of I/O behavior. However, workloads are increasingly dynamic and diverse, making both long- and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added for NAND-flash based Solid State Disks (NAND-SSDs), but it introduces overhead such as address translation delay and garbage collection costs. Many recent studies aim to address this overhead; unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity in machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance through both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O correlation aware flash translation layer for NAND-flash based Solid State Disks. Fourth, we build a DRAM-based correlation aware FTL emulator and study its performance in various filesystems.
-
Date Issued
-
2018
-
Identifier
-
CFE0007273, ucf:52188
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007273
-
-
Title
-
An adaptive integration architecture for software reuse.
-
Creator
-
Williams, Denver Robert Edward, Orooji, Ali, Engineering and Computer Science
-
Abstract / Description
-
University of Central Florida College of Engineering Thesis; The problem of building large, reliable software systems in a controlled, cost-effective way, the so-called software crisis problem, is one of computer science's great challenges. From the very outset of computing as a science, software reuse has been touted as a means to overcome the software crisis.
-
Date Issued
-
2001
-
Identifier
-
CFR0000786, ucf:52928
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0000786
-
-
Title
-
Rethinking Routing and Peering in the era of Vertical Integration of Network Functions.
-
Creator
-
Dey, Prasun, Yuksel, Murat, Wang, Jun, Ewetz, Rickard, Zhang, Wei, Hasan, Samiul, University of Central Florida
-
Abstract / Description
-
Content providers typically control digital content consumption services and earn most of their revenue by implementing an "all-you-can-eat" model via subscription or hyper-targeted advertisements. Revamping the existing Internet architecture and design, a vertical integration in which a content provider and an access ISP act as a unibody in sugarcane form seems to be the recent trend. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. It is expected that current routing will need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves new feature development for handling traffic with reduced latency, tackling routing scalability issues in a more secure way, and offering new services at lower cost. Considering that the prices of DRAM and TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution for managing the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models, and by exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the newly emerging content-dominated sugarcane ISPs and the health of Internet economics.
To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering, from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and notifying of any violation. Meta-peering is one of the many outcroppings of the vertical integration procedure and could be offered to ISPs as a standalone service.
-
Date Issued
-
2019
-
Identifier
-
CFE0007797, ucf:52351
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007797
-
-
Title
-
Game-Theoretic Frameworks and Strategies for Defense Against Network Jamming and Collocation Attacks.
-
Creator
-
Hemida, Ahmed, Atia, George, Simaan, Marwan, Vosoughi, Azadeh, Sukthankar, Gita, Guirguis, Mina, University of Central Florida
-
Abstract / Description
-
Modern networks are becoming increasingly complex, heterogeneous, and densely connected. While more diverse services are enabled for an ever-increasing number of users through ubiquitous networking and pervasive computing, several important challenges have emerged. For example, densely connected networks are prone to higher levels of interference, which makes them more vulnerable to jamming attacks. Also, the utilization of software-based protocols to perform routing, load balancing, and power management functions in Software-Defined Networks gives rise to more vulnerabilities that could be exploited by malicious users and adversaries. Moreover, the increased reliance on cloud computing services due to a growing demand for communication and computation resources poses formidable security challenges due to the shared nature and virtualization of cloud computing. In this dissertation, we study two types of attacks: jamming attacks on wireless networks and side-channel attacks on cloud computing servers. The former disrupt normal network operation by exploiting the static topology and dynamic channel assignment in wireless networks, while the latter seek to gain access to unauthorized data by co-residing with target virtual machines (VMs) on the same physical node in a cloud server. In both attacks, the adversary faces a static attack surface and achieves her illegitimate goal by exploiting a stationary aspect of the network functionality. Hence, this dissertation proposes and develops counter-approaches to both attacks using moving target defense strategies. We study the strategic interactions between the adversary and the network administrator within a game-theoretic framework.
First, in the context of jamming attacks, we present and analyze a game-theoretic formulation between the adversary and the network defender. In this problem, the attack surface is the network connectivity (the static topology), as the adversary jams a subset of nodes to increase the level of interference in the network. On the other side, the defender makes judicious adjustments to the transmission footprint of the various nodes, thereby continuously adapting the underlying network topology to reduce the impact of the attack. The defender's strategy is based on playing Nash equilibrium strategies securing a worst-case network utility. Moreover, scalable decomposition-based approaches are developed, yielding a scalable defense strategy whose performance closely approaches that of the non-decomposed game for large-scale and dense networks. We study a class of games considering discrete as well as continuous power levels.
In the second problem, we consider multi-tenant clouds, where a number of VMs are typically collocated on the same physical machine to optimize performance and power consumption and maximize profit. This increases the risk of a malicious virtual machine performing side-channel attacks and leaking sensitive information from neighboring VMs. The attack surface in this case is the static residency of VMs on a set of physical nodes, hence we develop a timed-migration defense approach. Specifically, we analyze a timing game in which the cloud provider decides when to migrate a VM to a different physical machine to mitigate the risk of being compromised by a collocated malicious VM. The adversary decides the rate at which she launches new VMs to collocate with the victim VMs. Our formulation captures a data leakage model in which the cost incurred by the cloud provider depends on the duration of collocation with malicious VMs. It also captures the costs incurred by the adversary in launching new VMs and by the defender in migrating VMs.
We establish sufficient conditions for the existence of Nash equilibria for general cost functions, as well as for specific instantiations, and characterize the best response for both players. Furthermore, we extend our model to characterize the impact on the attacker's payoff when the cloud utilizes intrusion detection systems that detect side-channel attacks. Our theoretical findings are corroborated with extensive numerical results in various settings, as well as a proof-of-concept implementation in a realistic cloud setting.
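The timed-migration trade-off can be illustrated with a toy cost model: migrating more often costs more in migrations, but limits how long an adversary stays collocated. The Poisson-arrival assumption, the cost constants, and the grid search below are illustrative inventions, not the dissertation's formulation.

```python
def defender_cost(T, lam=0.5, c_mig=1.0, c_leak=4.0):
    """Expected cost per unit time for a defender who migrates every T units.

    Illustrative model: attacker collocations arrive as a Poisson process
    with rate `lam`; once collocated, the adversary leaks data at cost rate
    `c_leak` until the next migration. A collocation arriving uniformly
    within a period stays collocated T/2 on average.
    """
    migration_rate_cost = c_mig / T            # amortized migration cost
    expected_leakage = lam * (T / 2) * c_leak  # expected leakage per unit time
    return migration_rate_cost + expected_leakage

# Defender best response: grid-search the migration period.
candidates = [0.1 * k for k in range(1, 101)]
T_star = min(candidates, key=defender_cost)
```

With these constants the two terms balance at T = 1: migrating faster wastes migration cost, migrating slower lets leakage dominate, which is the shape of trade-off a timing-game best response captures.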
-
Date Issued
-
2019
-
Identifier
-
CFE0007468, ucf:52677
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007468
-
-
Title
-
An Engineering Analytics Based Framework for Computational Advertising Systems.
-
Creator
-
Chen, Mengmeng, Rabelo, Luis, Lee, Gene, Keathley, Heather, Rahal, Ahmad, University of Central Florida
-
Abstract / Description
-
Engineering analytics is a multifaceted landscape with a diversity of analytics tools drawn from emerging fields such as big data and machine learning as well as traditional operations research. Industrial engineering is capable of optimizing complex processes and systems using engineering analytics elements together with traditional components such as total quality management. This dissertation has proven that industrial engineering using engineering analytics can optimize the emerging area of Computational Advertising. The key was to know the different fields well and make the right selections: one must first understand, and become expert in, the flow of the complex Computational Advertising application; then, based on the characteristics of each step, map the right field of engineering analytics or traditional industrial engineering; and finally build the apparatus and apply it to the problem in question.
This dissertation consists of four research papers addressing the development of a framework to tame the complexity of computational advertising and improve its usage efficiency from an advertiser's viewpoint. This new framework and its respective systems architecture combine the use of support vector machines, Recurrent Neural Networks, Deep Learning Neural Networks, traditional neural networks, Game Theory/Auction Theory with Generative Adversarial Networks, and Web Engineering to optimize the computational advertising bidding process and achieve a higher rate of return. The system is validated with an actual case study with commercial providers such as Google AdWords and an advertiser's budget of several million dollars.
-
Date Issued
-
2018
-
Identifier
-
CFE0007319, ucf:52118
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007319
-
-
Title
-
Automated Synthesis of Memristor Crossbar Networks.
-
Creator
-
Chakraborty, Dwaipayan, Jha, Sumit Kumar, Leavens, Gary, Ewetz, Rickard, Valliyil Thankachan, Sharma, Xu, Mengyu, University of Central Florida
-
Abstract / Description
-
The advancement of semiconductor device technology over the past decades has enabled the design of increasingly complex electrical and computational machines. Electronic design automation (EDA) has played a significant role in the design and implementation of transistor-based machines. However, as transistors move closer toward their physical limits, the speed-up provided by Moore's law will grind to a halt. Once again, we find ourselves on the verge of a paradigm shift in the computational sciences, as newer devices pave the way for novel approaches to computing. One such device is the memristor, a resistor with non-volatile memory. Memristors can be used as junctional switches in crossbar circuits, which comprise intersecting sets of vertical and horizontal nanowires. The major contribution of this dissertation lies in automating the design of such crossbar circuits, doing a new kind of EDA for a new kind of computational machinery. In general, this dissertation attempts to answer the following questions:
a. How can we synthesize crossbars for computing large Boolean formulas, up to 128-bit?
b. How can we synthesize more compact crossbars for small Boolean formulas, up to 8-bit?
c. For a given loop-free C program doing integer arithmetic, is it possible to synthesize an equivalent crossbar circuit?
We have presented novel solutions to each of the above problems. Our proposed solutions resolve a number of significant bottlenecks in existing research via innovative logic representations and artificial intelligence techniques. For large Boolean formulas (up to 128-bit), we have utilized Reduced Ordered Binary Decision Diagrams (ROBDDs) to automatically synthesize linearly growing crossbar circuits that compute them. This cutting-edge approach to flow-based computing has yielded state-of-the-art results and is scalable to n-bit Boolean formulas.
We have made significant original contributions by leveraging artificial intelligence for the automatic synthesis of compact crossbar circuits. This method has been extended to encompass crossbar networks with 1D1M (1-diode-1-memristor) switches as well. The resulting circuits satisfy the tight constraints of the Feynman Grand Prize challenge and are able to perform 8-bit binary addition. A leading-edge development for end-to-end computation with flow-based crossbars has also been implemented, methodically translating loop-free C programs into crossbar circuits via automated synthesis. The original contributions described in this dissertation reflect the substantial progress we have made in the area of electronic design automation for the synthesis of memristor crossbar networks.
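The flow-based computing idea, in which a formula evaluates to true exactly when current can flow through ON memristors from an input wordline to an output wordline, can be sketched as a reachability check over the crossbar graph. The encoding below (and the two-literal AND example) is a hypothetical illustration, not the dissertation's synthesis procedure.

```python
from collections import deque

def crossbar_evaluates_true(states):
    """Return True when current can flow from row 0 (input wordline) to the
    last row (output wordline) through memristors in the ON state.

    `states[i][j]` is True when the memristor joining row i and column j is ON.
    Reachability is checked with a BFS over the bipartite row/column graph.
    """
    rows, cols = len(states), len(states[0])
    seen_rows, seen_cols = {0}, set()
    frontier = deque([("row", 0)])
    while frontier:
        kind, idx = frontier.popleft()
        if kind == "row":
            if idx == rows - 1:
                return True          # current reached the output wordline
            for j in range(cols):
                if states[idx][j] and j not in seen_cols:
                    seen_cols.add(j)
                    frontier.append(("col", j))
        else:
            for i in range(rows):
                if states[i][idx] and i not in seen_rows:
                    seen_rows.add(i)
                    frontier.append(("row", i))
    return False

# AND of two literals a, b: the only input-to-output path crosses both
# a-controlled and b-controlled switches, so it conducts only when both are ON.
def and_gate(a, b):
    return crossbar_evaluates_true([[a, False], [a, b], [False, b]])
```

Synthesis, the hard problem the dissertation addresses, is the inverse task: choosing which memristor positions each literal controls so that conduction paths exist exactly for satisfying assignments.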
-
Date Issued
-
2019
-
Identifier
-
CFE0007609, ucf:52528
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007609
-
-
Title
-
PERCEPTIONS OF REALITY.
-
Creator
-
Dombrowski, Matthew, Hall, Scott, University of Central Florida
-
Abstract / Description
-
My thesis explores the relationship between the human psyche and the perception of reality through the use of computer-generated media. In a society in which we are bombarded with multimedia technology, we must look inside ourselves for a true understanding of our past and memories. Rather than acting as an escape from reality, my art becomes an opening for truth in reality.
-
Date Issued
-
2008
-
Identifier
-
CFE0002103, ucf:52847
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002103