Current Search: Wiegand, Rudolf
- Title
- Analysis of Remote Tripping Command Injection Attacks in Industrial Control Systems Through Statistical and Machine Learning Methods.
- Creator
-
Timm, Charles, Caulkins, Bruce, Wiegand, Rudolf, Lathrop, Scott, University of Central Florida
- Abstract / Description
-
In the past decade, cyber operations have been increasingly utilized to further policy goals of state-sponsored actors to shift the balance of politics and power on a global scale. One of the ways this has been evidenced is through the exploitation of electric grids via cyber means. A remote tripping command injection attack is one of the types of attacks that could have devastating effects on the North American power grid. To better understand these attacks and create detection axioms to both quickly identify and mitigate the effects of a remote tripping command injection attack, a dataset comprised of 128 variables (primarily synchrophasor measurements) was analyzed via statistical methods and machine learning algorithms in RStudio and WEKA software respectively. While statistical methods were not successful due to the non-linearity and complexity of the dataset, machine learning algorithms surpassed accuracy metrics established in previous research given a simplified dataset of the specified attack and normal operational data. This research allows future cybersecurity researchers to better understand remote tripping command injection attacks in comparison to normal operational conditions. Further, an incorporation of the analysis has the potential to increase detection and thus mitigate risk to the North American power grid in future work.
- Date Issued
- 2018
- Identifier
- CFE0007257, ucf:52193
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007257
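The entry above describes training machine learning classifiers to separate remote-tripping attacks from normal operation. A minimal sketch of that kind of supervised classification, using a random forest on synthetic stand-in data (the actual 128-variable synchrophasor dataset, its labels, and the WEKA algorithm choices are not reproduced here):

```python
# Hypothetical sketch: binary attack/normal classification on synthetic data
# shaped like the 128-variable dataset described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))            # stand-in synchrophasor measurements
y = rng.integers(0, 2, size=1000)           # 1 = remote tripping attack, 0 = normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```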
- Title
- Necessary Conditions for Open-Ended Evolution.
- Creator
-
Soros, Lisa, Stanley, Kenneth, Gonzalez, Avelino, Wiegand, Rudolf, Cash, Mason, University of Central Florida
- Abstract / Description
-
Evolution on Earth is widely considered to be an effectively endless process. Though this phenomenon of open-ended evolution (OEE) has been a topic of interest in the artificial life community since its beginnings, the field still lacks an empirically validated theory of what exactly is necessary to reproduce the phenomenon in general (including in domains quite unlike Earth). This dissertation (1) enumerates a set of conditions hypothesized to be necessary for OEE in addition to (2) introducing an artificial life world called Chromaria that incorporates each of the hypothesized necessary conditions. It then (3) describes a set of experiments with Chromaria designed to empirically validate the hypothesized necessary conditions. Thus, this dissertation describes the first scientific endeavor to systematically test an OEE framework in an alife world and thereby make progress towards solving an open question not just for evolutionary computation and artificial life, but for science in general.
- Date Issued
- 2018
- Identifier
- CFE0007247, ucf:52205
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007247
- Title
- Training Neural Networks Through the Integration of Evolution and Gradient Descent.
- Creator
-
Morse, Gregory, Stanley, Kenneth, Wu, Annie, Shah, Mubarak, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
Neural networks have achieved widespread adoption due to both their applicability to a wide range of problems and their success relative to other machine learning algorithms. The training of neural networks is achieved through any of several paradigms, most prominently gradient-based approaches (including deep learning), but also through up-and-coming approaches like neuroevolution. However, while both of these neural network training paradigms have seen major improvements over the past decade, little work has been invested in developing algorithms that incorporate the advances from both deep learning and neuroevolution. This dissertation introduces two new algorithms that are steps towards the integration of gradient descent and neuroevolution for training neural networks. The first is (1) the Limited Evaluation Evolutionary Algorithm (LEEA), which implements a novel form of evolution where individuals are partially evaluated, allowing rapid learning and enabling the evolutionary algorithm to behave more like gradient descent. This conception provides a critical stepping stone to future algorithms that more tightly couple evolutionary and gradient descent components. The second major algorithm (2) is Divergent Discriminative Feature Accumulation (DDFA), which combines a neuroevolution phase, where features are collected in an unsupervised manner, with a gradient descent phase for fine tuning of the neural network weights. The neuroevolution phase of DDFA utilizes an indirect encoding and novelty search, which are sophisticated neuroevolution components rarely incorporated into gradient descent-based systems. Further contributions of this work that build on DDFA include (3) an empirical analysis to identify an effective distance function for novelty search in high dimensions and (4) the extension of DDFA for the purpose of discovering convolutional features. The results of these DDFA experiments together show that DDFA discovers features that are effective as a starting point for gradient descent, with significant improvement over gradient descent alone. Additionally, the method of collecting features in an unsupervised manner allows DDFA to be applied to domains with abundant unlabeled data and relatively sparse labeled data. This ability is highlighted in the STL-10 domain, where DDFA is shown to make effective use of unlabeled data.
- Date Issued
- 2019
- Identifier
- CFE0007840, ucf:52819
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007840
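The Limited Evaluation Evolutionary Algorithm summarized above rests on partial evaluation: each generation scores individuals on a small minibatch and blends that score with an inherited fitness estimate, making evolution behave more like gradient descent. A toy numpy sketch of that idea on a linear regression task; the population size, decay factor, and operators are invented for illustration, and the dissertation's actual algorithm differs in detail:

```python
# Toy LEEA-style loop: each generation evaluates the population on one small
# minibatch and blends that score with an inherited fitness estimate.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 4))                       # toy regression data
y = X @ np.array([1.5, -2.0, 0.5, 3.0])             # hidden target weights

POP, BATCH, DECAY = 50, 16, 0.8
pop = rng.normal(size=(POP, 4))                     # "networks": linear weight vectors
fit = np.zeros(POP)                                 # running fitness estimates

for gen in range(300):
    idx = rng.choice(len(X), BATCH, replace=False)  # partial evaluation: one minibatch
    err = ((X[idx] @ pop.T - y[idx, None]) ** 2).mean(axis=0)
    fit = DECAY * fit + (1 - DECAY) * (-err)        # blend with inherited estimate
    order = np.argsort(fit)[-POP // 2:]             # truncation selection
    parents, pfit = pop[order], fit[order]
    children = parents + rng.normal(scale=0.05, size=parents.shape)
    pop, fit = np.vstack([parents, children]), np.concatenate([pfit, pfit])

print("best weight vector:", np.round(pop[np.argmax(fit)], 2))
```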
- Title
- Applied Software Tools for Supporting Children with Intellectual Disabilities.
- Creator
-
Abualsamid, Ahmad, Hughes, Charles, Dieker, Lisa, Sims, Valerie, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
We explored the level of technology utilization in supporting children with cognitive disabilities at schools, speech clinics, and with assistive communication at home. Anecdotal evidence, literature research, and our own survey of special needs educators in Central Florida reveal that use of technology is minimal in classrooms for students with special needs even when scientific research has shown the effectiveness of video modeling in teaching children with special needs new skills and behaviors. Research also shows that speech and language therapists utilize a manual approach to elicit and analyze language samples from children with special needs. While technology is utilized in augmentative and alternative communication, many caregivers utilize paper-based picture exchange systems, storyboards, and daily schedules when assisting their children with their communication needs. We developed and validated three software frameworks to aid language therapists, teachers, and caregivers in supporting children with cognitive disabilities and related special needs. The Analysis of Social Discourse Framework proposes that language therapists use social media discourse instead of direct elicitation of language samples. The framework presents an easy-to-use approach to analyzing language samples based on natural language processing. We validated the framework by analyzing public social discourse from three unrelated sources. The Applied Interventions for eXceptional-needs (AIX) framework allows classroom teachers to implement and track interventions using easy-to-use smartphone applications. We validated the framework by conducting a sixteen-week pilot case study in a school for students with special needs in Central Florida. The Language Enhancements for eXceptional Youth (LEXY) framework allows for the development of a new class of augmentative and alternative communication tools that are based on conversational chatbots that assist children with special needs while utilizing a model of the world curated by their caregivers. We validated the framework by simulating an interaction between a prototype chatbot that we developed, a child with special needs, and the child's caregiver.
- Date Issued
- 2018
- Identifier
- CFE0006964, ucf:52908
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006964
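The entry above describes automating language-sample analysis. As a toy illustration of one classic language-sample measure (mean length of utterance), gesturing at the kind of metric such analysis produces; the framework's actual NLP pipeline is far richer and is not reproduced here:

```python
# Toy language-sample measure: mean length of utterance (MLU) in words.
def mean_length_of_utterance(utterances):
    tokenized = [u.split() for u in utterances if u.strip()]
    return sum(len(t) for t in tokenized) / len(tokenized)

sample = ["want juice", "mommy go store", "I want the red ball"]
print(f"MLU = {mean_length_of_utterance(sample):.2f} words")   # 3.33
```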
- Title
- Evolution Through the Search for Novelty.
- Creator
-
Lehman, Joel, Stanley, Kenneth, Gonzalez, Avelino, Wiegand, Rudolf, Hoffman, Eric, University of Central Florida
- Abstract / Description
-
I present a new approach to evolutionary search called novelty search, wherein only behavioral novelty is rewarded, thereby abstracting evolution as a search for novel forms. This new approach contrasts with the traditional approach of rewarding progress towards the objective through an objective function. Although they are designed to light a path to the objective, objective functions can instead deceive search into converging to dead ends called local optima. As a significant problem in evolutionary computation, deception has inspired many techniques designed to mitigate it. However, nearly all such methods are still ultimately susceptible to deceptive local optima because they still measure progress with respect to the objective, which this dissertation will show is often a broken compass. Furthermore, although novelty search completely abandons the objective, it counterintuitively often outperforms methods that search directly for the objective in deceptive tasks and can induce evolutionary dynamics closer in spirit to natural evolution. The main contributions are to (1) introduce novelty search, an example of an effective search method that is not guided by actively measuring or encouraging objective progress; (2) validate novelty search by applying it to biped locomotion; (3) demonstrate novelty search's benefits for evolvability (i.e. the ability of an organism to further evolve) in a variety of domains; (4) introduce an extension of novelty search called minimal criteria novelty search that brings a new abstraction of natural evolution to evolutionary computation (i.e. evolution as a search for many ways of meeting the minimal criteria of life); (5) present a second extension of novelty search called novelty search with local competition that abstracts evolution instead as a process driven towards diversity with competition playing a subservient role; and (6) evolve a diversity of functional virtual creatures in a single run as a culminating application of novelty search with local competition. Overall these contributions establish novelty search as an important new research direction for the field of evolutionary computation.
- Date Issued
- 2012
- Identifier
- CFE0004398, ucf:49390
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004398
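The core mechanism of the novelty search described above can be stated compactly: an individual's novelty is its mean distance to the k nearest neighbors in behavior space, measured against the current population plus an archive of past novel behaviors. A minimal sketch; the behavior characterization, k, and archive policy below are illustrative choices:

```python
# Illustrative novelty metric: reward behavioral distance, not objective progress.
import numpy as np

def novelty(b, others, k=5):
    """Mean distance from behavior b to its k nearest neighbors in `others`."""
    d = np.linalg.norm(others - b, axis=1)
    return np.sort(d)[:k].mean()

rng = np.random.default_rng(2)
population = rng.uniform(size=(30, 2))      # e.g., final (x, y) of each individual
archive = np.empty((0, 2))                  # past novel behaviors

scores = [novelty(b, np.vstack([np.delete(population, i, 0), archive]))
          for i, b in enumerate(population)]
# the most novel behaviors enter the archive; selection then favors high novelty
archive = np.vstack([archive, population[np.argsort(scores)[-3:]]])
print("novelty scores:", np.round(scores, 3))
```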
- Title
- Towards Evolving More Brain-Like Artificial Neural Networks.
- Creator
-
Risi, Sebastian, Stanley, Kenneth, Hughes, Charles, Sukthankar, Gita, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
An ambitious long-term goal for neuroevolution, which studies how artificial evolutionary processes can be driven to produce brain-like structures, is to evolve neurocontrollers with a high density of neurons and connections that can adapt and learn from past experience. Yet while neuroevolution has produced successful results in a variety of domains, the scale of natural brains remains far beyond reach. In this dissertation two extensions to the recently introduced Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach are presented that are a step towards more brain-like artificial neural networks (ANNs). First, HyperNEAT is extended to evolve plastic ANNs that can learn from past experience. This new approach, called adaptive HyperNEAT, allows not only patterns of weights across the connectivity of an ANN to be generated by a function of its geometry, but also patterns of arbitrary local learning rules. Second, evolvable-substrate HyperNEAT (ES-HyperNEAT) is introduced, which relieves the user from deciding where the hidden nodes should be placed in a geometry that is potentially infinitely dense. This approach not only can evolve the location of every neuron in the network, but also can represent regions of varying density, which means resolution can increase holistically over evolution. The combined approach, adaptive ES-HyperNEAT, unifies for the first time in neuroevolution the abilities to indirectly encode connectivity through geometry, generate patterns of heterogeneous plasticity, and simultaneously encode the density and placement of nodes in space. The dissertation culminates in a major application domain that takes a step towards the general goal of adaptive neurocontrollers for legged locomotion.
- Date Issued
- 2012
- Identifier
- CFE0004287, ucf:49477
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004287
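The indirect encoding that HyperNEAT and the extensions above rely on can be illustrated in a few lines: connection weights are not stored individually but generated by querying a function of neuron geometry. Here a fixed toy function stands in for the evolved CPPN; the real system evolves that function and, in ES-HyperNEAT, the node locations too:

```python
# Sketch of indirect encoding: weights come from querying a function of the
# source and target neuron coordinates.
import numpy as np

def cppn(x1, y1, x2, y2):
    # Stand-in for an evolved network; real CPPNs compose functions such as
    # sine and Gaussian discovered by evolution.
    return np.sin(3 * (x1 - x2)) * np.exp(-(y1 - y2) ** 2)

grid = np.linspace(-1, 1, 3)
coords = [(x, y) for x in grid for y in grid]       # substrate neuron positions
W = np.array([[cppn(*a, *b) for b in coords] for a in coords])
print(W.shape)   # (9, 9): a full weight matrix generated from geometry alone
```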
- Title
- Application of Modeling and Simulation to Reduce Costs of Acquisition within Triple Constraints.
- Creator
-
Mohammad, Syed, Kincaid, John, Shumaker, Randall, Wiegand, Rudolf, Richardson, Paul, University of Central Florida
- Abstract / Description
-
A key component of defense acquisition programs operating using the Integrated Defense Acquisition, Technology, and Logistics Life Cycle Management System is the reliance on the triple constraints of cost, schedule, and performance. While the use of Modeling and Simulation tools and capabilities is prevalent and well established in the Research and Development, Analysis, and Training domains, acquisition programs have been reluctant to use Modeling and Simulation in any great depth due to inaccessibility of tools, Subject Matter Experts, and implications to cost and schedule. This presents a unique Simulation Management challenge which requires an in-depth understanding of the technical capabilities available within an organization, their applicability to support immediate needs, and the flexibility to utilize these capabilities within the programmatic environment to provide a value-added service. The focus of this dissertation is to study the use of Modeling and Simulation in the Defense arena, and to review the applicability of Modeling and Simulation within programmatic acquisition environments which are constrained by cost, schedule, and performance. This research draws comparisons between Modeling and Simulation and other Process Improvement initiatives, such as Lean and Six Sigma, and reviews case studies involving the application of Modeling and Simulation within triple-constrained environments. The development of alternate scenarios allows cost benefit analysis to be conducted for each scenario and alternate scenario, developing a case for whether or not the application of Modeling and Simulation within the triple-constrained environment delivered any consequential benefit to the acquisition process. Observations are made regarding the level of Modeling and Simulation as applied within each case study, and generalized recommendations are made for the inclusion of cost benefit analysis methodologies for analyzing proposed Modeling and Simulation activities within acquisition programs. Limitations and shortcomings of the research activity are discussed, along with recommendations for potential future work in the Simulation Management field, both with respect to the specific case studies reviewed in this study and the general field.
- Date Issued
- 2012
- Identifier
- CFE0004415, ucf:49396
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004415
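The cost-benefit analyses the entry above applies to candidate M&S activities reduce, at their simplest, to comparing discounted benefits against discounted costs. A sketch with invented cash flows and discount rate, purely to show the arithmetic; none of these figures come from the study:

```python
# Hypothetical benefit-cost arithmetic for an M&S investment.
def present_value(flows, rate=0.07):
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

costs = [1.2, 0.1, 0.1, 0.1, 0.1]      # $M: tool purchase, then yearly upkeep
benefits = [0.0, 0.5, 0.5, 0.5, 0.5]   # $M: avoided live-test costs per year

bcr = present_value(benefits) / present_value(costs)
print(f"benefit-cost ratio: {bcr:.2f}")   # > 1 suggests the investment pays off
```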
- Title
- A Posteriori and Interactive Approaches for Decision-Making with Multiple Stochastic Objectives.
- Creator
-
Bakhsh, Ahmed, Geiger, Christopher, Mollaghasemi, Mansooreh, Xanthopoulos, Petros, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
Computer simulation is a popular method that is often used as a decision support tool in industry to estimate the performance of systems too complex for analytical solutions. It is a tool that assists decision-makers to improve organizational performance and achieve performance objectives, in which simulated conditions can be randomly varied so that critical situations can be investigated without real-world risk. Due to the stochastic nature of many of the input process variables in simulation models, the outputs from simulation model experiments are random. Thus, experimental runs of computer simulations yield only estimates of the values of performance objectives, where these estimates are themselves random variables. Most real-world decisions involve the simultaneous optimization of multiple, and often conflicting, objectives. Researchers and practitioners use various approaches to solve these multiobjective problems. Many approaches that integrate simulation models with stochastic multi-objective optimization algorithms have been proposed, many of which use Pareto-based approaches that generate a finite set of compromise, or tradeoff, solutions. Nevertheless, identification of the most preferred solution can be a daunting task to the decision-maker and is an order of magnitude harder in the presence of stochastic objectives. However, to the best of this researcher's knowledge, there have been no focused efforts to reduce the number of tradeoff solutions while considering the stochastic nature of a set of objective functions. In this research, two approaches that consider multiple stochastic objectives when reducing the set of the tradeoff solutions are designed and proposed. The first proposed approach is an a posteriori approach, which uses a given set of Pareto optima as input. The second approach is an interactive-based approach that articulates decision-maker preferences during the optimization process. A detailed description of both approaches is given, and computational studies are conducted to evaluate the efficacy of the two approaches. The computational results show the promise of the proposed approaches, in that each approach effectively reduces the set of compromise solutions to a reasonably manageable size for the decision-maker. This is a significant step beyond current applications of the decision-making process in the presence of multiple stochastic objectives and should serve as an effective approach to support decision-making under uncertainty.
- Date Issued
- 2013
- Identifier
- CFE0004973, ucf:49574
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004973
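For context on the entry above: the Pareto-based approaches it builds on retain every non-dominated solution, and the dissertation's contribution is shrinking that set further when objectives are stochastic. A sketch of plain Pareto-dominance filtering for two minimized objectives; the data and objective count are invented:

```python
# Plain Pareto-dominance filter for two minimized objectives.
import numpy as np

def pareto_front(F):
    keep = []
    for i, f in enumerate(F):
        dominated = any(np.all(g <= f) and np.any(g < f)
                        for j, g in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return F[keep]

F = np.random.default_rng(3).uniform(size=(50, 2))   # 50 candidates, 2 objectives
print(len(pareto_front(F)), "non-dominated solutions out of", len(F))
```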
- Title
- Federal, State and Local Law Enforcement Agency Interoperability Capabilities and Cyber Vulnerabilities.
- Creator
-
Trapnell, Tyrone, Caulkins, Bruce, Wiegand, Rudolf, Bockelman, Patricia, Canham, Matthew, University of Central Florida
- Abstract / Description
-
The National Data Exchange (N-DEx) System is the central informational hub located at the Federal Bureau of Investigation (FBI). Its purpose is to provide network subscriptions to all Federal, state and local level law enforcement agencies while increasing information collaboration across all domains. The National Data Exchange users must satisfy the Advanced Permission Requirements, confirming the terms of N-DEx information use, and the Verification Requirement (verifying the completeness, timeliness, accuracy, and relevancy of N-DEx information) through coordination with the record-owning agency (Management, 2018). A network infection model is proposed to simulate the spread impact of various cyber-attacks within Federal, state and local level law enforcement networks that are linked together through the topologies merging with the National Data Exchange (N-DEx) System as the ability to manipulate the live network is limited. The model design methodology is conducted in a manner that creates a level of organization from the state level to the local level of law enforcement agencies allowing for each organizational infection probability to be calculated and entered, thus making the model very specific in nature for determining spread or outbreaks of cyber-attacks among law enforcement agencies at all levels. This research will enable future researchers to further develop a model that is capable of detecting weak points within an information structure when multiple topologies merge, allowing for more secure operations among law enforcement networks.
- Date Issued
- 2019
- Identifier
- CFE0007543, ucf:52621
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007543
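A toy version of the layered infection model described above: agencies are nodes linked along the federal-to-state-to-local hierarchy, and each organization carries its own infection probability. All names and probabilities below are invented placeholders, not values from the thesis:

```python
# Invented topology and probabilities, purely to show the spread mechanics.
import random

random.seed(4)
links = {"N-DEx hub": ["state A", "state B"],
         "state A": ["local A1", "local A2"],
         "state B": ["local B1"]}
p_infect = {"state A": 0.4, "state B": 0.2,
            "local A1": 0.6, "local A2": 0.5, "local B1": 0.3}

infected, frontier = {"N-DEx hub"}, ["N-DEx hub"]
while frontier:                              # propagate outward from the hub
    node = frontier.pop()
    for nbr in links.get(node, []):
        if nbr not in infected and random.random() < p_infect[nbr]:
            infected.add(nbr)
            frontier.append(nbr)
print("compromised agencies:", sorted(infected))
```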
- Title
- A Review and Selective Analysis of 3D Display Technologies for Anatomical Education.
- Creator
-
Hackett, Matthew, Proctor, Michael, Allen, Christine, Wiegand, Rudolf, Sims, Valerie, University of Central Florida
- Abstract / Description
-
The study of anatomy is complex and difficult for students in both graduate and undergraduate education. Researchers have attempted to improve anatomical education with the inclusion of three-dimensional visualization, with the prevailing finding that 3D is beneficial to students. However, there is limited research on the relative efficacy of different 3D modalities, including monoscopic, stereoscopic, and autostereoscopic displays. This study analyzes educational performance, confidence, cognitive load, visual-spatial ability, and technology acceptance in participants using autostereoscopic 3D visualization (holograms), monoscopic 3D visualization (3DPDFs), and a control visualization (2D printed images). Participants were randomized into three treatment groups: holograms (n=60), 3DPDFs (n=60), and printed images (n=59). Participants completed a pre-test followed by a self-study period using the treatment visualization. Immediately following the study period, participants completed the NASA TLX cognitive load instrument, a technology acceptance instrument, visual-spatial ability instruments, a confidence instrument, and a post-test. Post-test results showed the hologram treatment group (Mdn=80.0) performed significantly better than both 3DPDF (Mdn=66.7, p=.008) and printed images (Mdn=66.7, p=.007). Participants in the hologram and 3DPDF treatment groups reported lower cognitive load compared to the printed image treatment (p < .01). Participants also responded more positively towards the holograms than printed images (p < .001). Overall, the holograms demonstrated significant learning improvement over printed images and monoscopic 3DPDF models. This finding suggests additional depth cues from holographic visualization, notably head-motion parallax and stereopsis, provide substantial benefit towards understanding spatial anatomy. The reduction in cognitive load suggests monoscopic and autostereoscopic 3D may utilize the visual system more efficiently than printed images, thereby reducing mental effort during the learning process. Finally, participants reported positive perceptions of holograms suggesting implementation of holographic displays would be met with enthusiasm from student populations. These findings highlight the need for additional studies regarding the effect of novel 3D technologies on learning performance.
- Date Issued
- 2018
- Identifier
- CFE0007569, ucf:52571
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007569
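The entry above reports median post-test comparisons between treatment groups. A nonparametric comparison of that general kind could be run as below, with synthetic score distributions standing in for the study's data (the abstract does not name the exact test used):

```python
# Synthetic stand-in for a median comparison between two treatment groups;
# group sizes mirror the abstract, score distributions are invented.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
hologram = rng.normal(78, 10, 60)    # hypothetical post-test scores
printed = rng.normal(67, 10, 59)

u, p = mannwhitneyu(hologram, printed, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.4f}")
```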
- Title
- Team Interaction Dynamics during Collaborative Problem Solving.
- Creator
-
Wiltshire, Travis, Fiore, Stephen, Jentsch, Florian, Salas, Eduardo, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
This dissertation contributes an enhanced understanding of team cognition, in general, and collaborative problem solving (CPS), specifically, through an integration of methods that measure team interaction dynamics and knowledge building as it occurs during a complex CPS task. The need for better understanding CPS has risen in prominence as many organizations have increasingly worked to address complex problems requiring the combination of diverse sets of individual expertise to achieve solutions for novel problems. Towards this end, the present research drew from theoretical and empirical work on Macrocognition in Teams that describes the knowledge coordination arising from team communications during CPS. It built from this by incorporating the study of team interaction during complex collaborative cognition. Interaction between team members in such contexts has proven to be inherently dynamic and exhibiting nonlinear patterns not accounted for by extant research methods. To redress this gap, the present research drew from work in cognitive science designed to study social and team interaction as a nonlinear dynamical system. CPS was examined by studying knowledge building and interaction processes of 43 dyads working on NASA's Moonbase Alpha simulation, a CPS task. Both non-verbal and verbal interaction dynamics were examined. Specifically, frame-differencing, an automated video analysis technique, was used to capture the bodily movements of participants and content coding was applied to the teams' communications to characterize their CPS processes. A combination of linear (i.e., multiple regression, t-test, and time-lagged cross-correlation analysis), as well as nonlinear analytic techniques (i.e., recurrence quantification analysis; RQA) were applied. In terms of the predicted interaction dynamics, it was hypothesized that teams would exhibit synchronization in their bodily movements and complementarity in their communications and further, that teams more strongly exhibiting these forms of coordination will produce better problem solving outcomes. Results showed that teams did exhibit a pattern of bodily movements that could be characterized as synchronized, but higher synchronization was not systematically related to performance. Further, results showed that teams did exhibit communicative interaction that was complementary, but this was not predictive of better problem solving performance. Several exploratory research questions were proposed as a way of refining the application of these techniques to the investigation of CPS. Results showed that semantic code-based communications time-series and %REC and ENTROPY recurrence-based measures were most sensitive to differences in performance. Overall, this dissertation adds to the scientific body of knowledge by advancing theory and empirical knowledge on the forms of verbal and non-verbal team interaction during CPS, but future work remains to be conducted to identify the relationship between interaction dynamics and CPS performance.
- Date Issued
- 2015
- Identifier
- CFE0005907, ucf:50867
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005907
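One of the recurrence-based measures the entry above found sensitive to performance, %REC, is the percentage of recurrent points in a recurrence matrix. A minimal sketch on a one-dimensional stand-in for the frame-differenced movement series; the radius and normalization are illustrative choices that real analyses tune per time series:

```python
# %REC: share of off-diagonal point pairs whose distance is within a radius.
import numpy as np

def percent_recurrence(series, radius=0.1):
    d = np.abs(series[:, None] - series[None, :])   # pairwise distances
    R = d < radius
    np.fill_diagonal(R, False)                      # exclude self-matches
    n = len(series)
    return 100.0 * R.sum() / (n * n - n)

signal = np.sin(np.linspace(0, 8 * np.pi, 400))     # stand-in movement series
print(f"%REC = {percent_recurrence(signal):.1f}")
```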
- Title
- An Exploratory Comparison of a Traditional and an Adaptive Instructional Approach for College Algebra.
- Creator
-
Kasha, Ryan, Kincaid, John, Wiegand, Rudolf, Hartshorne, Richard, Morris, Cliff, University of Central Florida
- Abstract / Description
-
This research effort compared student learning gains and attitudinal changes through the implementation of two varying instructional approaches on the topic of functions in College Algebra. Attitudinal changes were measured based on the Attitude Towards Mathematics Inventory (ATMI). The ATMI also provided four sub-scales scores for self-confidence, value of learning, enjoyment, and motivation. Furthermore, this research explored and compared relationships between students' level of mastery and their actual level of learning. This study implemented a quasi-experimental research design using a sample that consisted of 56 College Algebra students in a public, state college in Florida. The sample was enrolled in one of two College Algebra sections, in which one section followed a self-adaptive instructional approach using ALEKS (Assessment and Learning in Knowledge Space) and the other section followed a traditional approach using MyMathLab. Learning gains in each class were measured as the difference between the pre-test and post-test scores on the topic of functions in College Algebra. Attitude changes in each class were measured as the difference between the holistic scores on the ATMI, as well as each of the four sub-scale scores, which was administered once in the beginning of the semester and again after the unit of functions, approximately eight weeks into the course. Utilizing an independent t-test, results indicated that there was not a significant difference in actual learning gains for the compared instructional approaches. Additionally, independent t-test results indicated that there was not a statistical difference for attitude change holistically and on each of the four sub-scales for the compared instructional approaches. However, correlational analyses revealed a strong relationship between students' level of mastery learning and their actual learning level for each class with the self-adaptive instructional approach having a stronger correlation than the non-adaptive section, as measured by an r-to-z Fisher transformation test. The results of this study indicate that the self-adaptive instructional approach using ALEKS could more accurately report students' true level of learning compared to a non-adaptive instructional approach. Overall, this study found the compared instructional approaches to be equivalent in terms of learning and effect on students' attitude. While not statistically different, the results of this study have implications for math educators, instructional designers, and software developers. For example, a non-adaptive instructional approach can be equivalent to a self-adaptive instructional approach in terms of learning with appropriate planning and design. Future recommendations include further case studies of self-adaptive technology in developmental and college mathematics in other modalities such as hybrid or on-line courses. Also, this study should be replicated on a larger scale with other self-adaptive math software in addition to focusing on other student populations, such as K-12. There is much potential for intelligent tutoring to supplement different instructional approaches, but should not be viewed as a replacement for teacher-to-student interactions.
- Date Issued
- 2015
- Identifier
- CFE0005963, ucf:50821
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005963
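The r-to-z Fisher transformation test mentioned above compares two independent correlations. A sketch of the standard computation, with illustrative sample correlations and group sizes rather than the study's values:

```python
# Standard r-to-z comparison of two independent correlations.
import numpy as np
from scipy.stats import norm

def fisher_r_to_z_test(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)         # Fisher transform
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))                   # two-tailed p-value

z, p = fisher_r_to_z_test(r1=0.85, n1=28, r2=0.60, n2=28)
print(f"z = {z:.2f}, p = {p:.4f}")
```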
- Title
- Improved Interpolation in SPH in Cases of Less Smooth Flow.
- Creator
-
Brun, Oddny, Wiegand, Rudolf, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
We introduced a method presented in Information Field Theory (IFT) [Abramovich et al., 2007] to improve interpolation in Smoothed Particle Hydrodynamics (SPH) in cases of less smooth flow. The method makes use of wavelet theory combined with B-splines for interpolation. The idea is to identify any jumps a function may have and then reconstruct the smoother segments between the jumps. The results of our work demonstrated superior capability, when compared on a particularly challenging SPH application, to better conserve jumps and more accurately interpolate the smoother segments of the function. The results of our work also demonstrated increased computational efficiency with limited loss in accuracy, as the number of multiplications and execution time were reduced. Similar benefits were observed for functions with spikes analyzed by the same method. Lesser, but similar, effects were also demonstrated for real-life data sets of a less smooth nature. SPH is widely used in modeling and simulation of the flow of matter. SPH presents advantages compared to grid-based methods both in terms of computational efficiency and accuracy, in particular when dealing with less smooth flow. The results we achieved through our research are an improvement to the model in cases of less smooth flow, in particular flow with jumps and spikes. Up until now such improvements have been sought through modifications to the models' physical equations and/or kernel functions and have only partially been able to address the issue. This research, as it introduced wavelet theory and IFT to a field of science that, to our knowledge, is not currently utilizing these methods, laid the groundwork for future research ideas to benefit SPH. Among those ideas are further development of criteria for wavelet selection, use of smoothing splines for SPH interpolation, and incorporation of Bayesian field theory. Improving the method's accuracy, stability, and efficiency under more challenging conditions, such as flow with jumps and spikes, will benefit applications in a wide area of science. Just in medicine alone, such improvements will further increase real-time diagnostics, treatments, and training opportunities, because jumps and spikes are often the characteristics of significant physiological and anatomic conditions such as pulsatile blood flow, peristaltic intestine contractions, and organs' edges appearance in imaging.
- Date Issued
- 2016
- Identifier
- CFE0006446, ucf:51451
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006446
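The method summarized above first locates jumps, then interpolates the smooth segments between them. A simplified stand-in for the jump-detection step, flagging discontinuities with Haar-like first-difference detail coefficients; the dissertation's actual wavelet machinery and B-spline reconstruction are not reproduced here, and the threshold rule is an illustrative choice:

```python
# Simplified jump detection: first differences act as one-level Haar-like
# detail coefficients; unusually large magnitudes flag discontinuities.
import numpy as np

x = np.linspace(0, 1, 256)
f = np.sin(2 * np.pi * x) + np.where(x < 0.5, 0.0, 2.0)   # jump at x = 0.5

detail = np.diff(f) / np.sqrt(2)
threshold = 5 * np.median(np.abs(detail))
jumps = np.where(np.abs(detail) > threshold)[0]
print("jump detected between indices:", jumps)   # segments on either side can
                                                 # then be interpolated smoothly
```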
- Title
- FUNDAMENTAL UNDERSTANDING OF INTERACTIONS AMONG FLOW, TURBULENCE, AND HEAT TRANSFER IN JET IMPINGEMENT COOLING.
- Creator
-
Hossain, Md. Jahed, Kapat, Jayanta, Ahmed, Kareem, Gordon, Ali, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
The flow physics of the impinging jet is very complex and is not fully understood yet. The flow field in an impingement problem comprises three distinct regions: a free jet with a potential core, a stagnation region where the velocity goes to zero as the jet impinges onto the wall, and a wall jet region where the boundary layer grows radially outward after impinging. Since impingement itself is a broad topic, effort is being made in the current study to narrow down on three particular geometric configurations (a narrow wall, an array impingement configuration, and a curved surface impingement configuration) that show up in a typical gas turbine impingement problem in relation to heat transfer. Impingement problems are difficult to simulate numerically using conventional RANS models. It is worth noting that the typical RANS model contains a number of calibrated constants, and these have been formulated with respect to relatively simple shear flows. As a result, these isotropic eddy viscosity models typically fail in predicting the correct heat transfer value and trend in impingement problems, where the flow is highly anisotropic. The common RANS-based models overpredict stagnation heat transfer coefficients by as much as 300% when compared to measured values. Even the best of the models, the v^2-f model, can be inaccurate by up to 30%. Even though there is a myriad of experimental and numerical work published on single jet impingement, the knowledge gathered from these works cannot be applied to real engineering impingement cooling applications, as the dynamics of the flow change completely. This study underlines the lack of experimental flow physics data in the published literature on multiple jet impingement, and the author emphasizes how important it is to have experimental data to validate CFD tools and to determine the suitability of Large Eddy Simulation (LES) in industrial applications. In the open literature there are not enough studies where experimental heat transfer and flow physics data are combined to explain the behavior for gas turbine impingement cooling applications. Often it is hard to understand the heat transfer behavior due to a lack of time-accurate flow physics data, hence a lot of conjecture has been made to explain the phenomena. The problem is further exacerbated for arrays of impingement jets, where the flow is much more complex than for a single round jet. The experimental flow field obtained from Particle Image Velocimetry (PIV) and heat transfer data obtained from Temperature Sensitive Paint (TSP) from this work will be analyzed to understand the relationship between flow characteristics and heat transfer for the three types of novel geometry mentioned above. There has not been any effort made on implementing the LES technique on an array impingement problem in the published literature. Nowadays, with growing computational power and resources, CFD is widely used as a design tool. To support the data gathered from the experiment, LES is carried out in a narrow wall impingement cooling configuration. The results will provide more accurate information on impingement flow physics phenomena where experimental techniques are limited and the typical RANS models yield erroneous results. The objective of the current study is to provide a better understanding of impingement heat transfer in relation to the flow physics associated with it.
As heat transfer is basically a manifestation of the flow, and most of the flow in real engineering applications is turbulent, it is very important to understand the dynamics of flow physics in an impingement problem. The work emphasizes the importance of understanding mean velocities, turbulence, jet shear layer instability, and their importance in heat transfer applications. The present work shows detailed information on flow phenomena using Particle Image Velocimetry (PIV) in a single-row narrow impingement channel. Results from the RANS and LES simulations are compared with Particle Image Velocimetry (PIV) data. The accuracy of LES in predicting the flow field and heat transfer of an impingement problem is also presented in the current work, as it is validated against the experimental flow field measured through PIV. Results obtained from the PIV and LES show excellent agreement for predicting both heat transfer and flow physics data. Some of the key findings from the study highlight the shortcomings of the typical RANS models used for the impingement heat transfer problem. It was found that the stagnation point heat transfer was overpredicted by as much as 48% in RANS simulations when compared to the experimental data. A lot of conjecture has been made in the past about RANS' ability to predict the stagnation point heat transfer correctly. The length of the potential core for the first jet was found to be ~2D in RANS simulations, as opposed to ~1D in PIV and LES, confirming a possible underlying reason for this discrepancy. The jet shear layer thickness was underpredicted by ~40% in RANS simulations, proving the model is not diffusive enough for a flow like jet impingement. Turbulence production due to shear stress was overpredicted by ~130%, and turbulence production due to normal stresses was underpredicted by ~40% in RANS simulations very close to the target wall, showing that RANS models fail where both strain rate and shear stress play a pivotal role in the dynamics of the flow. In closing, turbulence is still one of the most difficult problems to solve accurately, as has been the case for about a century. A quote from the famous mathematician Horace Lamb (1849-1934) expresses the level of difficulty and frustration associated with understanding turbulence in fluid mechanics: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic." (Source: http://scienceworld.wolfram.com/biography/Lamb.html) This dissertation is expected to shed some light onto one specific example of turbulent flows.
- Date Issued
- 2016
- Identifier
- CFE0006463, ucf:51424
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006463
- Title
- a priori synthetic sampling for increasing classification sensitivity in imbalanced data sets.
- Creator
-
Rivera, William, Xanthopoulos, Petros, Wiegand, Rudolf, Karwowski, Waldemar, Kincaid, John, University of Central Florida
- Abstract / Description
-
Building accurate classifiers for predicting group membership is made difficult when data is skewed or imbalanced, which is typical of real-world data sets. As a result, the classifier has the tendency to be biased towards the over-represented group. This imbalance is considered a class imbalance problem, which will induce bias into the classifier, particularly when the imbalance is high. Class imbalance data usually suffers from data-intrinsic properties beyond that of imbalance alone. The problem is intensified with larger levels of imbalance, most commonly found in observational studies. Extreme cases of class imbalance are commonly found in many domains, including fraud detection, mammography of cancer, and post-term births. These rare events are usually the most costly or have the highest level of risk associated with them and are therefore of most interest. To combat class imbalance, the machine learning community has relied upon embedded, data preprocessing, and ensemble learning approaches. Exploratory research has linked several factors that perpetuate the issue of misclassification in class-imbalanced data. However, there remains a lack of understanding of the relationship between the learner and imbalanced data among the competing approaches. The current landscape of data preprocessing approaches has appeal due to the ability to divide the problem space in two, which allows for simpler models. However, most of these approaches have little theoretical basis, although in some cases there is empirical evidence supporting the improvement. The main goal of this research is to introduce newly proposed a priori based re-sampling methods that improve concept learning within class-imbalanced data. The results in this work highlight the robustness of these techniques' performance within publicly available data sets from different domains containing various levels of imbalance. In this research the theoretical and empirical reasons are explored and discussed.
- Date Issued
- 2015
- Identifier
- CFE0006169, ucf:51129
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006169
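For context on the entry above: interpolation-based oversampling of the minority class is the generic data preprocessing move that the proposed a priori methods refine. A sketch in that generic, SMOTE-like spirit; the a priori placement logic that distinguishes the dissertation's methods is not reproduced here:

```python
# Generic interpolation-based oversampling: synthetic minority points are
# placed on segments between random pairs of minority examples.
import numpy as np

def oversample(X_min, n_new, rng):
    synth = np.empty((n_new, X_min.shape[1]))
    for k in range(n_new):
        a, b = X_min[rng.choice(len(X_min), 2, replace=False)]
        synth[k] = a + rng.uniform() * (b - a)      # point on the segment a-b
    return synth

rng = np.random.default_rng(6)
X_minority = rng.normal(loc=2.0, size=(20, 3))      # rare class: 20 examples
print(oversample(X_minority, 80, rng).shape)        # (80, 3) synthetic examples
```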
- Title
- Limitations of Micro and Macro Solutions to the Simulation Interoperability Challenge: An EASE Case Study.
- Creator
-
Barry, John, Proctor, Michael, Wiegand, Rudolf, Allen, Gary, University of Central Florida
- Abstract / Description
-
This thesis explored the history of military simulations and linked it to the current challenges of interoperability. The research illustrated the challenge of interoperability in integrating different networks, databases, standards, and interfaces, and how it results in U.S. Army organizations constantly spending time and money to create and implement irreproducible Live, Virtual, and Constructive (LVC) integrating architectures to accomplish comparable tasks. Although the U.S. Army has made advancements in interoperability, it has struggled with this challenge since the early 1990s. These improvements have been inadequate due to evolving and growing needs of the user, coupled with the technical complexities of interoperating legacy systems with emergent systems arising from advances in technology. To better understand the impact of the continued evolution of simulations, this paper mapped Maslow's Hierarchy of Needs with Tolk's Levels of Conceptual Interoperability Model (LCIM). This mapping illustrated a common relationship in both the Hierarchy of Needs and the LCIM model, depicting that each level increases in complexity and the preceding lower level must first be achieved prior to reaching the next. Understanding the continuum of complexity of interoperability, as requirements or needs, helped to determine why the previous funding and technical efforts have been inadequate in mitigating the interoperability challenges within U.S. Army simulations. As the U.S. Army's simulation programs continue to evolve while the military and contractor personnel turnover rate remains near constant, a method of capturing and passing on the tacit knowledge from one personnel staffing life cycle to the next must be developed in order to economically and quickly reproduce complex simulation events. This thesis explored a potential solution to this challenge, the Executable Architecture Systems Engineering (EASE) research project managed by the U.S. Army's Simulation and Training Technology Center in the Army Research Laboratory within the Research, Development and Engineering Command. However, there are two main drawbacks to EASE: it is still in the prototype stage, and it has not been fully tested and evaluated as a simulation tool within the community of practice. In order to determine whether EASE has the potential to reduce micro as well as macro interoperability challenges, an EASE experiment was conducted as part of this thesis. The following three alternative hypotheses were developed, tested, and accepted as a result of the research for this thesis:
Ha1 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army technical solution to help mitigate the M&S interoperability challenge.
Ha2 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army managerial solution to help mitigate the M&S interoperability challenge.
Ha3 = Expert stakeholders believe the EASE prototype does have potential as a U.S. Army knowledge management solution to help mitigate the M&S interoperability challenge.
To conduct this experiment, eleven participants representing ten different organizations across the three M&S domains were selected to test EASE using a modified Technology Acceptance Model (TAM) approach developed by Davis. Indexes were created from the participants' responses, covering both the quality of participants and the research questions. Cronbach's alpha test was used to assess the reliability of the adapted TAM. The Wilcoxon signed-rank test provided the statistical analysis that formed the basis of the research, which determined that the EASE project has the potential to help mitigate the interoperability challenges in the U.S. Army's M&S domains.
- Date Issued
- 2013
- Identifier
- CFE0005084, ucf:50740
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005084
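The Wilcoxon signed-rank analysis reported above can be reproduced in form (not in data) with scipy; the ratings below are made-up Likert-style TAM responses tested against the neutral midpoint:

```python
# Made-up Likert-style TAM ratings from eleven participants, tested against
# the neutral midpoint of 3 with the Wilcoxon signed-rank test.
from scipy.stats import wilcoxon

ratings = [5, 4, 4, 5, 4, 5, 4, 2, 4, 5, 4]
diffs = [r - 3 for r in ratings]          # shift so the null median is zero
stat, p = wilcoxon(diffs)
print(f"W = {stat}, p = {p:.4f}")
```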
- Title
- Streamlining the Acquisition Process: Systems Analysis for Improving Army Acquisition Corps Officer Management.
- Creator
-
Chu-Quinn, Shawn, Kincaid, John, Wiegand, Rudolf, Mohammad, Syed, University of Central Florida
- Abstract / Description
-
The Army Acquisition Officer lacks the proficient experience needed to fill key leadership positions within the Acquisition Corps. The active duty Army officer is considered for the Acquisition Corps functional area between their 5th and 9th years of service as an officer, after completing initial career milestones. The new Acquisition Corps officer is the rank of senior Captain or Major when he arrives at his first acquisition assignment with a proficiency level of novice (in acquisition). The Army officer may be advanced in his primary career branch, but his level decreases when he is assigned into the Acquisition Corps functional area. The civilian grade equivalent to the officer is a GS-12 or GS-13, whose proficiency level is advanced in his career field. The purpose of this study is to use a systems analysis approach to decompose the current acquisition officer professional development system, in order to study how well the current active duty officer flow works and how well it interacts with or influences an acquisition officer's professional development, and to propose a potential solution to assist in the management of Army acquisition officers, so they gain proficiency through not only education and training, but also the hands-on experience that is needed to fill key leadership positions in the Army Acquisition Corps. An increased proficiency and proven successful track record in the acquisition workforce is the basis to positively affect acquisition streamlining processes within the Department of Defense by making good decisions through quality experience.
- Date Issued
- 2015
- Identifier
- CFE0005590, ucf:50254
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005590
- Title
- The Effect of Habitat for Humanity Homeownership on Student Attendance and Standardized Test Scores in Orange County Florida School District.
- Creator
-
Harris, Charles, Kincaid, John, Uddin, Nizam, Rivers, Kenyatta, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
The mobility of low-income students who do not have access to stable housing creates numerous challenges both at home and in school. Among these challenges, academic performance certainly is one of the most important. The lack of a more permanent, familiar, and safe environment is presumed to impact home life as well as students' performance in the classroom. This research compares two groups of current and former students of Orange County Public Schools (OCPS) in Florida: (1) children of families who are Habitat for Humanity (HFH) homeowners, and (2) a matched socioeconomic control group. The HFH program is designed to provide stable, affordable housing for families who cannot acquire it through standard means. The research question is: does stability in housing make an impact on academic performance, in the particular areas of FCAT scores and attendance? Data were gathered from OCPS and the HFH homeowners themselves. These data were used to evaluate the impact of HFH homeownership on students' academic environment. Results showed better attendance at school, but HFH students fared worse in FCAT performance when compared to the control group, especially in reading.
- Date Issued
- 2014
- Identifier
- CFE0005504, ucf:50360
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005504
- Title
- How many are out there? A novel approach for open and closed systems.
- Creator
-
Rehman, Zia, Kincaid, John, Wiegand, Rudolf, Finch, Craig, Uddin, Nizam, University of Central Florida
- Abstract / Description
-
We propose a ratio estimator to determine population estimates using capture-recapture sampling. It differs from traditional approaches in the following ways:
(1) Ordering of recaptures: current data sets do not take into account the "ordering" of the recaptures, although this crucial information is available to them at no cost.
(2) Dependence of trials and cluster sampling: our model explicitly considers trials to be dependent and improves existing literature, which assumes independence.
(3) Rate of convergence: the percentage sampled has an inverse relationship with population size, for a chosen degree of accuracy.
(4) Asymptotic attainment of minimum variance in open systems (= population variance).
(5) Full use of data and model applicability.
(6) Non-parametric.
(7) Heterogeneity: when units being sampled are hard to identify.
(8) Open and closed systems: simpler results are presented separately for closed systems.
(9) Robustness to assumptions in open systems.
- Date Issued
- 2014
- Identifier
- CFE0005403, ucf:50411
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005403
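As a point of reference for the entry above, the classical Lincoln-Petersen estimator for a closed population ignores exactly the information the proposed ratio estimator exploits (recapture ordering and trial dependence). The baseline computation, with invented sample counts:

```python
# Classical closed-population baseline: mark n1 individuals, resample n2,
# observe m2 marked recaptures, and estimate N = n1 * n2 / m2.
def lincoln_petersen(n1, n2, m2):
    return n1 * n2 / m2

print(lincoln_petersen(n1=100, n2=120, m2=15))   # estimated N = 800.0
```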
- Title
- Mediated Physicality: Inducing Illusory Physicality of Virtual Humans via Their Interactions with Physical Objects.
- Creator
-
Lee, Myungho, Welch, Gregory, Wisniewski, Pamela, Hughes, Charles, Bruder, Gerd, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
The term virtual human (VH) generally refers to a human-like entity comprised of computer graphics and/or a physical body. In the associated research literature, a VH can be further classified as an avatar (a human-controlled VH) or an agent (a computer-controlled VH). Because of the resemblance with humans, people naturally distinguish them from non-human objects, and often treat them in ways similar to real humans. Sometimes people develop a sense of co-presence or social presence with the VH, a phenomenon that is often exploited for training simulations where the VH assumes the role of a human. Prior research associated with VHs has primarily focused on the realism of various visual traits, e.g., appearance, shape, and gestures. However, our sense of the presence of other humans is also affected by other physical sensations conveyed through nearby space or physical objects. For example, we humans can perceive the presence of other individuals via the sound or tactile sensation of approaching footsteps, or by the presence of complementary or opposing forces when carrying a physical box with another person. In my research, I exploit the fact that these sensations, when correlated with events in the shared space, affect one's feeling of social/co-presence with another person. In this dissertation, I introduce novel methods for utilizing direct and indirect physical-virtual interactions with VHs to increase the sense of social/co-presence with the VHs, an approach I refer to as mediated physicality. I present results from controlled user studies, in various virtual environment settings, that support the idea that mediated physicality can increase a user's sense of social/co-presence with the VH and/or induce realistic social behavior. I discuss relationships to prior research, possible explanations for my findings, and areas for future research.
- Date Issued
- 2019
- Identifier
- CFE0007485, ucf:52687
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007485