-
-
Title
-
Training Neural Networks Through the Integration of Evolution and Gradient Descent.
-
Creator
-
Morse, Gregory, Stanley, Kenneth, Wu, Annie, Shah, Mubarak, Wiegand, Rudolf, University of Central Florida
-
Abstract / Description
-
Neural networks have achieved widespread adoption due to both their applicability to a wide range of problems and their success relative to other machine learning algorithms. The training of neural networks is achieved through any of several paradigms, most prominently gradient-based approaches (including deep learning), but also through up-and-coming approaches like neuroevolution. However, while both of these neural network training paradigms have seen major improvements over the past decade, little work has been invested in developing algorithms that incorporate the advances from both deep learning and neuroevolution. This dissertation introduces two new algorithms that are steps towards the integration of gradient descent and neuroevolution for training neural networks. The first is (1) the Limited Evaluation Evolutionary Algorithm (LEEA), which implements a novel form of evolution where individuals are partially evaluated, allowing rapid learning and enabling the evolutionary algorithm to behave more like gradient descent. This conception provides a critical stepping stone to future algorithms that more tightly couple evolutionary and gradient descent components. The second major algorithm (2) is Divergent Discriminative Feature Accumulation (DDFA), which combines a neuroevolution phase, where features are collected in an unsupervised manner, with a gradient descent phase for fine-tuning of the neural network weights. The neuroevolution phase of DDFA utilizes an indirect encoding and novelty search, which are sophisticated neuroevolution components rarely incorporated into gradient descent-based systems. Further contributions of this work that build on DDFA include (3) an empirical analysis to identify an effective distance function for novelty search in high dimensions and (4) the extension of DDFA for the purpose of discovering convolutional features. The results of these DDFA experiments together show that DDFA discovers features that are effective as a starting point for gradient descent, with significant improvement over gradient descent alone. Additionally, the method of collecting features in an unsupervised manner allows DDFA to be applied to domains with abundant unlabeled data and relatively sparse labeled data. This ability is highlighted in the STL-10 domain, where DDFA is shown to make effective use of unlabeled data.
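To make the partial-evaluation idea concrete, here is a minimal Python sketch of a LEEA-style generation in which fitness is scored on a mini-batch rather than the full dataset. The names (`leea_step`, `loss_fn`, `mutate`) and the elitist refill strategy are illustrative assumptions, not the published algorithm, which also uses mechanisms such as fitness inheritance.

```python
import numpy as np

def leea_step(population, loss_fn, batch, mutate, rng, elite_frac=0.25):
    """One limited-evaluation generation: genomes are scored only on the
    current mini-batch, so selection pressure tracks recent data much as a
    gradient step does. Sketch only; LEEA's fitness inheritance and
    reproduction details are omitted."""
    scores = np.array([loss_fn(g, batch) for g in population])
    n_elite = max(1, int(len(population) * elite_frac))
    elite = [population[i] for i in np.argsort(scores)[:n_elite]]
    # Refill the population with mutated copies of the batch-best genomes.
    children = [mutate(elite[rng.integers(n_elite)])
                for _ in range(len(population) - n_elite)]
    return elite + children
```

Because each call sees a fresh batch, fitness estimates stay cheap and noisy, which is the sense in which the evolutionary algorithm "behaves more like gradient descent."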
-
Date Issued
-
2019
-
Identifier
-
CFE0007840, ucf:52819
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007840
-
-
Title
-
The Development of Soil Compressibility Prediction Models and Application to Site Settlement.
-
Creator
-
Kirts, Scott, Nam, Boo Hyun, Chopra, Manoj, Sallam, Amr, Xanthopoulos, Petros, University of Central Florida
-
Abstract / Description
-
The magnitude of the overall settlement depends on several variables such as the Compression Index, Cc, and Recompression Index, Cr, which are determined by a consolidation test; however, the test is time consuming and labor intensive. Correlations have been developed to approximate these compressibility indexes. In this study, a data-driven approach has been employed in order to estimate Cc and Cr. Support Vector Machines classification is used to determine the number of distinct models to be developed. The statistical models are built through a forward selection stepwise regression procedure. Ten variables were used, including the moisture content (w), initial void ratio (eo), dry unit weight (γdry), wet unit weight (γwet), automatic hammer SPT blow count (N), overburden stress (σ), fines content (-200), liquid limit (LL), plasticity index (PI), and specific gravity (Gs). The results confirm the need for separate models for three out of four soil types, these being Coarse Grained, Fine Grained, and Organic Peat. The models for each classification have varying degrees of accuracy. The correlations were tested through a series of field tests, settlement analysis, and comparison to known site settlement. The first analysis incorporates developed correlations for Cr, and the second utilizes measured Cc and Cr for each soil layer. The predicted settlements from these two analyses were compared to the measured settlement taken in close proximity. Upon conclusion of the analyses, the results indicate that settlement predictions applying a rule of thumb equating Cc to Cr, accounting for elastic settlement, and using a conventional influence zone of settlement compare more favorably to measured settlement than do predictions using the measured compressibility indexes. Accuracy of settlement predictions is contingent on a thorough field investigation.
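As an illustration of the model-building step, the following Python sketch runs greedy forward stepwise selection, adding at each round the predictor that most improves R². The stopping rule and scoring criterion here are assumptions; the study's exact stepwise procedure is not given in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_stepwise(X, y, names, max_vars=5, tol=1e-3):
    """Greedy forward selection over candidate predictors (w, eo, gamma_dry,
    ...): at each round, add the variable that most improves in-sample R^2,
    stopping when the gain falls below `tol`. Illustrative criterion only."""
    chosen, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
    while remaining and len(chosen) < max_vars:
        r2, j = max((LinearRegression().fit(X[:, chosen + [j]], y)
                     .score(X[:, chosen + [j]], y), j) for j in remaining)
        if r2 - best_r2 < tol:      # no meaningful improvement left
            break
        best_r2 = r2
        chosen.append(j)
        remaining.remove(j)
    return [names[j] for j in chosen], best_r2
```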
-
Date Issued
-
2018
-
Identifier
-
CFE0007208, ucf:52284
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007208
-
-
Title
-
Moral Blameworthiness and Trustworthiness: The Role of Accounts and Apologies in Perceptions of Human and Machine Agents.
-
Creator
-
Stowers, Kimberly, Hancock, Peter, Jentsch, Florian, Mouloua, Mustapha, Chen, Jessie, Barber, Daniel, University of Central Florida
-
Abstract / Description
-
Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago, and the same will be said for machines that exist 30 years from now. The rise of intelligence in machines has resulted in humans entrusting them with ever-increasing responsibility. With this has arisen the question of whether machines should be given equal responsibility to humans, or if humans will ever perceive machines as being accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness toward machines compare to such assignment to humans who harm others? I answer these questions by exploring differences in moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether the knowledge and type of reason, as well as apology, for the harmful incident affects perceptions of the parties involved. In order to fill the gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research study presented herein.
-
Date Issued
-
2017
-
Identifier
-
CFE0007134, ucf:52311
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007134
-
-
Title
-
Effective Task Transfer Through Indirect Encoding.
-
Creator
-
Verbancsics, Phillip, Stanley, Kenneth, Sukthankar, Gita, Georgiopoulos, Michael, Garibay, Ivan, University of Central Florida
-
Abstract / Description
-
An important goal for machine learning is to transfer knowledge between tasks. For example, learning to play RoboCup Keepaway should contribute to learning the full game of RoboCup soccer. Often approaches to task transfer focus on transforming the original representation to fit the new task. Such representational transformations are necessary because the target task often requires new state information that was not included in the original representation. In RoboCup Keepaway, changing from the 3 vs. 2 variant of the task to 4 vs. 3 adds state information for each of the new players. In contrast, this dissertation explores the idea that transfer is most effective if the representation is designed to be the same even across different tasks. To this end, (1) the bird's eye view (BEV) representation is introduced, which can represent different tasks on the same two-dimensional map. Because the BEV represents state information associated with positions instead of objects, it can be scaled to more objects without manipulation. In this way, both the 3 vs. 2 and 4 vs. 3 Keepaway tasks can be represented on the same BEV, which is (2) demonstrated in this dissertation. Yet a challenge for such a representation is that a raw two-dimensional map is high-dimensional and unstructured. This dissertation demonstrates how this problem is addressed naturally by the Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) approach. HyperNEAT evolves an indirect encoding, which compresses the representation by exploiting its geometry. The dissertation then explores further exploiting the power of such encoding, beginning by (3) enhancing the configuration of the BEV with a focus on modularity. The need for further nonlinearity is then (4) investigated through the addition of hidden nodes. Furthermore, (5) the size of the BEV can be manipulated because it is indirectly encoded. Thus the resolution of the BEV, which is dictated by its size, is increased in precision and culminates in a HyperNEAT extension that is expressed at effectively infinite resolution. Additionally, scaling to higher resolutions through gradually increasing the size of the BEV is explored. Finally, (6) the ambitious problem of scaling from the Keepaway task to the Half-field Offense task is investigated with the BEV. Overall, this dissertation demonstrates that advanced representations in conjunction with indirect encoding can contribute to scaling learning techniques to more challenging tasks, such as the Half-field Offense RoboCup soccer domain.
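The position-keyed property of the BEV is easy to see in code. Below is a hypothetical Python rasterizer that maps any number of players onto a fixed grid, so 3 vs. 2 and 4 vs. 3 states have identical shape; the channel layout and field extent are invented for illustration and are not the dissertation's configuration.

```python
import numpy as np

def birds_eye_view(keepers, takers, ball, size=20, half_width=25.0):
    """Rasterize object positions onto fixed 2D planes. State is keyed to
    grid positions, not to objects, so adding players changes which cells
    are set, never the input dimensionality."""
    bev = np.zeros((3, size, size))  # illustrative channels: keepers, takers, ball
    for ch, objects in enumerate((keepers, takers, [ball])):
        for x, y in objects:
            i = int(np.clip((x + half_width) / (2 * half_width) * size, 0, size - 1))
            j = int(np.clip((y + half_width) / (2 * half_width) * size, 0, size - 1))
            bev[ch, i, j] = 1.0
    return bev

# Both 3 vs. 2 and 4 vs. 3 yield the same (3, 20, 20) input shape.
```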
-
Date Issued
-
2011
-
Identifier
-
CFE0004174, ucf:49071
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004174
-
-
Title
-
A CONTEXTUAL APPROACH TO LEARNING COLLABORATIVE BEHAVIOR VIA OBSERVATION.
-
Creator
-
Johnson, Cynthia, Gonzalez, Avelino, University of Central Florida
-
Abstract / Description
-
This dissertation describes a novel technique for creating a simulated team of agents through observation. Simulated human teamwork can be used for a number of purposes, such as expert examples, automated teammates for training purposes, and realistic opponents in games and training simulations. Current teamwork simulations require that the team member behaviors be programmed into the simulation, often requiring a great deal of time and effort. None are able to observe a team at work and replicate the teamwork behaviors. Machine learning techniques for learning by observation and learning by demonstration have proven successful at observing the behavior of humans or other software agents and creating a behavior function for a single agent. The research described here combines current research in teamwork simulations and learning by observation to effectively train a multi-agent system in effective team behavior. The dissertation describes the background and work by others, as well as a detailed description of the learning method. A prototype built to evaluate the developed approach, as well as the extensive experimentation conducted, is also described.
-
Date Issued
-
2011
-
Identifier
-
CFE0003602, ucf:48869
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003602
-
-
Title
-
MACHINAL: A SOURCEBOOK FOR THE ACTRESS PLAYING "YOUNG WOMAN".
-
Creator
-
Rentschler, Brittney, Brotherton, Mark, University of Central Florida
-
Abstract / Description
-
This thesis will document four phases of my rehearsal process/performance while portraying the role of Helen in Sophie Treadwell's Machinal. The first phase of the project will be researching and analyzing historical material on Sophie Treadwell (the playwright), Ruth Snyder (the murderess upon whom the character of Helen is based), and the actual murder that occurred in the 1920s. The second phase that will be documented is a character analysis. I will take each episode and divide it into the following sections: given circumstances; what is said about the character by the playwright, by others, or by herself; objectives; tactics; vocal traits; and physical traits. The third phase will include a written journal of my experiences as an actor as they occurred during the rehearsals and performances. The fourth and final phase will include a self-analysis of the performance. I will reflect on my abilities in synthesizing the research and character analysis found in phases one and two into the actual performance. In addition, Committee Chair Mark Brotherton and my thesis Committee Members, Kate Ingram and Vanduyn Wood, will also give written responses. The performances will be held February 14-17 and 21-24, 2007, in the University of Central Florida's Black Box Theatre. Dr. Julia Listengarten will direct the performance.
-
Date Issued
-
2009
-
Identifier
-
CFE0002643, ucf:48205
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002643
-
-
Title
-
Machine Learning from Casual Conversation.
-
Creator
-
Mohammed Ali, Awrad, Sukthankar, Gita, Wu, Annie, Boloni, Ladislau, University of Central Florida
-
Abstract / Description
-
Human social learning is an effective process that has inspired many existing machine learning techniques, such as learning from observation and learning by demonstration. In this dissertation, we introduce another form of social learning, Learning from a Casual Conversation (LCC). LCC is an open-ended machine learning system in which an artificially intelligent agent learns from an extended dialog with a human. Our system enables the agent to incorporate changes into its knowledge base, based on the human's conversational text input. This system emulates how humans learn from each other through a dialog. LCC closes a gap in current research, which is focused on teaching specific tasks to computer agents. Furthermore, LCC aims to provide an easy way to enhance the knowledge of the system without requiring the involvement of a programmer. This system does not require the user to enter specific information; instead, the user can chat naturally with the agent. LCC identifies the inputs that contain information relevant to its knowledge base in the learning process. LCC's architecture consists of multiple sub-systems combined to perform the task. Its learning component can add new knowledge to existing information in the knowledge base, confirm existing information, and/or update existing information found to be related to the user input. The LCC system functionality was assessed using different evaluation methods, including tests performed by the developer as well as by 130 human test subjects. Thirty of those test subjects interacted directly with the system and completed a survey of 13 questions/statements that asked the user about his/her experience using LCC. A second group of 100 human test subjects evaluated the dialogue logs of a subset of the first group of human testers. The collected results were all found to be acceptable and within the range of our expectations.
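A toy sketch of the add/confirm/update logic described above, assuming facts have already been extracted from the user's text as (subject, relation, value) triples. The dictionary knowledge base and support counter are illustrative assumptions, not LCC's actual architecture.

```python
def integrate_fact(kb, subject, relation, value):
    """Fold one extracted fact into the knowledge base: add it if new,
    reinforce it if it matches, or revise it if it conflicts."""
    key = (subject, relation)
    entry = kb.get(key)
    if entry is None:
        kb[key] = {"value": value, "support": 1}      # new knowledge
    elif entry["value"] == value:
        entry["support"] += 1                         # confirmation
    else:
        kb[key] = {"value": value, "support": 1}      # update/revision
    return kb

kb = {}
integrate_fact(kb, "whale", "is_a", "mammal")
```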
-
Date Issued
-
2019
-
Identifier
-
CFE0007503, ucf:52634
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007503
-
-
Title
-
Virtual resistance based DC-link voltage regulation for Microgrid DG inverters.
-
Creator
-
Shinde, Siddhesh, Batarseh, Issa, Mikhael, Wasfy, Kutkut, Nasser, University of Central Florida
-
Abstract / Description
-
This research addresses the practical issues faced by Microgrid Distributed Generation (DG) inverters when operated in islanded mode. A Microgrid (MG) is an interconnection of domestic distributed loads and low-voltage distributed energy sources such as micro-turbines, wind turbines, PVs, and storage devices. These energy sources are power-limited in nature and constrain the operation of the DG inverters to which they are coupled. DG inverters operated in islanded mode should maintain the power balance between generation and demand. If a DG inverter operating in islanded mode drains its source power below a certain limit, or if it is incapable of supplying the demanded power due to its hardware rating, it turns on its safety mechanism and isolates itself from the MG. This, in turn, increases the power demand on the rest of the DG units and can have a catastrophic impact on the viability of the entire system. This research presents a Virtual Resistance based DC-Link Voltage Regulation technique that allows DG inverters to continue to source their available power, even when the power demand by the load is higher than their capacity, without shutting off and isolating from the MG.
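Conceptually, a virtual resistance lets the DC-link set-point sag once demand exceeds the source limit, so the inverter sheds the excess instead of tripping. The Python sketch below is a schematic of that idea under assumed quantities; the actual control law, gains, and limits in the thesis will differ.

```python
def dc_link_setpoint(v_nom, p_demand, p_avail, r_virtual):
    """Droop the DC-link voltage reference through a virtual resistance when
    the load asks for more than the source can supply; the sagging voltage
    implicitly limits delivered power to roughly p_avail. Illustrative only."""
    if p_demand <= p_avail:
        return v_nom                              # normal regulation
    i_excess = (p_demand - p_avail) / v_nom       # current the source cannot cover
    return v_nom - r_virtual * i_excess           # reduced set-point, no shutdown

print(dc_link_setpoint(v_nom=400.0, p_demand=5500.0, p_avail=5000.0, r_virtual=8.0))
```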
-
Date Issued
-
2016
-
Identifier
-
CFE0006503, ucf:51403
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006503
-
-
Title
-
Data Representation in Machine Learning Methods with its Application to Compilation Optimization and Epitope Prediction.
-
Creator
-
Sher, Yevgeniy, Zhang, Shaojie, Dechev, Damian, Leavens, Gary, Gonzalez, Avelino, Zhi, Degui, University of Central Florida
-
Abstract / Description
-
In this dissertation we explore the application of machine learning algorithms to compilation phase order optimization and epitope prediction. The common thread running through these two disparate domains is the type of data being dealt with. In both problem domains we are dealing with categorical data, with its representation playing a significant role in the performance of classification algorithms. We first present a neuroevolutionary approach which orders optimization phases to generate compiled programs with performance superior to those compiled using LLVM's -O3 optimization level. Performance improvements, calculated as the speed of the compiled program's execution, ranged from 27% for the ccbench program to 40.8% for bzip2. This dissertation then explores the problem of data representation of 3D biological data, such as amino acids. A new approach for distributed representation of 3D biological data through the process of embedding is proposed and explored. Analogously to word embedding, we developed a system that uses atomic and residue coordinates to generate distributed representations for residues, which we call 3D Residue BioVectors. Preliminary results are presented which demonstrate that even low-dimensional 3D Residue BioVectors can be used to predict conformational epitopes and protein-protein interactions, with promising proficiency. The generation of such 3D BioVectors, and the proposed methodology, opens the door for substantial future improvements and application domains. The dissertation then explores the problem domain of linear B-Cell epitope prediction, which deals with predicting epitopes based strictly on the protein sequence. We present the DRREP system, which demonstrates how an ensemble of shallow neural networks can be combined with string kernels and an analytical learning algorithm to produce state-of-the-art epitope prediction results. DRREP was tested on the SARS subsequence; the HIV, Pellequer, and AntiJen datasets; and the standard SEQ194 test dataset. AUC improvements achieved over the state of the art ranged from 3% to 8%. Finally, we present the SEEP epitope classifier, a multi-resolution SVM ensemble based classifier which uses conjoint triad feature representation and produces state-of-the-art classification results. SEEP leverages the domain-specific, knowledge-based protein sequence encoding developed within the protein-protein interaction research domain. Using an ensemble of multi-resolution SVMs, and a sliding-window based pre- and post-processing pipeline, SEEP achieves an AUC of 91.2 on the standard SEQ194 test dataset, a 24% improvement over the state of the art.
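For concreteness, here is the classical conjoint triad encoding (Shen et al., 2007) that the abstract refers to: the 20 amino acids are pooled into 7 physicochemical classes, and a sequence becomes a 343-dimensional vector of consecutive class-triad counts. Whether SEEP normalizes exactly as below is an assumption.

```python
import numpy as np

# Seven amino-acid classes from the conjoint triad method (Shen et al., 2007).
GROUP = {**dict.fromkeys("AGV", 0), **dict.fromkeys("ILFP", 1),
         **dict.fromkeys("YMTS", 2), **dict.fromkeys("HNQW", 3),
         **dict.fromkeys("RK", 4),   **dict.fromkeys("DE", 5), "C": 6}

def conjoint_triad(seq):
    """Count all consecutive class triads: 7^3 = 343 features per sequence."""
    feats = np.zeros(343)
    cls = [GROUP[a] for a in seq if a in GROUP]
    for a, b, c in zip(cls, cls[1:], cls[2:]):
        feats[49 * a + 7 * b + c] += 1
    return feats / max(feats.max(), 1.0)   # illustrative max-scaling

x = conjoint_triad("MKVLAAGCDEWRTS")
```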
-
Date Issued
-
2017
-
Identifier
-
CFE0006793, ucf:51829
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006793
-
-
Title
-
On Kernel-base Multi-Task Learning.
-
Creator
-
Li, Cong, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Tappen, Marshall, Hu, Haiyan, Ni, Liqiang, University of Central Florida
-
Abstract / Description
-
Multi-Task Learning (MTL) has been an active research area in machine learning for two decades. By training multiple relevant tasks simultaneously with information shared across tasks, it is possible to improve the generalization performance of each task, compared to training each individual task independently. During the past decade, most MTL research has been based on the Regularization-Loss framework due to its flexibility in specifying various types of information sharing strategies, the opportunity it offers to yield kernel-based methods, and its capability in promoting sparse feature representations. However, certain limitations exist in both the theoretical and practical aspects of Regularization-Loss-based MTL. Theoretically, previous research on generalization bounds in connection to MTL Hypothesis Spaces (HSs), where data of all tasks are pre-processed by a (partially) common operator, has been limited in two aspects. First, all previous works assumed linearity of the operator, therefore completely excluding kernel-based MTL HSs, for which the operator is potentially non-linear. Secondly, all previous works, rather unnecessarily, assumed all the task weights to be constrained within norm-balls whose radii are equal. The requirement of equal radii leads to significant inflexibility of the relevant HSs, which may cause the generalization performance of the corresponding MTL models to deteriorate. Practically, various algorithms have been developed for kernel-based MTL models, due to the different characteristics of the formulations. Most of these algorithms are a burden to develop and end up being quite sophisticated, so that practitioners may face a hard task in interpreting and implementing them, especially when multiple models are involved. This is even more so when Multi-Task Multiple Kernel Learning (MT-MKL) models are considered. This research largely resolves the above limitations. Theoretically, a pair of new kernel-based HSs is proposed: one for single-kernel MTL and another for MT-MKL. Unlike previous works, we allow each task weight to be constrained within a norm-ball whose radius is learned during training. By deriving and analyzing the generalization bounds of these two HSs, we show that, indeed, such flexibility leads to much tighter generalization bounds, which often results in significantly better generalization performance. Based on this observation, a pair of new models is developed, one for single-kernel MTL and another for MT-MKL. From a practical perspective, we propose a general MT-MKL framework that covers most of the prominent MT-MKL approaches, including our new MT-MKL formulation. Then, a general-purpose algorithm is developed to solve the framework, which can also be employed for training all other models subsumed by this framework. A series of experiments is conducted to assess the merits of the proposed models when trained by the new algorithm. Certain properties of our HSs and formulations are demonstrated, and the advantage of our models in terms of classification accuracy is shown via these experiments.
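Schematically, Regularization-Loss MTL formulations of the kind discussed take the shape below, where each task weight w_t lives in a norm-ball of radius r_t; prior work fixed all r_t equal, while here the radii are learned. This display is a generic rendering under assumed notation (feature map φ, RKHS H, loss L, feasible radius set R), not the dissertation's exact formulation.

```latex
\min_{\{w_t\},\,\{r_t\}\in\mathcal{R}}\;
\sum_{t=1}^{T}\sum_{i=1}^{n_t}
  L\bigl(y_i^{t},\,\langle w_t,\phi(x_i^{t})\rangle\bigr)
\quad\text{s.t.}\quad \lVert w_t\rVert_{\mathcal{H}}\le r_t,\qquad t=1,\dots,T
```

Allowing the r_t to vary per task enlarges the hypothesis space exactly where the data warrant it, which is the flexibility the tighter bounds exploit.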
-
Date Issued
-
2014
-
Identifier
-
CFE0005517, ucf:50321
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005517
-
-
Title
-
Modeling and Contour Control of Multi-Axis Linear Driven Machine Tools.
-
Creator
-
Zhao, Ran, Lin, Kuo-Chi, Xu, Chengying, Bai, Yuanli, Das, Tuhin, An, Linan, University of Central Florida
-
Abstract / Description
-
In modern manufacturing industries, many applications require precision motion control of multi-agent systems, like multi-joint robot arms and multi-axis machine tools. The cutter (end effector) should stay as close as possible to the reference trajectory to ensure the quality of the final products. In conventional computer numerical control (CNC), the control unit of each axis is independently designed to achieve the best individual tracking performance. However, this becomes less effective when dealing with multi-axis contour-following tasks because of the lack of coordination among axes. This dissertation studies the control of multi-axis machine tools with a focus on reducing the contour error. The proposed research explicitly addresses the minimization of contour error and treats the multi-axis machine tool as a multi-input-multi-output (MIMO) system instead of several decoupled single-input-single-output (SISO) systems. New control schemes are developed to achieve superior contour-following performance even in the presence of disturbances. This study also extends the applications of the proposed control system from plane contours to regular contours in R3. The effectiveness of the developed control systems is experimentally verified on a micro milling machine.
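Contour error, unlike per-axis tracking error, is the distance from the actual tool position to the nearest point of the reference path. A minimal Python estimate against a piecewise-linear reference is sketched below; this is only the quantity being minimized, not the dissertation's control law.

```python
import numpy as np

def contour_error(p, path):
    """Shortest distance from tool position p to a polyline reference path."""
    p, path = np.asarray(p, float), np.asarray(path, float)
    best = np.inf
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + t * ab)))  # nearest point on segment
    return best

print(contour_error([1.0, 0.2], [[0, 0], [1, 0], [2, 1]]))
```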
-
Date Issued
-
2014
-
Identifier
-
CFE0005287, ucf:50552
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005287
-
-
Title
-
Cost-Sensitive Learning-based Methods for Imbalanced Classification Problems with Applications.
-
Creator
-
Razzaghi, Talayeh, Xanthopoulos, Petros, Karwowski, Waldemar, Pazour, Jennifer, Mikusinski, Piotr, University of Central Florida
-
Abstract / Description
-
Analysis and predictive modeling of massive datasets is an extremely significant problem that arises in many practical applications. The task of predictive modeling becomes even more challenging when data are imperfect or uncertain. Real data are frequently affected by outliers, uncertain labels, and uneven distribution of classes (imbalanced data). Such uncertainties create bias and make predictive modeling an even more difficult task. In the present work, we introduce a cost-sensitive learning (CSL) method to deal with the classification of imperfect data. Typically, most traditional approaches for classification demonstrate poor performance in an environment with imperfect data. We propose the use of CSL with Support Vector Machines, a well-known data mining algorithm. The results reveal that the proposed algorithm produces more accurate classifiers and is more robust with respect to imperfect data. Furthermore, we explore the best performance measures to tackle imperfect data, along with addressing real problems in quality control and business analytics.
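One standard way to realize cost-sensitive learning with an SVM is to weight the misclassification penalty per class, as in the scikit-learn sketch below. The 10:1 cost ratio and synthetic data are illustrative assumptions, not the dissertation's settings.

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# 90/10 imbalanced synthetic data; the minority class is the costly one.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# Class-weighted hinge loss: errors on class 1 cost ten times more,
# shifting the decision boundary away from the minority class.
clf = SVC(kernel="rbf", class_weight={0: 1, 1: 10}).fit(X, y)
```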
-
Date Issued
-
2014
-
Identifier
-
CFE0005542, ucf:50298
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005542
-
-
Title
-
Modeling and Simulation of All-electric Aircraft Power Generation and Actuation.
-
Creator
-
Woodburn, David, Wu, Xinzhang, Batarseh, Issa, Georgiopoulos, Michael, Haralambous, Michael, Chow, Louis, University of Central Florida
-
Abstract / Description
-
Modern aircraft, military and commercial, rely extensively on hydraulic systems. However, there is great interest in the avionics community to replace hydraulic systems with electric systems. There are physical challenges to replacing hydraulic actuators with electromechanical actuators (EMAs), especially for flight control surface actuation. These include dynamic heat generation and power management. Simulation is seen as a powerful tool in making the transition to all-electric aircraft by predicting the dynamic heat generated and the power flow in the EMA. Chapter 2 of this dissertation describes the nonlinear, lumped-element, integrated modeling of a permanent magnet (PM) motor used in an EMA. This model is capable of representing the transient dynamics of an EMA mechanically, electrically, and thermally. Inductance is a primary parameter that links the electrical and mechanical domains and, therefore, is of critical importance to the modeling of the whole EMA. In the dynamic mode of operation of an EMA, the inductances are quite nonlinear. Chapter 3 details the careful analysis of the inductances from finite element software and the mathematical modeling of these inductances for use in the overall EMA model. Chapter 4 covers the design and verification of a nonlinear, transient simulation model of a two-step synchronous generator with three-phase rectifiers. Simulation results are shown.
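The flavor of a lumped-element, multi-domain model can be shown with a toy Euler step that couples winding current, shaft speed, and winding temperature. The parameter values are invented, and the brushed-DC-style equations stand in for the dissertation's far more detailed nonlinear PM machine model.

```python
def motor_step(i, w, temp, v, dt, R=0.5, L=1e-3, Ke=0.05, Kt=0.05,
               J=1e-4, B=1e-5, C_th=50.0, R_th=2.0, T_amb=25.0):
    """One Euler step coupling the electrical, mechanical, and thermal
    domains of a lumped machine model (illustrative parameters only)."""
    di = (v - R * i - Ke * w) / L                       # winding dynamics
    dw = (Kt * i - B * w) / J                           # torque balance
    dtemp = (i * i * R - (temp - T_amb) / R_th) / C_th  # I^2R heating vs. cooling
    return i + dt * di, w + dt * dw, temp + dt * dtemp
```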
-
Date Issued
-
2013
-
Identifier
-
CFE0005074, ucf:49975
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005074
-
-
Title
-
TOWARDS A SELF-CALIBRATING VIDEO CAMERA NETWORK FOR CONTENT ANALYSIS AND FORENSICS.
-
Creator
-
Junejo, Imran, Foroosh, Hassan, University of Central Florida
-
Abstract / Description
-
Due to growing security concerns, video surveillance and monitoring has received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have a non-overlapping field of view (FoV) and, if possible, to allow these cameras to move freely in 3D space. This thesis addresses the issue of how cameras in such a network can be calibrated and how the network as a whole can be calibrated, such that each camera as a unit in the network is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple camera network, and novel techniques are presented for efficient calibration of these cameras. Specifically: (i) for a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC), which are shown to be intrinsic to the IAC; (ii) for a scene where object shadows are cast on a ground plane, we track the shadows cast on the ground plane by at least two unknown stationary points, and utilize the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) a novel solution is presented for a scenario where a camera is observing pedestrians, the uniqueness of the formulation lying in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians; (iv) for a freely moving camera, a novel practical method is proposed for its self-calibration, which even allows it to change its internal parameters by zooming; and (v) due to the increased application of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters. For an automatically configurable multi-camera network, having non-overlapping fields of view and possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the geometry of a dynamic network. Our method generalizes previous work, which considers restricted camera motions. Using minimal assumptions, we are able to successfully demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
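As a taste of the geometric constraints involved, the textbook IAC relation below recovers focal length from the vanishing points of two orthogonal directions, assuming square pixels, zero skew, and coordinates centered on the principal point. It is shown to ground the discussion and is not one of the thesis's new constraints.

```python
import math

def focal_from_orthogonal_vps(v1, v2):
    """With K = diag(f, f, 1), omega = (K K^T)^-1, and v1^T omega v2 = 0
    for orthogonal directions, we get f^2 = -(x1*x2 + y1*y2)."""
    (x1, y1), (x2, y2) = v1, v2
    f_sq = -(x1 * x2 + y1 * y2)
    if f_sq <= 0:
        raise ValueError("vanishing points violate the stated assumptions")
    return math.sqrt(f_sq)

print(focal_from_orthogonal_vps((800.0, 10.0), (-790.0, 20.0)))  # ~794.9
```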
-
Date Issued
-
2007
-
Identifier
-
CFE0001743, ucf:47296
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001743
-
-
Title
-
AN ANALYSIS OF MISCLASSIFICATION RATES FOR DECISION TREES.
-
Creator
-
Zhong, Mingyu, Georgiopoulos, Michael, University of Central Florida
-
Abstract / Description
-
The decision tree is a well-known methodology for classification and regression. In this dissertation, we focus on the minimization of the misclassification rate for decision tree classifiers. We derive the necessary equations that provide the optimal tree prediction, the estimated risk of the tree's prediction, and the reliability of the tree's risk estimation. We carry out an extensive analysis of the application of Lidstone's law of succession for the estimation of the class probabilities. In contrast to existing research, we not only compute the expected values of the risks but also calculate the corresponding reliability of the risk (measured by standard deviations). We also provide an explicit expression of the k-norm estimation for the tree's misclassification rate that combines both the expected value and the reliability. Furthermore, our proposed and proven theorem on k-norm estimation suggests an efficient pruning algorithm that has a clear theoretical interpretation, is easily implemented, and does not require a validation set. Our experiments show that our proposed pruning algorithm quickly produces accurate trees that compare very favorably with two other well-known pruning algorithms, CCP of CART and EBP of C4.5. Finally, our work provides a deeper understanding of decision trees.
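For reference, Lidstone's law of succession smooths leaf-node class probabilities as p_i = (n_i + λ)/(n + kλ). The dissertation's contribution is the risk and reliability analysis built on top of such estimates, which this small Python sketch does not reproduce.

```python
def lidstone(counts, lam=1.0):
    """Smoothed class probabilities at a tree leaf: (n_i + lam) / (n + k*lam).
    lam = 1 is Laplace's rule; lam -> 0 recovers raw frequencies."""
    n, k = sum(counts), len(counts)
    return [(c + lam) / (n + k * lam) for c in counts]

print(lidstone([8, 2]))   # a leaf with 8 vs. 2 examples -> [0.75, 0.25]
```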
-
Date Issued
-
2007
-
Identifier
-
CFE0001774, ucf:47271
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001774
-
-
Title
-
A REINFORCEMENT LEARNING TECHNIQUE FOR ENHANCING HUMAN BEHAVIOR MODELS IN A CONTEXT-BASED ARCHITECTURE.
-
Creator
-
Aihe, David, Gonzalez, Avelino, University of Central Florida
-
Abstract / Description
-
A reinforcement-learning technique for enhancing human behavior models in a context-based learning architecture is presented. Prior to the introduction of this technique, human models built and developed in a Context-Based Reasoning framework lacked learning capabilities. As such, their performance and quality of behavior were always limited by what the subject matter expert whose knowledge was modeled was able to articulate or demonstrate. Results from experiments performed show that subject matter experts are prone to making errors, and at times they lack information on situations that is inherently necessary for the human models to behave appropriately and optimally in those situations. The benefits of the technique presented are twofold: (1) it shows how human models built in a context-based framework can be modified to correctly reflect the knowledge learnt in a simulator; and (2) it presents a way for subject matter experts to verify and validate the knowledge they share. The results obtained from this research show that behavior models built in a context-based framework can be enhanced by learning and reflecting the constraints in the environment. From the results obtained, it was shown that after the models are enhanced, the agents performed better based on the metrics evaluated. Furthermore, after learning, the agent was shown to recognize previously unknown situations and behave appropriately in them. The overall performance and quality of behavior of the agent improved significantly.
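One plausible way to graft learning onto a context-based model is a tabular Q-learning update keyed by the active context, as sketched below. This generic update is an assumption for illustration, not the dissertation's algorithm.

```python
from collections import defaultdict

Q = defaultdict(float)   # keyed by (context, state, action)

def q_update(ctx, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """Standard Q-learning step, scoped to the active context so each
    context's action preferences are corrected independently."""
    best_next = max(Q[(ctx, s_next, a2)] for a2 in actions)
    Q[(ctx, s, a)] += alpha * (r + gamma * best_next - Q[(ctx, s, a)])
```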
-
Date Issued
-
2008
-
Identifier
-
CFE0002466, ucf:47715
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002466
-
-
Title
-
CONCEPT LEARNING BY EXAMPLE DECOMPOSITION.
-
Creator
-
Joshi, Sameer, Hughes, Charles, University of Central Florida
-
Abstract / Description
-
For efficient understanding and prediction in natural systems, even in artificially closed ones, we usually need to consider a number of factors that may combine in simple or complex ways. Additionally, many modern scientific disciplines face increasingly large datasets from which to extract knowledge (for example, genomics). Thus, to learn all but the most trivial regularities in the natural world, we rely on different ways of simplifying the learning problem. One simplifying technique that is highly pervasive in nature is to break down a large learning problem into smaller ones; to learn the smaller, more manageable problems; and then to recombine them to obtain the larger picture. It is widely accepted in machine learning that it is easier to learn several smaller decomposed concepts than a single large one. Though many machine learning methods exploit it, the process of decomposition of a learning problem has not been studied adequately from a theoretical perspective. Typically, such decomposition of concepts is achieved in highly constrained environments or aided by human experts. In this work, we investigate concept learning by example decomposition in a general probably approximately correct (PAC) setting for Boolean learning. We develop sample complexity bounds for the different steps involved in the process. We formally show that if the cost of example partitioning is kept low, then it is highly advantageous to learn by example decomposition. To demonstrate the efficacy of this framework, we interpret the theory in the context of feature extraction. We discover that many vague concepts in feature extraction, starting with what exactly a feature is, can be formalized unambiguously by this new theory. We analyze some existing feature learning algorithms in light of this theory, and finally demonstrate its constructive nature by generating a new learning algorithm from theoretical results.
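For orientation, PAC sample-complexity statements for a consistent learner over a finite Boolean hypothesis class H take the classical form

```latex
m \;\ge\; \frac{1}{\epsilon}\Bigl(\ln\lvert\mathcal{H}\rvert + \ln\tfrac{1}{\delta}\Bigr)
```

meaning that with probability at least 1 - δ, any hypothesis consistent with m such examples has true error at most ε. The dissertation's bounds for the individual decomposition steps build on results of this flavor; the display above is the textbook bound, not one of its new results.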
-
Date Issued
-
2009
-
Identifier
-
CFE0002504, ucf:47694
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002504
-
-
Title
-
OPTICAL CHARACTER RECOGNITION: A STATISTICAL MODEL OF MULTI-ENGINE OPTICAL CHARACTER RECOGNITION SYSTEMS.
-
Creator
-
McDonald, Mercedes Terre, Richie, Samuel M., University of Central Florida
-
Abstract / Description
-
This thesis is a benchmark performed on three commercial Optical Character Recognition (OCR) engines. The purpose of this benchmark is to characterize the performance of the OCR engines, with emphasis on the correlation of errors between each engine. The benchmarks are performed to evaluate the effect of a multi-OCR system employing a voting scheme to increase overall recognition accuracy. This is desirable since current OCR systems are still unable to recognize characters with 100% accuracy. The existing error rates of OCR engines pose a major problem for applications where a single error can possibly affect significant outcomes, such as in legal applications. The results obtained from this benchmark are the primary determining factor in the decision of whether to implement a voting scheme. The experiment performed displayed a very high accuracy rate for each of these commercial OCR engines. The average accuracy rate found for each engine was near 99.5%, based on a document of fewer than 6,000 words. While these error rates are very low, the goal is 100% accuracy in legal applications. Based on the work in this thesis, it has been determined that a simple voting scheme will help to improve the accuracy rate.
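The voting scheme under consideration can be as simple as a word-level majority vote across the three engines, sketched below in Python under the assumption that the three output streams are already aligned. With weakly correlated errors, any word misread by only one engine is corrected.

```python
from collections import Counter

def majority_vote(engine_outputs):
    """Pick the most common word at each aligned position across engines."""
    return [Counter(words).most_common(1)[0][0]
            for words in zip(*engine_outputs)]

print(majority_vote([["quick", "brown", "fox"],
                     ["quick", "brawn", "fox"],
                     ["quick", "brown", "f0x"]]))   # ['quick', 'brown', 'fox']
```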
-
Date Issued
-
2004
-
Identifier
-
CFE0000123, ucf:46188
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000123
-
-
Title
-
DECISION THEORY CLASSIFICATION OF HIGH-DIMENSIONAL VECTORS BASED ON SMALL SAMPLES.
-
Creator
-
Bradshaw, David, Pensky, Marianna, University of Central Florida
-
Abstract / Description
-
In this paper, we review existing classification techniques and suggest an entirely new procedure for the classification of high-dimensional vectors on the basis of a few training samples. The proposed method is based on the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes; therefore it adapts naturally to any number of classes. Our classification technique is based on a small vector which is related to the projection of the observation onto the space spanned by the training samples. This is achieved by employing matrix-variate distributions in classification, which is an entirely new idea. In addition, our method mimics time-tested classification techniques based on the assumption of normally distributed samples. By assuming that the samples have a matrix-variate normal distribution, we are able to replace classification on the basis of a large covariance matrix with classification on the basis of a smaller matrix that describes the relationship of sample vectors to each other.
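The core dimension-reduction move, representing a new observation by its coordinates in the span of the training samples, looks like the following sketch. The least-squares projection is an illustrative stand-in; the paper's matrix-variate Bayesian classification built on such small vectors is not shown.

```python
import numpy as np

def span_coefficients(train_samples, x):
    """Least-squares coefficients of x in the span of the training samples:
    a p-dimensional vector is summarized by n numbers (n = sample count)."""
    A = np.column_stack(train_samples)            # p x n matrix of samples
    beta, *_ = np.linalg.lstsq(A, x, rcond=None)
    return beta                                   # small vector for classification
```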
-
Date Issued
-
2005
-
Identifier
-
CFE0000753, ucf:46593
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000753
-
-
Title
-
IMPROVING FMRI CLASSIFICATION THROUGH NETWORK DECONVOLUTION.
-
Creator
-
Martinek, Jacob, Zhang, Shaojie, University of Central Florida
-
Abstract / Description
-
The structure of regional correlation graphs built from fMRI-derived data is frequently used in algorithms to automatically classify brain data. Transformations are performed on the data during pre-processing to remove irrelevant or inaccurate information, to ensure that an accurate representation of the subject's resting-state connectivity is attained. Our research suggests and confirms that such pre-processed data still exhibits inherent transitivity, which is expected to obscure the true relationships between regions. This obfuscation prevents known solutions from developing an accurate understanding of a subject's functional connectivity. By removing correlative transitivity, connectivity between regions is made more specific, and automated classification is expected to improve. The task of utilizing fMRI to automatically diagnose Attention Deficit/Hyperactivity Disorder was posed by the ADHD-200 Consortium in a competition to draw in researchers and new ideas from outside of the neuroimaging discipline. Researchers have since worked with the competition dataset to produce ever-increasing detection rates. Our approach was empirically tested with a known solution to this problem to compare processing of treated and untreated data, and the detection rates were shown to improve in all cases, with a weighted average increase of 5.88%.
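The transitivity-removal step alluded to here has a well-known closed form, network deconvolution (Feizi et al., 2013): if the observed matrix aggregates direct effects plus all indirect paths, G_obs = G_dir + G_dir^2 + ..., then G_dir = G_obs (I + G_obs)^-1. The sketch below assumes a symmetric, suitably scaled correlation matrix and omits the rescaling details.

```python
import numpy as np

def network_deconvolution(G_obs):
    """Recover direct dependencies by scaling eigenvalues lam -> lam/(1+lam),
    which inverts G_obs = G_dir + G_dir^2 + ... for a symmetric G_obs whose
    eigenvalues lie in a valid range (illustrative, unscaled version)."""
    vals, vecs = np.linalg.eigh(G_obs)
    return vecs @ np.diag(vals / (1.0 + vals)) @ vecs.T
```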
-
Date Issued
-
2015
-
Identifier
-
CFH0004895, ucf:45410
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004895