Current Search: machine learning
- Title
- Task Focused Robotic Imitation Learning.
- Creator
-
Abolghasemi, Pooya, Boloni, Ladislau, Sukthankar, Gita, Shah, Mubarak, Willenberg, Bradley, University of Central Florida
- Abstract / Description
-
For many years, successful applications of robotics were the domain of controlled environments, such as industrial assembly lines. Such environments are custom designed for the convenience of the robot and separated from human operators. In recent years, advances in artificial intelligence, in particular deep learning and computer vision, have allowed researchers to successfully demonstrate robots that operate in unstructured environments and directly interact with humans. One of the major applications of such robots is in assistive robotics. For instance, a wheelchair-mounted robotic arm can help disabled users in the performance of activities of daily living (ADLs) such as feeding and personal grooming. Early systems relied entirely on the control of the human operator, something that is difficult to accomplish by a user with motor and/or cognitive disabilities. In this dissertation, we describe research results that advance the field of assistive robotics. The overall goal is to improve the ability of the wheelchair / robotic arm assembly to help the user with the performance of the ADLs by requiring only high-level commands from the user. Let us consider an ADL involving the manipulation of an object in the user's home. This task can be naturally decomposed into two components: the movement of the wheelchair such that the manipulator can conveniently grasp the object, and the movement of the manipulator itself. In this dissertation we provide an approach for addressing the challenge of finding the position appropriate for the required manipulation. We introduce the ease-of-reach score (ERS), a metric that quantifies the preferences for the positioning of the base while taking into consideration the shape and position of obstacles and clutter in the environment. As the brute-force computation of ERS is computationally expensive, we propose a machine learning approach to estimate the ERS based on features and characteristics of the obstacles. This dissertation addresses the second component as well: the ability of the robotic arm to manipulate objects. Recent work in end-to-end learning of robotic manipulation has demonstrated that a deep learning-based controller of vision-enabled robotic arms can be taught to manipulate objects from a moderate number of demonstrations. However, current state-of-the-art systems are limited in robustness to physical and visual disturbances and do not generalize well to new objects. We describe new techniques based on task-focused attention that show significant improvement in the robustness of manipulation and performance in clutter.
- Date Issued
- 2019
- Identifier
- CFE0007771, ucf:52392
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007771
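As a rough illustration of the ERS estimation step described in the abstract above, the sketch below trains a regressor to approximate an expensive brute-force score from obstacle features. The feature set, the synthetic labels, and the choice of a random forest are assumptions for illustration; the abstract does not specify them.

```python
# Illustrative sketch only: the obstacle descriptors and regressor choice
# below are assumptions, not the dissertation's actual design.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical features of a candidate base position relative to the scene:
# [distance to target, clutter density, nearest-obstacle gap, approach angle]
X = rng.uniform(0.0, 1.0, size=(500, 4))

# Stand-in for the expensive brute-force ERS labels used for training.
y = 1.0 - 0.5 * X[:, 1] - 0.3 * np.abs(X[:, 3] - 0.5) + 0.1 * X[:, 2]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

candidate = np.array([[0.4, 0.2, 0.7, 0.5]])
print("estimated ERS:", model.predict(candidate)[0])
```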
- Title
- FALCONET: FORCE-FEEDBACK APPROACH FOR LEARNING FROM COACHING AND OBSERVATION USING NATURAL AND EXPERIENTIAL TRAINING.
- Creator
-
Stein, Gary, Gonzalez, Avelino, University of Central Florida
- Abstract / Description
-
Building an intelligent agent model from scratch is a difficult task. Thus, it would be preferable to have an automated process perform this task. There have been many manual and automatic techniques; however, each of these has various issues with obtaining, organizing, or making use of the data. Additionally, it can be difficult to get perfect data or, once the data is obtained, impractical to get a human subject to explain why some action was performed. Because of these problems, machine learning from observation emerged to produce agent models based on observational data. Learning from observation uses unobtrusive and purely observable information to construct an agent that behaves similarly to the observed human. Typically, an observational system builds an agent based only on prerecorded observations. This type of system works well with respect to agent creation, but lacks the ability to be trained and updated on-line. To overcome these deficiencies, the proposed system adds an augmented force-feedback mode of training that senses the agent's intentions haptically. Furthermore, because not all possible situations can be observed or directly trained, a third stage of learning from practice is added, in which the agent gains additional knowledge for a particular mission. These stages of learning mimic the natural way a human might learn a task: first watching the task being performed, then being coached to improve, and finally practicing to self-improve. The hypothesis is that a system that is initially trained using human recorded data (Observational), then tuned and adjusted using force-feedback (Instructional), and then allowed to perform the task in different situations (Experiential) will be better than any individual step or combination of steps.
- Date Issued
- 2009
- Identifier
- CFE0002746, ucf:48157
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002746
- Title
- A REINFORCEMENT LEARNING TECHNIQUE FOR ENHANCING HUMAN BEHAVIOR MODELS IN A CONTEXT-BASED ARCHITECTURE.
- Creator
-
Aihe, David, Gonzalez, Avelino, University of Central Florida
- Abstract / Description
-
A reinforcement-learning technique for enhancing human behavior models in a context-based learning architecture is presented. Prior to the introduction of this technique, human models built and developed in a Context-Based reasoning framework lacked learning capabilities. As such, their performance and quality of behavior were always limited by what the subject matter expert whose knowledge is modeled was able to articulate or demonstrate. Results from experiments performed show that subject matter experts are prone to making errors and at times lack information about situations that is necessary for the human models to behave appropriately and optimally in those situations. The benefits of the technique presented are twofold: 1) it shows how human models built in a context-based framework can be modified to correctly reflect the knowledge learned in a simulator; and 2) it presents a way for subject matter experts to verify and validate the knowledge they share. The results obtained from this research show that behavior models built in a context-based framework can be enhanced by learning and reflecting the constraints in the environment. From the results obtained, it was shown that after the models are enhanced, the agents performed better based on the metrics evaluated. Furthermore, after learning, the agent was shown to recognize previously unknown situations and behave appropriately in them. The overall performance and quality of behavior of the agent improved significantly.
- Date Issued
- 2008
- Identifier
- CFE0002466, ucf:47715
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002466
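The abstract above does not name the specific reinforcement-learning algorithm used, so as a generic illustration of how an agent's behavior can be refined against environment feedback, here is a minimal tabular Q-learning loop on a hypothetical toy task:

```python
# Minimal tabular Q-learning sketch on a toy chain environment; the
# dissertation's actual states, actions, and rewards are not specified in the
# abstract, so everything here is an illustrative assumption.
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    # Hypothetical dynamics: action 1 moves toward the goal, action 0 away.
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for episode in range(300):
    s = 0
    for _ in range(100):  # cap episode length
        greedy = int(np.argmax(Q[s]))
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else greedy
        s_next, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(np.round(Q, 2))  # learned action values favor moving toward the goal
```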
- Title
- CONCEPT LEARNING BY EXAMPLE DECOMPOSITION.
- Creator
-
Joshi, Sameer, Hughes, Charles, University of Central Florida
- Abstract / Description
-
For efficient understanding and prediction in natural systems, even in artificially closed ones, we usually need to consider a number of factors that may combine in simple or complex ways. Additionally, many modern scientific disciplines face increasingly large datasets from which to extract knowledge (for example, genomics). Thus to learn all but the most trivial regularities in the natural world, we rely on different ways of simplifying the learning problem. One simplifying technique that is highly pervasive in nature is to break down a large learning problem into smaller ones; to learn the smaller, more manageable problems; and then to recombine them to obtain the larger picture. It is widely accepted in machine learning that it is easier to learn several smaller decomposed concepts than a single large one. Though many machine learning methods exploit it, the process of decomposition of a learning problem has not been studied adequately from a theoretical perspective. Typically such decomposition of concepts is achieved in highly constrained environments, or aided by human experts. In this work, we investigate concept learning by example decomposition in a general probably approximately correct (PAC) setting for Boolean learning. We develop sample complexity bounds for the different steps involved in the process. We formally show that if the cost of example partitioning is kept low then it is highly advantageous to learn by example decomposition. To demonstrate the efficacy of this framework, we interpret the theory in the context of feature extraction. We discover that many vague concepts in feature extraction, starting with what exactly a feature is, can be formalized unambiguously by this new theory of feature extraction. We analyze some existing feature learning algorithms in light of this theory, and finally demonstrate its constructive nature by generating a new learning algorithm from theoretical results.
- Date Issued
- 2009
- Identifier
- CFE0002504, ucf:47694
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002504
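For orientation, the classical PAC sample-complexity bound for a consistent learner over a finite Boolean hypothesis class $\mathcal{H}$ (a standard textbook result, not the decomposition-specific bounds developed in the dissertation itself) states that

$$m \;\ge\; \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)$$

examples suffice to output, with probability at least $1-\delta$, a hypothesis with error at most $\epsilon$. Decomposition helps because each sub-concept is drawn from a smaller hypothesis class, shrinking the $\ln|\mathcal{H}|$ term, provided the cost of partitioning the examples stays low.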
- Title
- Decision-making for Vehicle Path Planning.
- Creator
-
Xu, Jun, Turgut, Damla, Zhang, Shaojie, Zhang, Wei, Hasan, Samiul, University of Central Florida
- Abstract / Description
-
This dissertation presents novel algorithms for vehicle path planning in scenarios where the environment changes. In these dynamic scenarios the path of the vehicle needs to adapt to changes in the real world. Higher-performance paths can be achieved if we are able to predict the future state of the world by learning the way it evolves from historical data. We rely on recent advances in the fields of deep learning and reinforcement learning to learn appropriate world models and path planning behaviors. There are many different practical applications that map to this model. In this dissertation we propose algorithms for two applications that are very different in domain but share important formal similarities: the scheduling of taxi services in a large city and tracking wild animals with an unmanned aerial vehicle. The first application models a centralized taxi dispatch center in a big city. It is a multivariate optimization problem for taxi time scheduling and path planning. The first goal is to balance the taxi service demand and supply ratio in the city. The second goal is to minimize passenger waiting time and taxi idle driving distance. We design different learning models that capture taxi demand and destination distribution patterns from historical taxi data. The predictions are evaluated with real-world taxi trip records. The predicted taxi demand and destinations are used to build a taxi dispatch model. The taxi assignment and rebalancing are optimized by solving a Mixed Integer Programming (MIP) problem. The second application concerns animal monitoring using an unmanned aerial vehicle (UAV) to search for and track wild animals in a large geographic area. We propose two different path planning approaches for the UAV. The first is based on the UAV controller solving a Markov decision process (MDP). The second algorithm relies on past recorded animal appearances. We design a learning model that captures animal appearance patterns and predicts the distribution of future animal appearances. We compare the proposed path planning approaches with traditional methods and evaluate them in terms of collected value of information (VoI), message delay, and percentage of events collected.
- Date Issued
- 2019
- Identifier
- CFE0007557, ucf:52606
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007557
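The full dispatch model above is a MIP whose formulation the abstract does not spell out; its assignment core, matching available taxis to predicted requests, can be pictured with a linear assignment solver. Everything below (the positions, the pure pickup-distance cost) is a simplified stand-in, not the dissertation's model:

```python
# Sketch of the assignment core only, assuming a square cost matrix of
# taxi-to-request pickup distances; the full MIP also handles rebalancing
# and demand forecasts, which are omitted here.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
taxis = rng.uniform(0, 10, size=(4, 2))     # hypothetical taxi positions (km)
requests = rng.uniform(0, 10, size=(4, 2))  # hypothetical pickup locations (km)

# cost[i, j] = Euclidean distance from taxi i to request j
cost = np.linalg.norm(taxis[:, None, :] - requests[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)  # minimizes total pickup distance
for t, r in zip(rows, cols):
    print(f"taxi {t} -> request {r} ({cost[t, r]:.2f} km)")
```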
- Title
- An Engineering Analytics Based Framework for Computational Advertising Systems.
- Creator
-
Chen, Mengmeng, Rabelo, Luis, Lee, Gene, Keathley, Heather, Rahal, Ahmad, University of Central Florida
- Abstract / Description
-
Engineering analytics is a multifaceted landscape with a diversity of analytics tools that come from emerging fields such as big data, machine learning, and traditional operations research. Industrial engineering is capable of optimizing complex processes and systems using engineering analytics elements together with traditional components such as total quality management. This dissertation has proven that industrial engineering using engineering analytics can optimize the emerging area of Computational Advertising. The key was to know the different fields very well and make the right selection. One must first understand, and be expert in, the flow of the complex Computational Advertising application and, based on the characteristics of each step, map the right field of engineering analytics or traditional industrial engineering, then build the apparatus and apply it to the respective problem in question. This dissertation consists of four research papers addressing the development of a framework to tame the complexity of computational advertising and improve its usage efficiency from an advertiser's viewpoint. This new framework and its respective systems architecture combine the use of support vector machines, Recurrent Neural Networks, Deep Learning Neural Networks, traditional neural networks, Game Theory/Auction Theory with Generative Adversarial Networks, and Web Engineering to optimize the computational advertising bidding process and achieve a higher rate of return. The system is validated with an actual case study with commercial providers such as Google AdWords and an advertiser's budget of several million dollars.
- Date Issued
- 2018
- Identifier
- CFE0007319, ucf:52118
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007319
- Title
- Human Action Localization and Recognition in Unconstrained Videos.
- Creator
-
Boyraz, Hakan, Tappen, Marshall, Foroosh, Hassan, Lin, Mingjie, Zhang, Shaojie, Sukthankar, Rahul, University of Central Florida
- Abstract / Description
-
As imaging systems become ubiquitous, the ability to recognize human actions is becoming increasingly important. Just as in the object detection and recognition literature, action recognition can be roughly divided into classification tasks, where the goal is to classify a video according to the action depicted in the video, and detection tasks, where the goal is to detect and localize a human performing a particular action. A growing literature is demonstrating the benefits of localizing discriminative sub-regions of images and videos when performing recognition tasks. In this thesis, we address the action detection and recognition problems. Action detection in video is a particularly difficult problem because actions must not only be recognized correctly, but must also be localized in the 3D spatio-temporal volume. We introduce a technique that transforms the 3D localization problem into a series of 2D detection tasks. This is accomplished by dividing the video into overlapping segments, then representing each segment with a 2D video projection. The advantage of the 2D projection is that it makes it convenient to apply the best techniques from object detection to the action detection problem. We also introduce a novel, straightforward method for searching the 2D projections to localize actions, termed Two-Point Subwindow Search (TPSS). Finally, we show how to connect the local detections in time using a chaining algorithm to identify the entire extent of the action. Our experiments show that video projection outperforms the latest results on action detection in a direct comparison. Second, we present a probabilistic model that learns to identify discriminative regions in videos from weakly supervised data, where each video clip is only assigned a label describing what action is present in the frame or clip. While our first system requires every action to be manually outlined in every frame of the video, this second system only requires that the video be given a single high-level tag. From this data, the system is able to identify discriminative regions that correspond well to the regions containing the actual actions. Our experiments on both the MSR Action Dataset II and the UCF Sports Dataset show that the localizations produced by this weakly supervised system are comparable in quality to localizations produced by systems that require each frame to be manually annotated. This system is able to detect actions in both 1) non-temporally segmented action videos and 2) recognition tasks where a single label is assigned to the clip. We also demonstrate the action recognition performance of our method on two complex datasets, i.e., HMDB and UCF101. Third, we extend our weakly supervised framework by replacing the recognition stage with a two-stage neural network and applying dropout to prevent overfitting of the parameters on the training data. The dropout technique was recently introduced to prevent overfitting of the parameters in deep neural networks, and it has been applied successfully to the object recognition problem. To our knowledge, this is the first system using dropout for the action recognition problem. We demonstrate that using dropout improves the action recognition accuracies on the HMDB and UCF101 datasets.
- Date Issued
- 2013
- Identifier
- CFE0004977, ucf:49562
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004977
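Since the third contribution above hinges on dropout, here is a minimal inverted-dropout forward pass in NumPy; the two-stage network itself is not reproduced, and the layer shapes are arbitrary assumptions:

```python
# Minimal inverted-dropout sketch for one fully connected layer; the
# dissertation applies dropout inside a two-stage neural network whose exact
# architecture is not given in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def dense_with_dropout(x, W, b, p_drop=0.5, train=True):
    h = np.maximum(0.0, x @ W + b)  # ReLU activation
    if train:
        # Randomly zero units, rescaling so the expected activation is unchanged.
        mask = (rng.random(h.shape) >= p_drop) / (1.0 - p_drop)
        h = h * mask
    return h

x = rng.normal(size=(4, 16))          # batch of 4 feature vectors
W = rng.normal(size=(16, 8)) * 0.1
b = np.zeros(8)

print(dense_with_dropout(x, W, b, train=True))   # noisy training pass
print(dense_with_dropout(x, W, b, train=False))  # deterministic test pass
```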
- Title
- BIOSIGNAL PROCESSING CHALLENGES IN EMOTION RECOGNITIONFOR ADAPTIVE LEARNING.
- Creator
-
Vartak, Aniket, Mikhael, Wasfy, University of Central Florida
- Abstract / Description
-
User-centered computer-based learning is an emerging field of interdisciplinary research. Research in diverse areas such as psychology, computer science, neuroscience, and signal processing is making contributions that promise to take this field to the next level. Learning systems built using contributions from these fields could be used in actual training and education instead of just laboratory proofs of concept. One of the important advances in this research is the detection and assessment of the cognitive and emotional state of the learner using such systems. This capability moves development beyond the use of traditional user performance metrics to include system intelligence measures that are based on current neuroscience theories. These advances are of paramount importance to the success and widespread use of learning systems that are automated and intelligent. Emotion is considered an important aspect of how learning occurs, and yet estimating it and making adaptive adjustments are not part of most learning systems. In this research we focus on one specific aspect of constructing an adaptive and intelligent learning system: the estimation of the emotion of the learner as he/she is using the automated training system. The challenge starts with the definition of emotion and its utility in human life. The next challenge is to measure the co-varying factors of the emotions in a non-invasive way, and to find consistent features from these measures that are valid across a wide population. In this research we use four physiological sensors that are non-invasive, and establish a methodology for utilizing the data from these sensors using different signal processing tools. A validated set of visual stimuli used worldwide in the research of emotion and attention, called the International Affective Picture System (IAPS), is used. A dataset is collected from the sensors in an experiment designed to elicit emotions from these validated visual stimuli. We describe a novel wavelet method to calculate a hemispheric asymmetry metric using electroencephalography data. This method is tested against the typically used power spectral density method. We show an overall improvement in accuracy in classifying specific emotions using the novel method. We also show distinctions between different discrete emotions from the autonomic nervous system activity using electrocardiography, electrodermal activity, and pupil diameter changes. Findings from different features from these sensors are used to give guidelines for using each of the individual sensors in the adaptive learning environment.
- Date Issued
- 2010
- Identifier
- CFE0003301, ucf:48503
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003301
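To make the hemispheric asymmetry metric concrete, the sketch below computes a standard log-ratio of alpha-band powers from two synthetic channels using the Welch PSD baseline mentioned in the abstract; the dissertation's contribution is a wavelet variant of this idea, which is not shown, and the channel setup here is invented:

```python
# Sketch of a hemispheric asymmetry index from alpha-band power, assuming two
# synthetic EEG channels; real recordings and the novel wavelet method differ.
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Hypothetical left/right frontal channels with different alpha (10 Hz) power.
left = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)
right = 1.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)

def alpha_power(x):
    f, pxx = welch(x, fs=fs, nperseg=512)
    band = (f >= 8) & (f <= 13)
    return np.trapz(pxx[band], f[band])  # integrate PSD over the alpha band

# Common asymmetry metric: ln(right) - ln(left) alpha power.
print("asymmetry:", np.log(alpha_power(right)) - np.log(alpha_power(left)))
```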
- Title
- Improved Multi-Task Learning Based on Local Rademacher Analysis.
- Creator
-
Yousefi, Niloofar, Mollaghasemi, Mansooreh, Rabelo, Luis, Zheng, Qipeng, Anagnostopoulos, Georgios, Xanthopoulos, Petros, Georgiopoulos, Michael, University of Central Florida
- Abstract / Description
-
Considering a single prediction task at a time is the most common paradigm in machine learning practice. This methodology, however, ignores the potentially relevant information that might be available in other related tasks in the same domain. This becomes even more critical when the lack of a sufficient amount of data for the prediction task of an individual subject leads to deteriorated generalization performance. In such cases, learning multiple related tasks together might offer better performance by allowing tasks to leverage information from each other. Multi-Task Learning (MTL) is a machine learning framework which learns multiple related tasks simultaneously to overcome the data scarcity limitations of Single Task Learning (STL), and therefore it results in improved performance. Although MTL has been actively investigated by the machine learning community, there are only a few studies examining the theoretical justification of this learning framework. The focus of previous studies is on providing learning guarantees in the form of generalization error bounds. The study of generalization bounds is considered an important problem in machine learning and, more specifically, in statistical learning theory. This importance is twofold: (1) generalization bounds provide an upper-tail confidence interval for the true risk of a learning algorithm, which cannot be precisely calculated due to its dependence on some unknown distribution P from which the data are drawn; (2) this type of bound can also be employed as a model selection tool, which leads to identifying more accurate learning models. Generalization error bounds are typically expressed in terms of the empirical risk of the learning hypothesis along with a complexity measure of that hypothesis. Although different complexity measures can be used in deriving error bounds, Rademacher complexity has received considerable attention in recent years due to its superiority to other complexity measures; in fact, Rademacher complexity can potentially lead to tighter error bounds than those obtained by other complexity measures. However, one shortcoming of the general notion of Rademacher complexity is that it provides a global complexity estimate of the learning hypothesis space, which does not take into consideration the fact that learning algorithms, by design, select functions belonging to a more favorable subset of this space and therefore yield better-performing models than the worst case. To overcome this limitation of global Rademacher complexity, a more nuanced notion, the so-called local Rademacher complexity, has been considered, which leads to sharper learning bounds and, as such, compared to its global counterpart, guarantees faster convergence rates in terms of the number of samples. Also, considering the fact that locally derived bounds are expected to be tighter than globally derived ones, they can motivate better (more accurate) model selection algorithms. While previous MTL studies provide generalization bounds based on other complexity measures, in this dissertation we prove excess risk bounds for some popular kernel-based MTL hypothesis spaces based on the Local Rademacher Complexity (LRC) of those hypotheses. We show that these local bounds have faster convergence rates than the previous Global Rademacher Complexity (GRC)-based bounds.
We then use our LRC-based MTL bounds to design a new kernel-based MTL model which enjoys strong learning guarantees. Moreover, we develop an optimization algorithm to solve our new MTL formulation. Finally, we run simulations on experimental data that compare our MTL model to some classical Multi-Task Multiple Kernel Learning (MT-MKL) models designed based on the GRC. Since local Rademacher complexities are expected to be tighter than global ones, our new model is also expected to exhibit better performance compared to the GRC-based models.
- Date Issued
- 2017
- Identifier
- CFE0006827, ucf:51778
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006827
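As a rough illustration of why local bounds are sharper (this is the generic shape found in the local Rademacher literature, not the dissertation's own MTL bound): with probability at least $1-\delta$, the excess risk of the empirical minimizer $\hat{f}$ admits a statement of the form

$$P\hat{f} - \inf_{f \in \mathcal{F}} Pf \;\le\; C_1\, r^* \;+\; \frac{C_2 \log(1/\delta)}{n},$$

where $r^*$ is the fixed point of the local Rademacher complexity of the class $\mathcal{F}$ and $C_1, C_2$ are constants. Because $r^*$ can decay as fast as $O(1/n)$ for favorable classes, such bounds improve on the $O(1/\sqrt{n})$ rates typical of global Rademacher analyses.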
- Title
- Cost-Sensitive Learning-based Methods for Imbalanced Classification Problems with Applications.
- Creator
-
Razzaghi, Talayeh, Xanthopoulos, Petros, Karwowski, Waldemar, Pazour, Jennifer, Mikusinski, Piotr, University of Central Florida
- Abstract / Description
-
Analysis and predictive modeling of massive datasets is an extremely significant problem that arises in many practical applications. The task of predictive modeling becomes even more challenging when data are imperfect or uncertain. Real data are frequently affected by outliers, uncertain labels, and uneven distribution of classes (imbalanced data). Such uncertainties create bias and make predictive modeling an even more difficult task. In the present work, we introduce a cost-sensitive learning method (CSL) to deal with the classification of imperfect data. Most traditional approaches for classification demonstrate poor performance in an environment with imperfect data. We propose the use of CSL with the Support Vector Machine, a well-known data mining algorithm. The results reveal that the proposed algorithm produces more accurate classifiers and is more robust with respect to imperfect data. Furthermore, we explore the best performance measures to tackle imperfect data, along with addressing real problems in quality control and business analytics.
- Date Issued
- 2014
- Identifier
- CFE0005542, ucf:50298
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005542
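One common way to realize cost-sensitive SVM learning is through asymmetric class weights on the misclassification penalty; the sketch below uses scikit-learn's class_weight parameter for this, with an invented 10:1 imbalance. The dissertation's exact cost structure and datasets are not reflected here:

```python
# Sketch of cost-sensitive SVM training via class weights, assuming a 10:1
# imbalance; the specific cost assignments are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical imbalanced data: 200 majority vs. 20 minority samples.
X_maj = rng.normal(loc=0.0, size=(200, 2))
X_min = rng.normal(loc=2.0, size=(20, 2))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 200 + [1] * 20)

# Penalize errors on the rare class more heavily than on the common class.
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 10.0})
clf.fit(X, y)

print("minority recall:", (clf.predict(X_min) == 1).mean())
```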
- Title
- Learning Algorithms for Fat Quantification and Tumor Characterization.
- Creator
-
Hussein, Sarfaraz, Bagci, Ulas, Shah, Mubarak, Heinrich, Mark, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools in order to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, in the first part of the dissertation, we specifically focus on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body region detection method trained with only a single example. Then a new fat quantification approach is proposed which is based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. In order to address the unavailability of a large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are proposed. We evaluate our proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas with 1018 CT and 171 MRI scans, respectively. The proposed segmentation, quantification and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
- Date Issued
- 2018
- Identifier
- CFE0007196, ucf:52288
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007196
- Title
- TOWARDS A SELF-CALIBRATING VIDEO CAMERA NETWORK FOR CONTENT ANALYSIS AND FORENSICS.
- Creator
-
Junejo, Imran, Foroosh, Hassan, University of Central Florida
- Abstract / Description
-
Due to growing security concerns, video surveillance and monitoring has received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have non-overlapping fields of view (FoV) and, if possible, to allow these cameras to move freely in 3D space. This thesis addresses the issue of how cameras in such a network can be calibrated and how the network as a whole can be calibrated, such that each camera as a unit in the network is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple camera network, and novel techniques are presented for efficient calibration of these cameras. Specifically: (i) for a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC); these new constraints are shown to be intrinsic to the IAC; (ii) for a scene where object shadows are cast on a ground plane, we track the shadows on the ground plane cast by at least two unknown stationary points, and utilize the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) a novel solution is presented for a scenario where a camera is observing pedestrians, whose uniqueness of formulation lies in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians; (iv) for a freely moving camera, a novel practical method is proposed for its self-calibration which even allows it to change its internal parameters by zooming; and (v) due to the increased application of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters. For an automatically configurable multi-camera network, having non-overlapping fields of view and possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the geometry of a dynamic network. Our method generalizes previous work, which considers restricted camera motions. Using minimal assumptions, we are able to successfully demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
- Date Issued
- 2007
- Identifier
- CFE0001743, ucf:47296
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001743
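For context on item (i), the Image of the Absolute Conic for a camera with intrinsic matrix $K$ is the standard projective-geometry object

$$\omega = K^{-\top}K^{-1},$$

and a familiar example of a constraint on it (not one of the new constraints derived in the thesis) is that the vanishing points $v_1, v_2$ of two orthogonal scene directions satisfy $v_1^{\top}\omega\, v_2 = 0$. Calibration methods accumulate enough such constraints to recover $\omega$ and then factor out $K$, e.g., by Cholesky decomposition.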
- Title
- AN ANALYSIS OF MISCLASSIFICATION RATES FOR DECISION TREES.
- Creator
-
Zhong, Mingyu, Georgiopoulos, Michael, University of Central Florida
- Abstract / Description
-
The decision tree is a well-known methodology for classification and regression. In this dissertation, we focus on the minimization of the misclassification rate for decision tree classifiers. We derive the necessary equations that provide the optimal tree prediction, the estimated risk of the tree's prediction, and the reliability of the tree's risk estimation. We carry out an extensive analysis of the application of Lidstone's law of succession for the estimation of the class probabilities. In contrast to existing research, we not only compute the expected values of the risks but also calculate the corresponding reliability of the risk (measured by standard deviations). We also provide an explicit expression of the k-norm estimation for the tree's misclassification rate that combines both the expected value and the reliability. Furthermore, our proposed and proven theorem on k-norm estimation suggests an efficient pruning algorithm that has a clear theoretical interpretation, is easily implemented, and does not require a validation set. Our experiments show that our proposed pruning algorithm quickly produces accurate trees that compare very favorably with those of two other well-known pruning algorithms, CCP of CART and EBP of C4.5. Finally, our work provides a deeper understanding of decision trees.
- Date Issued
- 2007
- Identifier
- CFE0001774, ucf:47271
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001774
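For reference, Lidstone's law of succession mentioned above smooths the class-probability estimate at a tree node: with $n_i$ examples of class $i$ out of $N$ at the node, $k$ classes, and a smoothing parameter $\lambda > 0$,

$$\hat{p}_i = \frac{n_i + \lambda}{N + k\lambda},$$

which recovers Laplace's rule of succession at $\lambda = 1$ and the raw frequency estimate as $\lambda \to 0$. The smoothing keeps probability estimates away from 0 and 1 at small leaves, which is what makes the risk and reliability calculations well behaved.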
- Title
- Predicting Students' Academic Performance with Decision Tree and Neural Network.
- Creator
-
Feng, Junshuai, Jha, Sumit Kumar, Zhang, Wei, Zhang, Shaojie, University of Central Florida
- Abstract / Description
-
Educational Data Mining (EDM) is a developing research field that involves many techniques to explore data relating to educational background. EDM can analyze and resolve educational data with computational methods to address educational questions. Similar to EDM, neural networks have been utilized in widespread and successful data mining applications. In this paper, synthetic datasets are employed, since the paper aims to explore methodologies such as decision tree classifiers and neural networks for predicting student performance in the context of EDM. First, it introduces EDM and some related works that have been accomplished previously in this field, along with their datasets and computational results. Then, it demonstrates how the synthetic student dataset is generated, analyzes some input attributes from the dataset such as gender and high school GPA, and presents some visualization results to determine which classification approaches are the most efficient. After testing the data with decision tree classifier and neural network methodologies, it assesses the effectiveness of both approaches in terms of model evaluation performance and discusses some of the most promising future work of this research.
- Date Issued
- 2019
- Identifier
- CFE0007455, ucf:52680
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007455
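A minimal version of the decision-tree pipeline described above might look like the sketch below; the synthetic generation rule, the two attributes, and the pass/fail target are invented for illustration, echoing but not reproducing the paper's dataset:

```python
# Sketch of a decision-tree baseline on a synthetic student dataset; the
# attributes (gender, high school GPA) mirror the abstract, but the labels
# and generation rule are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, size=n)      # 0/1 encoded
hs_gpa = rng.uniform(2.0, 4.0, size=n)
X = np.column_stack([gender, hs_gpa])

# Hypothetical target: success driven mostly by high school GPA plus noise.
y = (hs_gpa + rng.normal(scale=0.3, size=n) > 3.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
```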
- Title
- Enhancing Cognitive Algorithms for Optimal Performance of Adaptive Networks.
- Creator
-
Lugo-Cordero, Hector, Guha, Ratan, Wu, Annie, Stanley, Kenneth, University of Central Florida
- Abstract / Description
-
This research proposes to enhance some evolutionary algorithms in order to obtain optimal and adaptive network configurations. Due to the richness in technologies, low cost, and application usages, we consider Heterogeneous Wireless Mesh Networks. In particular, we evaluate the domains of network deployment, smart grids/homes, and intrusion detection systems. Having an adaptive network as one of the goals, we consider a robust, noise-tolerant methodology that can quickly react to changes in the environment. Furthermore, the diversity of the performance objectives considered (e.g., power, coverage, anonymity) makes the objective function non-continuous and therefore non-differentiable. For these reasons, we enhance the Particle Swarm Optimization (PSO) algorithm with elements that aid in exploring for better configurations, in order to obtain optimal and sub-optimal configurations. According to the results, the enhanced PSO promotes population diversity, leading to more unique optimal configurations for adapting to dynamic environments. The gradual complexification process demonstrated simpler optimal solutions than those obtained via trial and error without the enhancements. Configurations obtained by the modified PSO are further tuned in real time upon environment changes. Such tuning occurs with a Fuzzy Logic Controller (FLC), which models human decision making by monitoring certain events in the algorithm. Examples of such events include diversity and quality of solutions in the environment. The FLC is able to adapt the enhanced PSO to changes in the environment, causing more exploration or exploitation as needed. By adding a Probabilistic Neural Network (PNN) classifier, the enhanced PSO is again used as a filter to aid in intrusion detection classification. This approach reduces misclassifications by consulting neighbors for classification in the case of ambiguous samples. Filtering ambiguous votes via PSO shows an improvement in classification, enabling the simple classifier to perform better than commonly used classifiers.
- Date Issued
- 2018
- Identifier
- CFE0007046, ucf:52003
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007046
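For readers unfamiliar with the baseline being enhanced, here is a bare-bones PSO loop with the standard inertia/cognitive/social velocity update on a toy objective; the diversity-promoting elements, gradual complexification, and fuzzy controller described above are deliberately omitted:

```python
# Bare-bones PSO sketch on a toy sphere objective; parameter values are
# common defaults, not the dissertation's tuned settings.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return np.sum(x**2, axis=1)  # toy objective; minimum at the origin

n_particles, dim = 20, 3
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients

pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", pbest_val.min())
```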
- Title
- Analysis of Remote Tripping Command Injection Attacks in Industrial Control Systems Through Statistical and Machine Learning Methods.
- Creator
-
Timm, Charles, Caulkins, Bruce, Wiegand, Rudolf, Lathrop, Scott, University of Central Florida
- Abstract / Description
-
In the past decade, cyber operations have been increasingly utilized to further the policy goals of state-sponsored actors and to shift the balance of politics and power on a global scale. One of the ways this has been evidenced is through the exploitation of electric grids via cyber means. A remote tripping command injection attack is one type of attack that could have devastating effects on the North American power grid. To better understand these attacks and create detection axioms to both quickly identify and mitigate the effects of a remote tripping command injection attack, a dataset comprising 128 variables (primarily synchrophasor measurements) was analyzed via statistical methods and machine learning algorithms in RStudio and WEKA software, respectively. While statistical methods were not successful due to the non-linearity and complexity of the dataset, machine learning algorithms surpassed accuracy metrics established in previous research given a simplified dataset of the specified attack and normal operational data. This research allows future cybersecurity researchers to better understand remote tripping command injection attacks in comparison to normal operational conditions. Further, incorporating this analysis has the potential to increase detection and thus mitigate risk to the North American power grid in future work.
- Date Issued
- 2018
- Identifier
- CFE0007257, ucf:52193
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007257
- Title
- REMOTE SENSING WITH COMPUTATIONAL INTELLIGENCE MODELLING FOR MONITORING THE ECOSYSTEM STATE AND HYDRAULIC PATTERN IN A CONSTRUCTED WETLAND.
- Creator
-
Mohiuddin, Golam, Chang, Ni-bin, Lee, Woo Hyoung, Wanielista, Martin, University of Central Florida
- Abstract / Description
-
Monitoring a heterogeneous aquatic environment such as the Stormwater Treatment Areas (STAs) located at the northeast of the Everglades is extremely important in understanding the land processes of the constructed wetland and its capacity to remove nutrients. Direct monitoring and measurement of ecosystem evolution and changing velocities at every single part of the STA are not always feasible. An integrated remote sensing, monitoring, and modeling technique can be a state-of-the-art tool to estimate the spatial and temporal distributions of flow velocity regimes and ecological functioning in such dynamic aquatic environments. In this work, four computational intelligence models, including Extreme Learning Machine (ELM), Genetic Programming (GP), and Artificial Neural Network (ANN) models, were compared to holistically assess the flow velocity and direction as well as ecosystem states within a vegetative wetland area. First, a local sensor network was established using an Acoustic Doppler Velocimeter (ADV). Utilizing the local sensor data along with external driving-force parameters, trained ELM, GP, and ANN models were developed, calibrated, validated, and compared to select the model with the best computational capacity for velocity prediction over time. In addition, seasonal images collected by the French satellite Pleiades have been analyzed to address the seasonality effect of plant species evolution and biomass changes in the constructed wetland. The key contribution of this research is to characterize the interactions between geophysical and geochemical processes in this wetland system based on ground-based monitoring sensors and satellite images, to discover insight into hydraulic residence time, plant species variation, and water quality, and to improve the overall understanding of possible nutrient removal in this constructed wetland.
- Date Issued
- 2014
- Identifier
- CFE0005533, ucf:52864
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005533
- Title
- Approximate In-memory computing on RERAMs.
- Creator
-
Khokhar, Salman Anwar, Heinrich, Mark, Leavens, Gary, Yuksel, Murat, Bagci, Ulas, Rahman, Talat, University of Central Florida
- Abstract / Description
-
Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques and improvements in synthesis tools, architectures, and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard von Neumann architecture has separate data storage and data processing locations. Therefore, it suffers from a memory-processor communication bottleneck, which is commonly referred to as the 'memory wall'. The relatively slower progress in memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family shrink further, quantum tunneling effects are becoming more prominent. Simultaneously, chip transistor density is already so high that all transistors cannot be powered up at the same time without violating temperature constraints, a phenomenon characterized as dark silicon. Coupled with this, there is also an increase in leakage currents at smaller feature sizes, resulting in a breakdown of Dennard scaling. These challenges cannot be met without fundamental changes in current computing paradigms. One viable solution is in-memory computing, where computing and storage are performed alongside each other. A number of emerging memory fabrics such as ReRAMs, STT-RAMs, and PCM RAMs are capable of performing logic in memory. ReRAMs possess high storage density, extremely low power consumption, and a low cost of fabrication. These advantages are due to the simple nature of their basic constituent elements, which allows nano-scale fabrication. We use flow-based computing on ReRAM crossbars, which exploits the natural sneak paths in those crossbars. Another concurrent development in computing is the maturation of domains that are error-resilient while being highly data- and power-intensive. These include machine learning, pattern recognition, computer vision, image processing, and networking. This shift in the nature of computing workloads has given weight to the idea of 'approximate computing', in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
- Date Issued
- 2019
- Identifier
- CFE0007827, ucf:52817
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007827
- Title
- Reliability and Robustness Enhancement of Cooperative Vehicular Systems: A Bayesian Machine Learning Perspective.
- Creator
-
Nourkhiz Mahjoub, Hossein, Pourmohammadi Fallah, Yaser, Vosoughi, Azadeh, Yuksel, Murat, Atia, George, Eluru, Naveen, University of Central Florida
- Abstract / Description
-
Autonomous vehicles are expected to greatly transform the transportation domain in the near future. Some even envision that human drivers may be fully replaced by automated systems. It is plausible to assume that at least a significant part of the driving task will be done by automated systems in the not-too-distant future. Although we are observing a rapid advance towards this goal, which gradually pushes traditional human-based driving toward more advanced autonomy levels, the full autonomy concept still has a long way to go before being completely fulfilled and realized, due to numerous technical and societal challenges. During this long transition phase, blended driving scenarios, composed of agents with different levels of autonomy, seem to be inevitable. Therefore, it is critical to design appropriate driving systems with different levels of intelligence in order to benefit all participants. Vehicular safety systems and their more advanced successors, i.e., Cooperative Vehicular Systems (CVS), have originated from this perspective. These systems aim to enhance the overall quality and performance of the current driving situation by incorporating the most advanced available technologies, ranging from on-board sensors such as radars, LiDARs, and cameras to other promising solutions, e.g., Vehicle-to-Everything (V2X) communications. However, it is still challenging to attain the anticipated benefits of cooperative vehicular systems, due to the inherent issues and challenges of their different components, such as sensor failures in severe weather conditions or the poor performance of V2X technologies under dense communication channel loads. In this research we aim to address some of these challenges from a Bayesian machine learning perspective, by proposing several novel ideas and solutions that facilitate the realization of more robust, reliable, and agile cooperative vehicular systems. More precisely, our contribution here is twofold. On one hand, we have investigated the notion of Model-Based Communications (MBC) and demonstrated its effectiveness for V2X communication performance enhancement. This improvement is achieved due to the more intelligent communication strategy of MBC in comparison with current state-of-the-art V2X technologies. Essentially, MBC proposes a conceptual change in the nature of the information disseminated and shared over the communication channel compared to current technologies. In the MBC framework, instead of sharing raw dynamic information among the network agents, each agent shares the parameters of a stochastic forecasting model that represents its current and future behavior, and updates these parameters as needed. This model-sharing strategy enables the receivers to precisely predict the future behaviors of the transmitter even when the update frequency is very low. On the other hand, we have also proposed receiver-side solutions to enhance CVS performance and reliability and to mitigate the issues caused by imperfect communication and detection processes. The core concept of these solutions is incorporating other informative elements in the system to compensate for the information lost during the imperfect communication or detection phases. As a proof of concept, we have designed an adaptive forward collision warning (FCW) framework that considers the driver's feedback to the CVS system.
This adaptive framework mitigates the negative impact of imperfectly received or detected information on system performance by using the inherent information in these feedbacks and responses. The effectiveness and superiority of this adaptive framework over a traditional design have been demonstrated in this research.
- Date Issued
- 2019
- Identifier
- CFE0007845, ucf:52807
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007845
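The MBC idea above can be pictured with a toy one-dimensional example: the sender broadcasts the parameters of a simple constant-velocity model and rebroadcasts only when the receiver-side prediction would drift beyond a threshold. The model form, noise level, and threshold below are all assumptions for illustration, not the dissertation's design:

```python
# Toy model-based communications sketch: share (position, velocity, time)
# model parameters instead of every raw sample; rebroadcast on large error.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                     # sampling interval (s)
v_true = 1.0                 # true speed (m/s)
t = np.arange(200) * dt
pos = v_true * t + rng.normal(scale=0.02, size=t.size)  # noisy trajectory

threshold = 0.5              # rebroadcast when model error exceeds this (m)
shared = (pos[0], 0.0, 0.0)  # parameters of the last broadcast model
messages = 1

for k in range(1, t.size):
    p0, v0, t0 = shared
    predicted = p0 + v0 * (t[k] - t0)       # receiver-side extrapolation
    if abs(predicted - pos[k]) > threshold:
        v_est = (pos[k] - pos[k - 1]) / dt  # crude velocity estimate
        shared = (pos[k], v_est, t[k])      # broadcast new model parameters
        messages += 1

print(f"{messages} model updates instead of {t.size} raw samples")
```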
- Title
- A Methodology for Data-Driven Decision-Making in Last Mile Delivery Operations.
- Creator
-
Gutierrez Franco, Edgar, Rabelo, Luis, Karwowski, Waldemar, Zheng, Qipeng, Sarmiento, Alfonso, University of Central Florida
- Abstract / Description
-
Across all industries, from manufacturing to services, decision-makers must deal day to day with the outcomes of past and current decisions that affect their business. Last-mile delivery is the term used in supply chain management to describe the movement of goods from a hub to final destinations. This research proposes a methodology that supports decision making for the execution of last-mile delivery operations in a supply chain. The methodology offers diverse, hybrid, and complementary techniques (e.g., optimization, simulation, machine learning, and geographic information systems) for understanding last-mile delivery operations through data-driven decision-making. The hybrid modeling might create better warning systems and support the delivery stage in a supply chain. The methodology proposes self-learning procedures to iteratively test and close the gaps between expected and real performance. To support the process of making effective decisions promptly, optimization, simulation, and machine learning models are used to support execution processes and adjust plans according to changes in conditions, circumstances, and critical factors. This research is applied in two case studies. The first, in maritime logistics, discusses the decision process for finding the types of vessels and routes to deliver petroleum from ships to villages. The second is in city logistics, where a network of stakeholders in the city distribution process is analyzed, showing the potential benefits of this methodology, especially in metropolitan areas. Potential applications of this system will leverage growing technological trends (e.g., machine learning in supply chain management and logistics, the Internet of Things). The main research impact is the design and implementation of a methodology that can support real-time decisions and adjust last-mile operations depending on the circumstances. The methodology allows decisions to be made under conditions shaped by stakeholder behavior patterns, such as those of vehicle drivers and customers, locations, and traffic. The main benefit is the possibility of predicting future scenarios and planning strategies for the most likely situations in last-mile delivery, which will help determine and support the accurate calculation of performance indicators. The research provides a unified methodology in which different solution approaches can be used in a synchronized form, allowing researchers and other interested people to see the connections between techniques. With this research, it was possible to bring advanced technologies into routing practices and algorithms to decrease operating costs and to leverage offline and online information from connected sensors to support decisions.
- Date Issued
- 2019
- Identifier
- CFE0007645, ucf:52505
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007645