Current Search: machine learning
- Title
- A Study of Localization and Latency Reduction for Action Recognition.
- Creator
-
Masood, Syed, Tappen, Marshall, Foroosh, Hassan, Stanley, Kenneth, Sukthankar, Rahul, University of Central Florida
- Abstract / Description
-
The success of recognizing periodic actions in single-person-simple-background datasets, such as Weizmann and KTH, has created a need for more complex datasets to push the performance of action recognition systems. In this work, we create a new synthetic action dataset and use it to highlight weaknesses in current recognition systems. Experiments show that introducing background complexity to action video sequences causes a significant degradation in recognition performance. Moreover, this degradation cannot be fixed by fine-tuning system parameters or by selecting better feature points. Instead, we show that the problem lies in the spatio-temporal cuboid volume extracted from the interest point locations. Having identified the problem, we show how improved results can be achieved by simple modifications to the cuboids. For the above method, however, one requires near-perfect localization of the action within a video sequence. To achieve this objective, we present a two-stage weakly supervised probabilistic model for simultaneous localization and recognition of actions in videos. Different from previous approaches, our method is novel in that it (1) eliminates the need for manual annotations in the training procedure and (2) does not require any human detection or tracking in the classification stage. The first stage of our framework is a probabilistic action localization model which extracts the most promising sub-windows in a video sequence where an action can take place. We use a non-linear classifier in the second stage of our framework for the final classification task. We show the effectiveness of our proposed model on two well-known real-world datasets: UCF Sports and UCF11. Another application of the weakly supervised probabilistic model proposed above is in the gaming environment. An important aspect in designing interactive, action-based interfaces is reliably recognizing actions with minimal latency. High latency causes the system's feedback to lag behind and thus significantly degrades the interactivity of the user experience. With slight modification to the weakly supervised probabilistic model we proposed for action localization, we show how it can be used for reducing latency when recognizing actions in Human Computer Interaction (HCI) environments. This latency-aware learning formulation trains a logistic regression-based classifier that automatically determines distinctive canonical poses from the data and uses these to robustly recognize actions in the presence of ambiguous poses. We introduce a novel (publicly released) dataset for the purpose of our experiments. Comparisons of our method against both a Bag of Words and a Conditional Random Field (CRF) classifier show improved recognition performance for both pre-segmented and online classification tasks.
- Date Issued
- 2012
- Identifier
- CFE0004575, ucf:49210
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004575
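The latency-aware idea in the record above can be illustrated with a small, hypothetical sketch (not the dissertation's actual formulation): a scikit-learn logistic regression is trained on synthetic per-frame pose descriptors, and during streaming classification a label is emitted as soon as any frame's posterior confidence clears a threshold, trading a little accuracy for lower latency. All data, dimensions, and the 0.9 threshold are illustrative assumptions.

```python
# Hypothetical illustration: an online, confidence-gated frame classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic per-frame pose descriptors for two actions (stand-in data).
X_train = np.vstack([rng.normal(0.0, 1.0, (200, 16)), rng.normal(1.5, 1.0, (200, 16))])
y_train = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def classify_stream(frames, threshold=0.9):
    """Return (label, latency) from the first frame whose posterior is confident."""
    for t, frame in enumerate(frames):
        proba = clf.predict_proba(frame.reshape(1, -1))[0]
        if proba.max() >= threshold:          # confident ("canonical") pose reached
            return int(proba.argmax()), t     # answer early instead of waiting
    return int(proba.argmax()), len(frames) - 1   # fall back to the final frame

test_seq = rng.normal(1.5, 1.0, (30, 16))     # a short synthetic action sequence
print(classify_stream(test_seq))
```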
- Title
- An Unsupervised Consensus Control Chart Pattern Recognition Framework.
- Creator
-
Haghtalab, Siavash, Xanthopoulos, Petros, Pazour, Jennifer, Rabelo, Luis, University of Central Florida
- Abstract / Description
-
Early identification and detection of abnormal time series patterns is vital for a number of manufacturing applications. Slight shifts and alterations of time series patterns might be indicative of some anomaly in the production process, such as machinery malfunction. Usually, due to the continuous flow of data, monitoring of manufacturing processes requires automated Control Chart Pattern Recognition (CCPR) algorithms. The majority of the CCPR literature consists of supervised classification algorithms; fewer studies consider unsupervised versions of the problem. Despite the profound advantage of unsupervised methodology in requiring less manual data labeling, its use is limited by the fact that its performance is not robust enough for practical purposes. In this study we propose the use of a consensus clustering framework. Computational results show robust behavior compared to individual clustering algorithms.
- Date Issued
- 2014
- Identifier
- CFE0005178, ucf:50670
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005178
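A rough sketch of the consensus clustering idea described above, under assumed data and parameters: several base clusterings of synthetic control-chart-like series are combined into a co-association matrix, and a final partition is extracted with hierarchical clustering. This is a generic evidence-accumulation scheme, not necessarily the exact framework used in the dissertation.

```python
# A minimal consensus-clustering sketch on synthetic control-chart-like series.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
normal = rng.normal(0, 0.3, (30, 50))                                # in-control series
trend = 2 * t + rng.normal(0, 0.3, (30, 50))                         # upward-trend pattern
shift = np.where(t > 0.5, 2.0, 0.0) + rng.normal(0, 0.3, (30, 50))   # sudden-shift pattern
X = np.vstack([normal, trend, shift])

# Ensemble of base clusterings (different algorithms / initialisations).
labelings = [KMeans(n_clusters=3, n_init=10, random_state=s).fit_predict(X) for s in range(5)]
labelings.append(AgglomerativeClustering(n_clusters=3).fit_predict(X))

# Co-association matrix: how often each pair of series ends up in the same cluster.
n = X.shape[0]
coassoc = np.zeros((n, n))
for lab in labelings:
    coassoc += (lab[:, None] == lab[None, :])
coassoc /= len(labelings)

# Final consensus partition: hierarchical clustering on 1 - co-association.
dist = 1.0 - coassoc
np.fill_diagonal(dist, 0.0)
consensus = fcluster(linkage(squareform(dist, checks=False), method="average"),
                     t=3, criterion="maxclust")
print(consensus)
```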
- Title
- Model Selection via Racing.
- Creator
-
Zhang, Tiantian, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Wu, Annie, Hu, Haiyan, Nickerson, David, University of Central Florida
- Abstract / Description
-
Model Selection (MS) is an important aspect of machine learning, as necessitated by the No Free Lunch theorem. Briefly speaking, the task of MS is to identify a subset of models that are optimal in terms of pre-selected optimization criteria. There are many practical applications of MS, such as model parameter tuning, personalized recommendations, A/B testing, etc. Lately, some MS research has focused on trading off exactness of the optimization against the computational burden entailed. Recent attempts along this line include metaheuristic optimization, local search-based approaches, sequential model-based methods, portfolio algorithm approaches, and multi-armed bandits. Racing Algorithms (RAs) are an active research area in MS; they accept a reduced, but acceptable, likelihood that the models returned are indeed optimal among the given ensemble of models in exchange for a lower computational cost. All existing RAs in the literature are designed as Single-Objective Racing Algorithms (SORAs) for Single-Objective Model Selection (SOMS), where a single optimization criterion is considered for measuring the goodness of models. Moreover, they are offline algorithms in which MS occurs before model deployment and the selected models are optimal in terms of their overall average performances on a validation set of problem instances. This work aims to investigate racing approaches along two distinct directions: Extreme Model Selection (EMS) and Multi-Objective Model Selection (MOMS). In EMS, given a problem instance and a limited computational budget shared among all the candidate models, one is interested in maximizing the final solution quality. In such a setting, MS occurs during model comparison in terms of maximum performance and involves no model validation. EMS is a natural framework for many applications. However, EMS problems remain unaddressed by current racing approaches. In this work, the first RA for EMS, named Max-Race, is developed, so that it optimizes the extreme solution quality by automatically allocating the computational resources among an ensemble of problem solvers for a given problem instance. In Max-Race, a significant difference between the extreme performances of any pair of models is statistically inferred via a parametric hypothesis test under the Generalized Pareto Distribution (GPD) assumption. Experimental results have confirmed that Max-Race is capable of identifying the best extreme model with high accuracy and low computational cost. Furthermore, in machine learning, as well as in many real-world applications, a variety of MS problems are multi-objective in nature. MS which simultaneously considers multiple optimization criteria is referred to as MOMS. Under this scheme, a set of Pareto optimal models is sought that reflects a variety of compromises between optimization objectives. So far, MOMS problems have received little attention in the relevant literature. Therefore, this work also develops the first Multi-Objective Racing Algorithm (MORA) for a fixed-budget setting, namely S-Race. S-Race addresses MOMS in the proper sense of Pareto optimality. Its key decision mechanism is the non-parametric sign test, which is employed for inferring pairwise dominance relationships. Moreover, S-Race is able to strictly control the overall probability of falsely eliminating any non-dominated models at a user-specified significance level. Additionally, SPRINT-Race, the first MORA for a fixed-confidence setting, is also developed.
In SPRINT-Race, pairwise dominance and non-dominance relationships are established via the Sequential Probability Ratio Test with an Indifference zone. Moreover, the overall probability of falsely eliminating any non-dominated models or mistakenly retaining any dominated models is controlled at a prescribed significance level. Extensive experimental analysis has demonstrated the efficiency and advantages of both S-Race and SPRINT-Race in MOMS.
- Date Issued
- 2016
- Identifier
- CFE0006203, ucf:51094
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006203
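The sign-test dominance check at the core of S-Race can be sketched as follows, assuming a recent SciPy for `binomtest`. The dominance rule shown (significantly better on at least one objective, not significantly worse on any) is a plain reading of statistical Pareto dominance testing and is only a schematic of the algorithm's decision mechanism; the data, significance level, and tie handling are made up.

```python
# A hedged sketch of sign-test-based pairwise dominance checks (in the spirit of S-Race).
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(2)

def sign_test_better(a, b, alpha=0.05):
    """True if `a` is significantly better (larger) than `b` on paired observations."""
    wins = int(np.sum(a > b))
    trials = int(np.sum(a != b))          # ties are discarded, as in the classical sign test
    if trials == 0:
        return False
    return binomtest(wins, trials, p=0.5, alternative="greater").pvalue < alpha

def dominates(perf_a, perf_b, alpha=0.05):
    """perf_*: (n_instances, n_objectives) scores, larger is better.
    A dominates B if it is significantly better on some objective and
    not significantly worse on any other."""
    n_obj = perf_a.shape[1]
    better = [sign_test_better(perf_a[:, j], perf_b[:, j], alpha) for j in range(n_obj)]
    worse = [sign_test_better(perf_b[:, j], perf_a[:, j], alpha) for j in range(n_obj)]
    return any(better) and not any(worse)

# Two hypothetical models evaluated on 40 instances and 2 objectives (e.g. accuracy, speed).
model_a = rng.normal([0.85, 0.70], 0.05, (40, 2))
model_b = rng.normal([0.80, 0.70], 0.05, (40, 2))
print(dominates(model_a, model_b))   # model_b would be a candidate for elimination if True
```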
- Title
- Analytical study of computer vision-based pavement crack quantification using machine learning techniques.
- Creator
-
Mokhtari, Soroush, Yun, Hae-Bum, Nam, Boo Hyun, Catbas, Necati, Shah, Mubarak, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
Image-based techniques are a promising non-destructive approach for road pavement condition evaluation. The main objective of this study is to extract, quantify and evaluate important surface defects, such as cracks, using an automated computer vision-based system to provide a better understanding of the pavement deterioration process. To achieve this objective, an automated crack-recognition software was developed, employing a series of image processing algorithms for crack extraction, crack grouping, and crack detection. The bottom-hat morphological technique was used to remove the random background of pavement images and extract cracks selectively, based on their shapes, sizes, and intensities, using a relatively small number of user-defined parameters. A technical challenge with crack extraction algorithms, including the bottom-hat transform, is that extracted crack pixels are usually fragmented along crack paths. For de-fragmenting those crack pixels, a novel crack-grouping algorithm is proposed as an image segmentation method, the so-called MorphLink-C. Statistical validation of this method using flexible pavement images indicated that MorphLink-C not only improves crack-detection accuracy but also reduces crack-detection time. Crack characterization was performed by analyzing image features of the extracted crack image components. A comprehensive statistical analysis was conducted using filter feature subset selection (FSS) methods, including Fisher score, Gini index, information gain, ReliefF, mRMR, and FCBF, to understand the statistical characteristics of cracks in different deterioration stages. The statistical significance of crack features was ranked based on their relevancy and redundancy. The statistical method used in this study can be employed to avoid subjective crack rating based on human visual inspection. Moreover, the statistical information can be used as fundamental data to justify rehabilitation policies in pavement maintenance. Finally, the application of four classification algorithms, including Artificial Neural Network (ANN), Decision Tree (DT), k-Nearest Neighbours (kNN) and Adaptive Neuro-Fuzzy Inference System (ANFIS), is investigated for the crack detection framework. The classifiers were evaluated on the following five criteria: 1) prediction performance, 2) computation time, 3) stability of results for highly imbalanced datasets, in which the number of crack objects is significantly smaller than the number of non-crack objects, 4) stability of the classifiers' performance for pavements in different deterioration stages, and 5) interpretability of results and clarity of the procedure. Comparison results indicate the advantages of white-box classification methods for computer vision-based pavement evaluation. Although black-box methods, such as ANN, provide superior classification performance, white-box methods, such as ANFIS, provide useful information about the logic of classification and the effect of feature values on detection results. Such information can provide further insight for the image-based pavement crack detection application.
- Date Issued
- 2015
- Identifier
- CFE0005671, ucf:50186
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005671
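The bottom-hat extraction step described above maps naturally onto OpenCV's black-hat morphology. The sketch below is illustrative only: it fabricates a noisy grayscale "pavement" image with one dark crack-like line, applies the transform, and thresholds the response. The kernel size and threshold are arbitrary assumptions, not the tuned parameters from the study.

```python
# A minimal sketch of bottom-hat (black-hat) crack extraction on a synthetic pavement image.
import numpy as np
import cv2

rng = np.random.default_rng(3)
pavement = rng.normal(150, 10, (200, 200)).astype(np.uint8)        # noisy background texture
cv2.line(pavement, (20, 180), (180, 30), color=90, thickness=2)    # a dark, crack-like line

# Bottom-hat transform: morphological closing minus the original highlights thin dark structures.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
bottom_hat = cv2.morphologyEx(pavement, cv2.MORPH_BLACKHAT, kernel)

# Keep only strong responses as candidate crack pixels (the threshold is a free parameter).
_, crack_mask = cv2.threshold(bottom_hat, 30, 255, cv2.THRESH_BINARY)
print("candidate crack pixels:", int(np.count_nonzero(crack_mask)))
```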
- Title
- Modeling User Transportation Patterns Using Mobile Devices.
- Creator
-
Davami, Erfan, Sukthankar, Gita, Gonzalez, Avelino, Foroosh, Hassan, Sukthankar, Rahul, University of Central Florida
- Abstract / Description
-
Participatory sensing frameworks use humans and their computing devices as a large mobile sensing network. Dramatic accessibility and affordability have turned mobile devices (smartphones and tablet computers) into the most popular computational machines in the world, exceeding laptops. By the end of 2013, more than 1.5 billion people on earth will have a smartphone. Increased coverage and higher speeds of cellular networks have given these devices the power to constantly stream large amounts of data. Most mobile devices are equipped with advanced sensors such as GPS, cameras, and microphones. This expansion of smartphone numbers and power has created a sensing system capable of achieving tasks practically impossible for conventional sensing platforms. One of the advantages of participatory sensing platforms is their mobility, since human users are often in motion. This dissertation presents a set of techniques for modeling and predicting user transportation patterns from cell-phone and social media check-ins. To study large-scale transportation patterns, I created a mobile phone app, Kpark, for estimating parking lot occupancy on the UCF campus. Kpark aggregates individual user reports on parking space availability to produce a global picture across all the campus lots using crowdsourcing. An issue with crowdsourcing is the possibility of receiving inaccurate information from users, either through error or malicious motivations. One method of combating this problem is to model the trustworthiness of individual participants and use that information to selectively include or discard data. This dissertation presents a comprehensive study of the performance of different worker quality and data fusion models with plausible simulated user populations, as well as an evaluation of their performance on the real data obtained from a full release of the Kpark app on the UCF Orlando campus. To evaluate individual trust prediction methods, an algorithm selection portfolio was introduced to take advantage of the strengths of each method and maximize the overall prediction performance. Like many other crowdsourced applications, user incentivization is an important aspect of creating a successful crowdsourcing workflow. For this project, a form of non-monetized incentivization called gamification was used in order to create competition among users with the aim of increasing the quantity and quality of data submitted to the project. This dissertation reports on the performance of Kpark at predicting parking occupancy, increasing user app usage, and predicting worker quality.
- Date Issued
- 2015
- Identifier
- CFE0005597, ucf:50258
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005597
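As a loose illustration of trust-weighted report fusion (a deliberately simple heuristic, not one of the dissertation's worker quality or data fusion models), the sketch below fuses simulated occupancy reports for a single lot using trust weights and nudges each worker's trust toward agreement with the consensus. Every quantity here is invented for the example.

```python
# A hedged sketch of trust-weighted fusion of crowdsourced parking reports.
import numpy as np

rng = np.random.default_rng(4)
true_occupancy = 0.7                                   # unknown ground truth for one lot
reliabilities = np.array([0.95, 0.9, 0.6, 0.3])        # hidden worker quality (simulation only)

# Each of the 4 workers submits 50 noisy reports of the lot's occupancy fraction.
noise_std = (1.0 - reliabilities) * 0.5
reports = np.clip(true_occupancy + rng.normal(0.0, noise_std, (50, 4)), 0.0, 1.0)

trust = np.full(4, 0.5)                                # start every worker at neutral trust
for t in range(reports.shape[0]):
    fused = np.average(reports[t], weights=trust)      # trust-weighted consensus for this round
    error = np.abs(reports[t] - fused)
    trust = np.clip(0.9 * trust + 0.1 * (1.0 - error), 1e-3, 1.0)  # reward agreement, decay slowly

print("final fused estimate:", round(float(fused), 3))
print("learned worker trust:", np.round(trust, 2))
```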
- Title
- DATA MINING METHODS FOR MALWARE DETECTION.
- Creator
-
Siddiqui, Muazzam, Wang, Morgan, University of Central Florida
- Abstract / Description
-
This research investigates the use of data mining methods for malware (malicious programs) detection and proposes a framework as an alternative to the traditional signature detection methods. The traditional approaches that use signatures to detect malicious programs fail for new and unknown malware, where signatures are not available. We present a data mining framework to detect malicious programs. We collected, analyzed and processed several thousand malicious and clean programs to find the best features and build models that can classify a given program into a malware or a clean class. Our research is closely related to information retrieval and classification techniques and borrows a number of ideas from the field. We used a vector space model to represent the programs in our collection. Our data mining framework includes two separate and distinct classes of experiments. The first are the supervised learning experiments that used a dataset consisting of several thousand malicious and clean program samples to train, validate and test an array of classifiers. In the second class of experiments, we proposed using sequential association analysis for feature selection and automatic signature extraction. With our experiments, we were able to achieve a detection rate as high as 98.4% and a false positive rate as low as 1.9% on novel malware.
- Date Issued
- 2008
- Identifier
- CFE0002303, ucf:47870
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002303
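A toy sketch of the vector space model idea described above, with made-up byte-sequence "programs": each program is tokenized into byte n-grams, TF-IDF weighted, and classified with Naive Bayes. The feature choice and classifier here are illustrative stand-ins rather than the study's actual pipeline.

```python
# A toy sketch of a vector-space-model malware classifier over byte n-grams.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Stand-in "programs": hex byte sequences (real work would use binaries or disassembly).
malicious = ["6a 00 68 ca fe ba be e8", "e8 de ad be ef 6a 00 68", "90 90 e8 ca fe ba be cc"]
clean = ["55 8b ec 83 ec 20 53 56", "55 8b ec 51 53 56 57 8b", "83 ec 20 55 8b ec 57 8b"]
X = malicious + clean
y = [1, 1, 1, 0, 0, 0]

# Treat each byte as a token and build unigram/bigram features, TF-IDF weighted.
model = make_pipeline(TfidfVectorizer(token_pattern=r"\S+", ngram_range=(1, 2)),
                      MultinomialNB())
model.fit(X, y)

print(model.predict(["6a 00 68 de ad be ef e8"]))   # likely flagged as malicious
```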
- Title
- CONTEXTUALIZING OBSERVATIONAL DATA FOR MODELING HUMAN PERFORMANCE.
- Creator
-
Trinh, Viet, Gonzalez, Avelino, University of Central Florida
- Abstract / Description
-
This research focuses on the ability to contextualize observed human behaviors in an effort to automate the process of tactical human performance modeling through learning from observations. This effort to contextualize human behavior is aimed at minimizing the role and involvement of the knowledge engineers required in building intelligent Context-based Reasoning (CxBR) agents. More specifically, the goal is to automatically discover the context in which a human actor is situated when performing a mission, to facilitate the learning of such CxBR models. This research is derived from the contextualization problem left behind in Fernlund's research on using the Genetic Context Learner (GenCL) to model CxBR agents from observed human performance [Fernlund, 2004]. To accomplish the process of context discovery, this research proposes two contextualization algorithms: Contextualized Fuzzy ART (CFA) and Context Partitioning and Clustering (COPAC). The former is a more naive approach utilizing the well-known Fuzzy ART strategy, while the latter is a robust algorithm developed on the principles of CxBR. Using Fernlund's original five drivers, the CFA and COPAC algorithms were tested and evaluated on their ability to effectively contextualize each driver's individualized set of behaviors into well-formed and meaningful context bases, as well as on generating high-fidelity agents through integration with Fernlund's GenCL algorithm. The resultant set of agents was able to capture and generalize each driver's individualized behaviors.
- Date Issued
- 2009
- Identifier
- CFE0002563, ucf:48253
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002563
- Title
- ANALYSES OF CRASH OCCURRENCE AND INJURY SEVERITIES ON MULTI LANE HIGHWAYS USING MACHINE LEARNING ALGORITHMS.
- Creator
-
Das, Abhishek, Abdel-Aty, Mohamed A., University of Central Florida
- Abstract / Description
-
Reduction of crash occurrence at the various roadway locations (mid-block segments; signalized intersections; un-signalized intersections) and mitigation of injury severity in the event of a crash are the major concerns of transportation safety engineers. Multi-lane arterial roadways (excluding freeways and expressways) account for forty-three percent of fatal crashes in the state of Florida. Significant contributing causes fall under the broad categories of aggressive driver behavior; adverse weather and environmental conditions; and roadway geometric and traffic factors. The objective of this research was the implementation of innovative, state-of-the-art analytical methods to identify the contributing factors for crashes and injury severity. Advances in computational methods enable the use of modern statistical and machine learning algorithms. Even though most of the contributing factors are known a priori, advanced methods unearth changing trends. Heuristic evolutionary processes such as genetic programming; sophisticated data mining methods like the conditional inference tree; and mathematical treatments in the form of sensitivity analyses outline the major contributions of this research. Application of traditional statistical methods like simultaneous ordered probit models, and the identification and resolution of crash data problems, are also key aspects of this study. In order to eliminate the use of an unrealistic uniform intersection influence radius of 250 ft, heuristic rules were developed for assigning crashes to roadway segments, signalized intersections and access points using parameters such as 'site location', 'traffic control' and node information. Use of the Conditional Inference Forest instead of the Classification and Regression Tree to identify variables of significance for injury severity analysis removed the bias towards the selection of continuous variables or variables with a large number of categories. For the injury severity analysis of crashes on highways, the corridors were clustered into four optimum groups. The optimum number of clusters was found using the Partitioning Around Medoids algorithm. Concepts from evolutionary biology, such as crossover and mutation, were implemented to develop models for classification and regression analyses based on the highest hit rate and minimum error rate, respectively. A low crossover rate and a higher mutation rate reduce the chances of genetic drift and bring novelty to the model development process. Annual daily traffic; friction coefficient of pavements; on-street parking; curbed medians; surface and shoulder widths; and alcohol/drug usage are some of the significant factors that played a role in both crash occurrence and injury severities. Relative sensitivity analyses were used to identify the effect of continuous variables on the variation of crash counts. This study improved the understanding of the significant factors that could play an important role in designing better safety countermeasures on multi-lane highways, and hence enhance their safety by reducing the frequency of crashes and the severity of injuries. Educating young people about the abuses of alcohol and drugs, specifically at high schools and colleges, could potentially lead to lower driver aggression. Unilateral removal of on-street parking from high-speed arterials could result in a likely drop in the number of crashes. Widening of shoulders could give drivers greater maneuvering space.
Improving pavement conditions for a better friction coefficient will lead to improved crash recovery. Adding lanes to alleviate problems arising out of increased ADT, and restricting trucks to the slower right lanes on highways, would not only reduce crash occurrences but also result in lower injury severity levels.
- Date Issued
- 2009
- Identifier
- CFE0002928, ucf:48007
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002928
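The "optimum number of clusters" step mentioned above can be sketched with silhouette scores. Because a k-medoids (PAM) implementation is not part of scikit-learn's core, KMeans stands in below; the synthetic corridor features and the range of k are assumptions made only so the example runs.

```python
# A sketch of picking an "optimal" cluster count with silhouette scores.
# (The dissertation uses Partitioning Around Medoids; KMeans is a stand-in here.)
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)
# Hypothetical corridor features, e.g. ADT, friction, shoulder width, parking density.
corridors = np.vstack([rng.normal(c, 0.4, (25, 4)) for c in (0.0, 2.0, 4.0, 6.0)])

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(corridors)
    scores[k] = silhouette_score(corridors, labels)

best_k = max(scores, key=scores.get)
print("silhouette by k:", {k: round(v, 3) for k, v in scores.items()})
print("chosen number of clusters:", best_k)   # should recover the 4 planted groups
```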
- Title
- Context-Centric Affect Recognition From Paralinguistic Features of Speech.
- Creator
-
Marpaung, Andreas, Gonzalez, Avelino, DeMara, Ronald, Sukthankar, Gita, Wu, Annie, Lisetti, Christine, University of Central Florida
- Abstract / Description
-
As the field of affect recognition has progressed, many researchers have shifted from unimodal approaches to multimodal ones. In particular, the trend in the paralinguistic speech affect recognition domain has been to integrate other modalities such as facial expression, body posture, gait, and linguistic speech. Our work focuses on integrating contextual knowledge into paralinguistic speech affect recognition. We hypothesize that a framework to recognize affect through paralinguistic features of speech can improve its performance by integrating relevant contextual knowledge. This dissertation describes our research to integrate contextual knowledge into the paralinguistic affect recognition process from acoustic features of speech. We conceived, built, and tested a two-phase system called the Context-Based Paralinguistic Affect Recognition System (CxBPARS). The first phase of this system is context-free and uses an AdaBoost classifier that applies data on the acoustic pitch, jitter, shimmer, Harmonics-to-Noise Ratio (HNR), and Noise-to-Harmonics Ratio (NHR) to make an initial judgment about the emotion most likely exhibited by the human elicitor. The second phase then adds context modeling to improve upon the context-free classifications from phase I. CxBPARS was inspired by a human subject study performed as part of this work, where test subjects were asked to classify an elicitor's emotion strictly from paralinguistic sounds and were then provided with contextual information to improve their selections. CxBPARS was rigorously tested and found, in the worst case, to improve the success rate from the state of the art's 42% to 53%.
- Date Issued
- 2019
- Identifier
- CFE0007836, ucf:52831
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007836
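The context-free phase described above (AdaBoost over pitch, jitter, shimmer, HNR, and NHR) can be sketched with scikit-learn. The features below are random stand-ins with an artificial dependence on two columns, purely to make the example runnable; nothing here reflects the dissertation's corpus or tuning.

```python
# A minimal sketch of the context-free phase: AdaBoost over acoustic features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 300
# Synthetic stand-ins for pitch, jitter, shimmer, HNR, NHR per utterance.
X = rng.normal(0, 1, (n, 5))
# Pretend the emotion label separates partly along pitch and HNR (columns 0 and 3).
y = (0.8 * X[:, 0] - 0.6 * X[:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```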
- Title
- Batch and Online Implicit Weighted Gaussian Processes for Robust Novelty Detection.
- Creator
-
Ramirez Padron, Ruben, Gonzalez, Avelino, Georgiopoulos, Michael, Stanley, Kenneth, Mederos, Boris, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
-
This dissertation aims mainly at obtaining robust variants of Gaussian processes (GPs) that do not require using non-Gaussian likelihoods to compensate for outliers in the training data. Bayesian kernel methods, and in particular GPs, have been used to solve a variety of machine learning problems, equating or exceeding the performance of other successful techniques. That is the case for a recently proposed approach to GP-based novelty detection that uses standard GPs (i.e. GPs employing Gaussian likelihoods). However, standard GPs are sensitive to outliers in training data, and this limitation carries over to GP-based novelty detection. This limitation has typically been addressed by using robust non-Gaussian likelihoods. However, non-Gaussian likelihoods lead to analytically intractable inferences, which require approximation techniques that are typically complex and computationally expensive. Inspired by the use of weights in quasi-robust statistics, this work introduces a particular type of weight function, called here data weighers, in order to obtain robust GPs that do not require approximation techniques and retain the simplicity of standard GPs. This work proposes implicit weighted variants of batch GP, online GP, and sparse online GP (SOGP) that employ weighted Gaussian likelihoods. Mathematical expressions for calculating the posterior implicit weighted GPs are derived in this work. In our experiments, novelty detection based on our weighted batch GPs consistently and significantly outperformed standard batch GP-based novelty detection whenever the data was contaminated with outliers. Additionally, our experiments show that novelty detection based on online GPs can perform similarly to batch GP-based novelty detection. Membership scores previously introduced by other authors are also compared in our experiments.
- Date Issued
- 2015
- Identifier
- CFE0005869, ucf:50858
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005869
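A hedged numpy sketch of the weighted-Gaussian-likelihood idea: each training point's noise variance is divided by a weight, so down-weighted points (suspected outliers) are assigned more noise and pull the posterior less. The kernel, weights, and data are all assumptions made for illustration, not the dissertation's derivations.

```python
# GP regression posterior mean with per-point weights folded into the noise term.
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential kernel between two 1-D input arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(7)
x = np.linspace(0, 5, 40)
y = np.sin(x) + rng.normal(0, 0.1, 40)
y[10] += 3.0                                   # inject one gross outlier

weights = np.ones_like(y)
weights[10] = 0.05                             # pretend the data weigher flagged point 10

noise_var = 0.1 ** 2
K = rbf(x, x) + np.diag(noise_var / weights)   # weighted Gaussian likelihood: inflated noise
x_test = np.linspace(0, 5, 9)
mean = rbf(x_test, x) @ np.linalg.solve(K, y)

print(np.round(mean, 2))                       # stays close to sin(x_test) despite the outlier
```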
- Title
- Human-Robot Interaction For Multi-Robot Systems.
- Creator
-
Lewis, Bennie, Sukthankar, Gita, Hughes, Charles, Laviola II, Joseph, Boloni, Ladislau, Hancock, Peter, University of Central Florida
- Abstract / Description
-
Designing an effective human-robot interaction paradigm is particularly important for complex tasks such as multi-robot manipulation that require the human and robot to work together in a tightly coupled fashion. Although increasing the number of robots can expand the area that the robots can cover within a bounded period of time, a poor human-robot interface will ultimately compromise the performance of the team of robots. However, introducing a human operator to the team of robots does not automatically improve performance, due to the difficulty of teleoperating mobile robots with manipulators. The human operator's concentration is divided not only among multiple robots but also between controlling each robot's base and arm. This complexity substantially increases the potential neglect time, since the operator's inability to effectively attend to each robot during a critical phase of the task leads to a significant degradation in task performance. There are several proven paradigms for increasing the efficacy of human-robot interaction: 1) multimodal interfaces in which the user controls the robots using voice and gesture; 2) configurable interfaces which allow the user to create new commands by demonstrating them; 3) adaptive interfaces which reduce the operator's workload as necessary through increasing robot autonomy. This dissertation presents an evaluation of the relative benefits of different types of user interfaces for multi-robot systems composed of robots with wheeled bases and three-degree-of-freedom arms. It describes a design for constructing low-cost multi-robot manipulation systems from off-the-shelf parts. User expertise was measured along three axes (navigation, manipulation, and coordination), and participants who performed above threshold on two out of three dimensions on a calibration task were rated as expert. Our experiments reveal that the relative expertise of the user was the key determinant of the best-performing interface paradigm for that user, indicating that good user modeling is essential for designing a human-robot interaction system that will be used for an extended period of time. The contributions of the dissertation include: 1) a model for detecting operator distraction from robot motion trajectories; 2) adjustable autonomy paradigms for reducing operator workload; 3) a method for creating coordinated multi-robot behaviors from demonstrations with a single robot; 4) a user modeling approach for identifying expert-novice differences from short teleoperation traces.
- Date Issued
- 2014
- Identifier
- CFE0005198, ucf:50613
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005198
- Title
- Data-Driven Simulation Modeling of Construction and Infrastructure Operations Using Process Knowledge Discovery.
- Creator
-
Akhavian, Reza, Behzadan, Amir, Oloufa, Amr, Yun, Hae-Bum, Sukthankar, Gita, Zheng, Qipeng, University of Central Florida
- Abstract / Description
-
Within the architecture, engineering, and construction (AEC) domain, simulation modeling is mainly used to facilitate decision-making by enabling the assessment of different operational plans and resource arrangements that are otherwise difficult (if not impossible), expensive, or time-consuming to evaluate in real-world settings. The accuracy of such models directly affects their reliability to serve as a basis for important decisions such as project completion time estimation and resource allocation. Compared to other industries, this is particularly important in construction and infrastructure projects due to the high resource costs and the societal impacts of these projects. Discrete event simulation (DES) is a decision-making tool that can benefit the process of design, control, and management of construction operations. Despite recent advancements, most DES models used in construction are created during the early planning and design stage, when the lack of factual information from the project prohibits the use of realistic data in simulation modeling. The resulting models, therefore, are often built using rigid (subjective) assumptions and design parameters (e.g. precedence logic, activity durations). In all such cases, and in the absence of an inclusive methodology to incorporate real field data as the project evolves, modelers rely on information from previous projects (a.k.a. secondary data), expert judgments, and subjective assumptions to generate simulations to predict future performance. These and similar shortcomings have to a large extent limited the use of traditional DES tools to preliminary studies and long-term planning of construction projects. In the realm of business process management, process mining, as a relatively new research domain, seeks to automatically discover a process model by observing activity records and extracting information about processes. The research presented in this Ph.D. dissertation was in part inspired by the prospect of construction process mining using sensory data collected from field agents. This enabled the extraction of the operational knowledge necessary to generate and maintain the fidelity of simulation models. A preliminary study was conducted to demonstrate the feasibility and applicability of data-driven knowledge-based simulation modeling, with a focus on data collection using a wireless sensor network (WSN) and a rule-based taxonomy of activities. The resulting knowledge-based simulation models performed very well in properly predicting key performance measures of real construction systems. Next, a pervasive mobile data collection and mining technique was adopted and an activity recognition framework for construction equipment and worker tasks was developed. Data was collected using smartphone accelerometers and gyroscopes from construction entities to generate significant statistical time- and frequency-domain features. The extracted features served as the input to different types of machine learning algorithms that were applied to various construction activities. The trained predictive algorithms were then used to extract activity durations and calculate probability distributions to be fused into corresponding DES models. Results indicated that the generated data-driven knowledge-based simulation models outperform static models created based upon engineering assumptions and estimations with regard to the compatibility of performance measure outputs with reality.
- Date Issued
- 2015
- Identifier
- CFE0006023, ucf:51014
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006023
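The accelerometer feature extraction and classification step described above might look roughly like the sketch below: windows of a synthetic vibration signal are reduced to a few time- and frequency-domain features and fed to a random forest. The sampling rate, feature set, and activity labels ("idling", "loading") are hypothetical choices, not the framework's actual configuration.

```python
# A sketch of windowed time/frequency features from accelerometer traces feeding a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)

def make_trace(freq, n=128):
    """Synthetic accelerometer window: a sinusoid plus noise (assume a 50 Hz sampling rate)."""
    t = np.arange(n) / 50.0
    return np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.3, n)

def features(window):
    """A handful of time- and frequency-domain descriptors for one window."""
    spectrum = np.abs(np.fft.rfft(window))
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean(),
            spectrum.argmax(), spectrum[1:].sum()]   # dominant bin index + spectral energy

# Two hypothetical equipment activities with different vibration signatures.
X = np.array([features(make_trace(2.0)) for _ in range(80)] +
             [features(make_trace(6.0)) for _ in range(80)])
y = np.array([0] * 80 + [1] * 80)                    # 0 = idling, 1 = loading (labels assumed)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features(make_trace(6.0))]))      # the recognised activity could then be timed
```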
- Title
- Towards Improving Human-Robot Interaction For Social Robots.
- Creator
-
Khan, Saad, Boloni, Ladislau, Behal, Aman, Sukthankar, Gita, Garibay, Ivan, Fiore, Stephen, University of Central Florida
- Abstract / Description
-
Autonomous robots interacting with humans in a social setting must consider the social-cultural environment when pursuing their objectives. Thus the social robot must perceive and understand the social-cultural environment in order to be able to explain and predict the actions of its human interaction partners. This dissertation contributes to the emerging field of human-robot interaction for social robots in the following ways: 1. We used the social calculus technique based on culture-sanctioned social metrics (CSSMs) to quantify, analyze and predict the behavior of the robot, human soldiers, and the public perception in the Market Patrol peacekeeping scenario. 2. We validated the results of the Market Patrol scenario by comparing the predicted values with the judgment of a large group of human observers cognizant of the modeled culture. 3. We modeled the movement of a socially aware mobile robot in dense crowds, using the concept of a micro-conflict to represent the challenge of giving or not giving way to pedestrians. 4. We developed an approach to the robot's behavior in micro-conflicts based on the psychological observation that human opponents will use a consistent strategy. For this, the mobile robot classifies the opponent strategy reflected by the personality and social status of the person and chooses an appropriate counter-strategy that takes into account the urgency of the robot's mission. 5. We developed an alternative approach to the resolution of micro-conflicts based on imitation of the behavior of the human agent. This approach aims to make the behavior of an autonomous robot closely resemble that of a remotely operated one.
- Date Issued
- 2015
- Identifier
- CFE0005965, ucf:50819
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005965
- Title
- An investigation of physiological measures in a marketing decision task.
- Creator
-
Lerma, Nelson, Karwowski, Waldemar, Elshennawy, Ahmad, Xanthopoulos, Petros, Reinerman, Lauren, University of Central Florida
- Abstract / Description
-
The objective of the present study was to understand the use of physiological measures as an alternative to traditional market research tools, such as self-reporting measures and focus groups. For centuries, corporations and researchers have relied almost exclusively on traditional measures to gain insights into consumer behavior. Oftentimes, traditional methods have failed to accurately predict consumer demand, and this has prompted corporations to explore alternative methods that will accurately forecast future sales. One of the most promising alternative methods currently being investigated is the use of physiological measures as an indication of consumer preference. This field, also referred to as neuromarketing, has blended the principles of psychology, neuroscience, and market research to explore consumer behavior from a physiological perspective. The goal of neuromarketing is to capture consumer behavior through the use of physiological sensors. This study investigated the extent to which physiological measures were correlated with consumer preferences by utilizing five physiological sensors, which included two neurological sensors (EEG and ECG), two hemodynamic sensors (TCD and fNIR), and one optic sensor (eye-tracking). All five physiological sensors were used simultaneously to capture and record physiological changes during four distinct marketing tasks. The results showed that only one physiological sensor, EEG, was indicative of concept type and intent to purchase. The remaining four physiological sensors did not show any significant differences for concept type or intent to purchase. Furthermore, Machine Learning Algorithms (MLAs) were used to determine the extent to which MLAs (Naïve Bayes, Multilayer Perceptron, K-Nearest Neighbor, and Logistic Regression) could classify physiological responses to self-reporting measures obtained during a marketing task. The results demonstrated that the Multilayer Perceptron, on average, performed better than the other MLAs for intent to purchase and concept type. It was also evident that the models fared best with the most popular concept when categorizing the data based on intent to purchase or final selection. Overall, the four models performed well at categorizing the most popular concept and gave some indication of the extent to which physiological measures are capable of capturing intent to purchase. The research study was intended to help better understand the possibilities and limitations of physiological measures in the field of market research. Based on the results obtained, this study demonstrated that certain physiological sensors are capable of capturing emotional changes, but only when the emotional response between two concepts is significantly different. Overall, physiological measures hold great promise in the study of consumer behavior, providing great insight into the relationship between emotions and intentions in market research.
- Date Issued
- 2015
- Identifier
- CFE0006345, ucf:51563
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006345
- Title
- Integrated Data Fusion and Mining (IDFM) Technique for Monitoring Water Quality in Large and Small Lakes.
- Creator
-
Vannah, Benjamin, Chang, Ni-bin, Wanielista, Martin, Wang, Dingbao, University of Central Florida
- Abstract / Description
-
Monitoring water quality on a near-real-time basis to address water resources management and public health concerns in coupled natural systems and the built environment is by no means an easy task. Furthermore, this emerging societal challenge will continue to grow, due to the ever-increasing anthropogenic impacts upon surface waters. For example, urban growth and agricultural operations have led to an influx of nutrients into surface waters, stimulating harmful algal bloom formation, and stormwater runoff from urban areas contributes to the accumulation of total organic carbon (TOC) in surface waters. TOC in surface waters is a known precursor of disinfection byproducts in drinking water treatment, and microcystin is a potent hepatotoxin produced by the bacterium Microcystis, which can form expansive algal blooms in eutrophied lakes. Due to the ecological impacts and human health hazards posed by TOC and microcystin, it is imperative that municipal decision makers and water treatment plant operators are equipped with a rapid and economical means to track and measure these substances. Remote sensing is an emergent solution for monitoring and measuring changes to the earth's environment. This technology allows large regions anywhere on the globe to be observed on a frequent basis. This study demonstrates the prototype of a near-real-time early warning system using Integrated Data Fusion and Mining (IDFM) techniques with the aid of both multispectral (Landsat and MODIS) and hyperspectral (MERIS) satellite sensors to determine spatiotemporal distributions of TOC and microcystin. Landsat satellite imagery has high spatial resolution, but its application suffers from a long overpass interval of 16 days. On the other hand, free coarse-resolution sensors with daily revisit times, such as MODIS, are incapable of providing detailed water quality information because of their low spatial resolution. This issue can be resolved by using data or sensor fusion techniques, an instrumental part of IDFM, in which the high spatial resolution of Landsat and the high temporal resolution of MODIS imagery are fused and analyzed by a suite of regression models to optimally produce synthetic images with both high spatial and temporal resolution. The same techniques are applied to the hyperspectral sensor MERIS with the aid of the MODIS ocean color bands to generate fused images with enhanced spatial, temporal, and spectral properties. The performance of the data mining models derived using fused hyperspectral and fused multispectral data is quantified using four statistical indices. The second task compared traditional two-band models against more powerful data mining models for TOC and microcystin prediction. The use of IDFM is illustrated for monitoring microcystin concentrations in Lake Erie (a large lake), and it is applied for TOC monitoring in Harsha Lake (a small lake). Analysis confirmed that data mining methods excelled beyond two-band models at accurately estimating TOC and microcystin concentrations in lakes, and the more detailed spectral reflectance data offered by hyperspectral sensors produced a noticeable increase in accuracy for the retrieval of water quality parameters.
- Date Issued
- 2013
- Identifier
- CFE0005066, ucf:49979
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005066
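As a stand-in for the data mining stage of IDFM (regressing a water quality parameter on fused band reflectances), the sketch below fits a random forest to synthetic reflectance data with an invented TOC response. The band count, the TOC formula, and the model choice are assumptions for illustration only, not the regression suite used in the study.

```python
# A hedged sketch of the "mining" step: regressing a water-quality parameter on band reflectances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)
n = 400
bands = rng.uniform(0.0, 0.3, (n, 6))             # synthetic surface reflectance in 6 bands
# Pretend TOC responds non-linearly to a green/red band ratio plus noise (illustrative only).
toc = 5 + 20 * (bands[:, 1] / (bands[:, 2] + 0.05)) ** 0.5 + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(bands, toc, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out pixels:", round(r2_score(y_te, model.predict(X_te)), 3))
```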
- Title
- SketChart: A Pen-Based Tool for Chart Generation and Interaction.
- Creator
-
Vargas Gonzalez, Andres, Laviola II, Joseph, Foroosh, Hassan, Hua, Kien, University of Central Florida
- Abstract / Description
-
It has been shown that representing data with the right visualization increases the understanding of the qualitative and quantitative information encoded in documents. However, current tools for generating such visualizations involve the use of traditional WIMP techniques, which perhaps makes free interaction and direct manipulation of the content harder. In this thesis, we present a pen-based prototype for data visualization using 10 different types of bar-based charts. The prototype lets users sketch a chart and interact with the information once the drawing is identified. The prototype's user interface consists of an area to sketch in and touch-based elements that are displayed depending on the context and nature of the outline. Brainstorming and live presentations can benefit from the prototype due to the ability to visualize and manipulate data in real time. We also perform a short, informal user study to measure the effectiveness of the tool in recognizing sketches and user acceptance while interacting with the system. Results show SketChart's strengths and weaknesses and areas for improvement.
- Date Issued
- 2014
- Identifier
- CFE0005434, ucf:50405
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005434
- Title
- Quantifying Trust and Reputation for Defense against Adversaries in Multi-Channel Dynamic Spectrum Access Networks.
- Creator
-
Bhattacharjee, Shameek, Chatterjee, Mainak, Guha, Ratan, Zou, Changchun, Turgut, Damla, Catbas, Necati, University of Central Florida
- Abstract / Description
-
Dynamic spectrum access enabled by cognitive radio networks is envisioned to drive the next-generation wireless networks that can increase spectrum utility by opportunistically accessing unused spectrum. Due to the policy constraint that there could be no interference to the primary (licensed) users, secondary cognitive radios have to continuously sense for primary transmissions. Typically, sensing reports from multiple cognitive radios are fused, as stand-alone observations are prone to errors due to wireless channel characteristics. Such dependence on cooperative spectrum sensing is vulnerable to attacks such as Secondary Spectrum Data Falsification (SSDF) attacks, when multiple malicious or selfish radios falsify the spectrum reports. Hence, there is a need to quantify the trustworthiness of radios that share spectrum sensing reports and to devise malicious node identification and robust fusion schemes that would lead to correct inference about spectrum usage. In this work, we propose an anomaly monitoring technique that can effectively capture anomalies in the spectrum sensing reports shared by individual cognitive radios during cooperative spectrum sensing in a multi-channel distributed network. Such anomalies are used as evidence to compute the trustworthiness of a radio by its neighbours. The proposed anomaly monitoring technique works for any density of malicious nodes and for any physical environment. We propose an optimistic trust heuristic for a system with a normal risk attitude and show that it can be approximated as a beta distribution. For a more conservative system, we propose a multinomial Dirichlet distribution-based conservative trust framework, where Josang's Belief model is used to resolve any uncertainty in information that might arise during anomaly monitoring. Using a machine learning approach, we identify malicious nodes with a high degree of certainty regardless of their aggressiveness and of variations introduced by the pathloss environment. We also propose extensions to the anomaly monitoring technique that facilitate learning about strategies employed by malicious nodes and also utilize the misleading information they provide. We also devise strategies to defend against a collaborative SSDF attack that is launched by a coalition of selfish nodes. Since defense against such collaborative attacks is difficult with popularly used voting-based inference models or node-centric isolation techniques, we propose a channel-centric Bayesian inference approach that indicates how much the collective decision on a channel's occupancy inference can be trusted. Based on the measured observations over time, we estimate the parameters of the hypotheses of anomalous and non-anomalous events using multinomial Bayesian inference. We quantitatively define the trustworthiness of a channel inference as the difference between the posterior beliefs associated with anomalous and non-anomalous events. The posterior beliefs are updated based on a weighted average of the prior information on the belief itself and the recently observed data. Subsequently, we propose robust fusion models which utilize the trust values of the nodes to improve the accuracy of the cooperative spectrum sensing decisions. In particular, we propose three fusion models: (i) optimistic trust-based fusion, (ii) conservative trust-based fusion, and (iii) inversion-based fusion. The former two approaches exclude untrustworthy sensing reports from fusion, while the last approach utilizes misleading information.
Allschemes are analyzed under various attack strategies. We propose an asymmetric weightedmoving average based trust management scheme that quickly identifies on-off SSDF attacks and prevents quick trust redemption when such nodes revert back to temporal honest behavior. We also provide insights on what attack strategies are more effective from the adversaries' perspective.Through extensive simulation experiments we show that the trust models are effective in identifying malicious nodes with a high degree of certainty under variety of network and radio conditions. We show high true negative detection rates even when multiple malicious nodes launch collaborative attacks which is an improvement over existing voting based exclusion and entropy divergence techniques. We also show that we are able to improve the accuracy of fusion decisions compared to other popular fusion techniques. Trust based fusion schemes show worst case decision error rates of 5% while inversion based fusion show 4% as opposed majority voting schemes that have 18% error rate. We also show that the proposed channel centric Bayesian inference based trust model is able to distinguish between attacked and non-attacked channels for both static and dynamic collaborative attacks. We are also able to show that attacked channels have significantly lower trust values than channels that are not(-) a metric that can be used by nodes to rank the quality of inference on channels.
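As an illustration of the trust machinery this abstract describes, the sketch below pairs a Beta-distribution trust score with an asymmetric weighted-moving-average update that lets trust fall quickly on bad evidence but recover slowly, which is the kind of behaviour that penalizes on-off SSDF attackers. This is a minimal sketch under assumed parameters, not the dissertation's actual formulation; the function names, counts, and gain values are hypothetical.

```python
def beta_trust(agreements, anomalies):
    """Optimistic trust as the mean of a Beta(alpha, beta) posterior.

    Hypothetical helper: `agreements` and `anomalies` count how often a
    neighbour's shared sensing reports agreed or conflicted with local evidence.
    """
    alpha = 1.0 + agreements   # Beta(1, 1) prior plus agreeing reports
    beta = 1.0 + anomalies     # plus anomalous (conflicting) reports
    return alpha / (alpha + beta)

def asymmetric_wma_trust(prev_trust, new_trust, gain_up=0.1, gain_down=0.6):
    """Asymmetric weighted moving average: trust drops fast and redeems slowly,
    which discourages nodes that alternate between honest and dishonest phases."""
    gain = gain_up if new_trust > prev_trust else gain_down
    return (1.0 - gain) * prev_trust + gain * new_trust

# Toy usage: a node behaves honestly, attacks, then behaves honestly again.
trust = 0.5
for agree, total in [(10, 10), (2, 10), (10, 10)]:
    instant = beta_trust(agree, total - agree)
    trust = asymmetric_wma_trust(trust, instant)
    print(round(trust, 3))   # trust recovers noticeably slower than it dropped
```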
- Date Issued
- 2015
- Identifier
- CFE0005764, ucf:50081
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005764
- Title
- Semiconductor Design and Manufacturing Interplay to Achieve Higher Yields at Reduced Costs using SMART Techniques.
- Creator
-
Oberai, Ankush Bharati, Yuan, Jiann-Shiun, Abdolvand, Reza, Georgiopoulos, Michael, Sundaram, Kalpathy, Reilly, Charles, University of Central Florida
- Abstract / Description
-
Since the outset of the IC semiconductor market, there has been a gap between its design and manufacturing communities. This gap continued to grow as device geometries started to shrink and the manufacturing processes and tools became more complex. It lowered manufacturing yield, leading to higher IC costs and delays in time to market. It also impacted the performance of the ICs, affecting the overall functionality of the systems they were integrated in. In recent years, however, there have been major efforts to bridge the gap between design and manufacturing using software solutions that enable closer collaboration between the two communities. The root cause of this gap is the difference in the knowledge and skills required by the two communities. The IC design community is driven largely by microelectronics, electrical engineering, and software, whereas the IC manufacturing community is driven by materials science, mechanical engineering, physics, and robotics. Cross-training between the two is almost nonexistent and not even mandated. This gap is expected to widen with the demand for more complex designs and the miniaturization of electronic products. The growing need for MEMS, 3D NAND, and IoT devices is another driver that could widen the gap between design and manufacturing. To bridge this gap, it is critical to have closed-loop solutions between design and manufacturing. This can be achieved by SMART automation on both sides using Artificial Intelligence, Machine Learning, and Big Data algorithms. Lack of automation and predictive capabilities has made the situation even worse for yield and total turnaround times. With the growing fabless and foundry business model, bridging the gap has become even more critical. A Smart Manufacturing philosophy must be adopted to make this bridge possible. We need to understand the fab-fabless collaboration requirements and the mechanism for bringing design knowledge to the manufacturing floor for yield improvement. Additionally, the design community must be educated in manufacturing process and tool knowledge so that they can design for improved manufacturability. This study requires understanding the elements impacting manufacturing on both ends of the design and manufacturing process, as well as the process rules that need to be followed closely in the design phase. The SMART automation techniques best suited to bridging the gap need to be studied and analyzed for their effectiveness.
- Date Issued
- 2018
- Identifier
- CFE0007351, ucf:52096
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007351
- Title
- Sampling and Subspace Methods for Learning Sparse Group Structures in Computer Vision.
- Creator
-
Jaberi, Maryam, Foroosh, Hassan, Pensky, Marianna, Gong, Boqing, Qi, GuoJun, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
The unprecedented growth of data in volume and dimension has led to an increased number of computationally demanding, data-driven decision-making methods in many disciplines, such as computer vision, genomics, and finance. Research on big data aims to understand and describe trends in massive volumes of high-dimensional data. High volume and dimension are the determining factors in both the computational and time complexity of algorithms. The challenge grows when the data are formed from the union of group-structures of different dimensions embedded in a high-dimensional ambient space. To address the problem of high volume, we propose a sampling method referred to as the Sparse Withdrawal of Inliers in a First Trial (SWIFT), which determines the smallest sample size in one grab so that all group-structures are adequately represented and discovered with high probability. The key features of SWIFT are: (i) sparsity, which is independent of the population size; (ii) no prior knowledge of the distribution of the data or the number of underlying group-structures; and (iii) robustness in the presence of an overwhelming number of outliers. We report a comprehensive study of the proposed sampling method in terms of accuracy, functionality, and effectiveness in reducing the computational cost in various applications of computer vision. In the second part of this dissertation, we study dimensionality reduction for multi-structural data. We propose a probabilistic subspace clustering method that unifies soft- and hard-clustering in a single framework. This is achieved by introducing a delayed association of uncertain points to subspaces of lower dimensions based on a confidence measure. Delayed association yields higher accuracy in clustering subspaces that have ambiguities, e.g. due to intersections and high levels of outliers/noise, and hence leads to a more accurate self-representation of the underlying subspaces. Altogether, this dissertation addresses the key theoretical and practical issues of size and dimension in big data analysis.
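The single-grab sample-size idea behind SWIFT can be illustrated with a small calculation: choose the smallest n such that, with high probability, every group-structure contributes at least a minimum number of sampled points. The sketch below union-bounds per-structure binomial tails under i.i.d. sampling with replacement; it is only an illustration under those simplifying assumptions, not SWIFT's actual derivation, and the fractions, thresholds, and function name are hypothetical.

```python
import math

def min_sample_size(structure_fractions, min_hits=8, confidence=0.99):
    """Smallest single-grab sample size n such that, with probability at least
    `confidence`, every group-structure contributes at least `min_hits` points.

    `structure_fractions[i]` is the assumed fraction of the population lying on
    structure i; the remainder is treated as outliers.
    """
    def tail_below(n, p, k):
        # P[Binomial(n, p) < k]
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

    n = min_hits
    while True:
        # Union bound over structures on the event "structure under-represented".
        miss_prob = sum(tail_below(n, p, min_hits) for p in structure_fractions)
        if miss_prob <= 1.0 - confidence:
            return n
        n += 1

# e.g., three structures holding 20%, 10%, and 5% of the data, the rest outliers
print(min_sample_size([0.20, 0.10, 0.05]))
```

Note how the required n is driven by the smallest structure fraction rather than by the population size, which mirrors the sparsity property claimed in the abstract.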
- Date Issued
- 2018
- Identifier
- CFE0007017, ucf:52039
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007017
- Title
- OPTIMAL DETOUR PLANNING AROUND BLOCKED CONSTRUCTION ZONES.
- Creator
-
Jardaneh, Mutasem, Khalafallah, Ahmed, University of Central Florida
- Abstract / Description
-
Construction zones are traffic-way areas where construction, maintenance, or utility work is identified by warning signs, signals, and indicators, including those on transport devices that mark the beginning and end of the zone. Construction zones are among the most dangerous work areas, with workers facing workplace safety challenges that often lead to catastrophic injuries or fatalities. In addition, daily commuters are impacted by construction zone detours that affect their safety and daily commute time. These problems represent major challenges to construction planners, who are required to plan vehicle routes around construction zones in a way that maximizes the safety of construction workers and reduces the impact on daily commuters. This research aims at developing a framework for optimizing the planning of construction detours. The main objectives of the research are, first, to identify all the decision variables that affect the planning of construction detours and, second, to implement a model based on a shortest-path formulation to identify the optimal alternatives for construction detours. The ultimate goal of this research is to offer construction planners essential guidelines for improving construction safety and reducing construction zone hazards, as well as a robust tool for selecting and optimizing construction zone detours.
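A minimal sketch of the shortest-path formulation the abstract refers to is shown below: a standard Dijkstra search over a road network in which edges passing through blocked construction zones are excluded. The graph layout, edge weights, and function name are hypothetical and purely illustrative of the general approach, not the thesis's specific model.

```python
import heapq

def shortest_detour(graph, source, target, blocked_edges):
    """Dijkstra's shortest path over a road network, skipping closed edges.

    `graph` maps node -> list of (neighbour, travel_time); `blocked_edges` is a
    set of (node, neighbour) pairs closed by construction work zones.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry
        for v, w in graph.get(u, []):
            if (u, v) in blocked_edges:
                continue                          # detour around the closed segment
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None                               # no feasible detour exists
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Toy network: the direct edge A->D is closed, so the detour goes A->B->C->D.
roads = {"A": [("D", 2.0), ("B", 1.0)], "B": [("C", 1.0)], "C": [("D", 1.0)]}
print(shortest_detour(roads, "A", "D", {("A", "D")}))
```

In a fuller model, the edge weights could encode commuter delay and work-zone exposure rather than raw travel time, which is how safety and commute-time objectives would enter the same shortest-path machinery.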
- Date Issued
- 2011
- Identifier
- CFE0003586, ucf:48900
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003586