Current Search: Qi, GuoJun
- Title
- Reliable Spectrum Hole Detection in Spectrum-Heterogeneous Mobile Cognitive Radio Networks via Sequential Bayesian Non-parametric Clustering.
- Creator
- Zaeemzadeh, Alireza, Rahnavard, Nazanin, Vosoughi, Azadeh, Qi, GuoJun, University of Central Florida
- Abstract / Description
- In this work, the problem of detecting radio spectrum opportunities in spectrum-heterogeneous cognitive radio networks is addressed. Spectrum opportunities are the frequency channels that are underutilized by the primary licensed users. Thus, by enabling the unlicensed users to detect and utilize them, we can improve the efficiency, reliability, and flexibility of radio spectrum usage. The main objective of this work is to discover the spectrum opportunities in the time, space, and frequency domains by proposing a low-cost and practical framework. Spectrum-heterogeneous networks are networks in which different sensors experience different spectrum opportunities. Thus, the sensing data from the sensors cannot simply be combined to reach consensus and detect the spectrum opportunities. Moreover, unreliable data, caused by noise or malicious attacks, will deteriorate the performance of the decision-making process. The problem becomes even more challenging when the locations of the sensors are unknown. In this work, a probabilistic model is proposed to cluster the sensors based on their readings, without requiring any knowledge of the sensors' locations. The complexity of the model, i.e., the number of clusters, is automatically inferred from the sensing data. The processing node, also referred to as the base station or the fusion center, infers the probability distributions of cluster memberships, channel availabilities, and device reliability in an online manner. After receiving each chunk of sensing data, the probability distributions are updated without repeating the computations on previous sensing data. All the update rules are derived mathematically by employing Bayesian data analysis techniques and variational inference. Furthermore, the inferred probability distributions are employed to assign unique spectrum opportunities to each of the sensors. To avoid interference among the sensors, physically adjacent devices should not utilize the same channels. However, since the locations of the devices are not known, cluster membership information is used as a measure of adjacency. This is based on the assumption that the measurements of the devices are spatially correlated, so adjacent devices, which experience similar spectrum opportunities, belong to the same cluster. The problem is then mapped into an energy minimization problem and solved via graph cuts. The goal of the proposed graph-theory-based method is to assign each device an available channel while avoiding interference among neighboring devices. Numerical simulations illustrate the effectiveness of the proposed methods compared to existing frameworks. (An illustrative code sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006963, ucf:51639
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006963
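The toy sketch below (not from the dissertation) illustrates the clustering idea described in the abstract above: sensors are grouped from their channel-availability reports alone, the number of clusters grows out of the data, and the model is updated online as each new reading arrives. It uses a simple DP-means-style rule in plain numpy rather than the Bayesian non-parametric model and variational updates actually proposed; the function name `sequential_dp_means`, the threshold `lam`, and the synthetic two-region data are illustrative assumptions.

```python
import numpy as np

def sequential_dp_means(readings, lam=1.5):
    """Cluster sensor availability reports one at a time (DP-means style).

    readings: (n_sensors, n_channels) array of 0/1 channel-availability reports.
    lam: distance threshold above which a new cluster is opened.
    Returns per-sensor cluster labels and the cluster centroids.
    """
    centroids = [readings[0].astype(float)]
    counts = [1]
    labels = [0]
    for x in readings[1:]:
        dists = [np.linalg.norm(x - c) for c in centroids]
        k = int(np.argmin(dists))
        if dists[k] > lam:                      # far from every cluster: open a new one
            centroids.append(x.astype(float))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:                                   # online update of the chosen centroid
            counts[k] += 1
            centroids[k] += (x - centroids[k]) / counts[k]
            labels.append(k)
    return np.array(labels), np.vstack(centroids)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two spatial regions that see complementary channel availabilities
    p_a = np.array([0.95, 0.95, 0.05, 0.05, 0.95, 0.95, 0.05, 0.05])
    region_a = (rng.random((20, 8)) < p_a).astype(int)
    region_b = (rng.random((20, 8)) < 1 - p_a).astype(int)
    labels, centroids = sequential_dp_means(np.vstack([region_a, region_b]))
    print("clusters found:", len(centroids))
```

A new cluster is opened only when a report is far from every existing centroid, so the model complexity grows out of the data instead of being fixed in advance, loosely mirroring the role of the non-parametric prior described above.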
- Title
- Learning Robust Sequence Features via Dynamic Temporal Pattern Discovery.
- Creator
- Hu, Hao, Wang, Liqiang, Zhang, Shaojie, Liu, Fei, Qi, GuoJun, Zhou, Qun, University of Central Florida
- Abstract / Description
- As a major type of data, time series possess invaluable latent knowledge for describing the real world and human society. In order to improve the ability of intelligent systems to understand the world and people, it is critical to design sophisticated machine learning algorithms for extracting robust time series features from such latent knowledge. Motivated by the successful applications of deep learning in computer vision, more and more machine learning researchers are turning their attention to applying deep learning techniques to time series data. However, directly employing current deep models in most time series domains can be problematic. A major reason is that the temporal pattern types current deep models target are very limited, which cannot meet the requirement of modeling the different underlying patterns of data coming from various sources. In this study, we address this problem by designing different network structures explicitly based on specific domain knowledge, so that we can extract features via the most salient temporal patterns. More specifically, we focus on two types of temporal patterns: order patterns and frequency patterns. For order patterns, which are usually related to brain and human activities, we design a hashing-based neural network layer to globally encode the ordinal pattern information into the resultant features. It is further generalized into a specially designed Recurrent Neural Network (RNN) cell which can learn order patterns in an online fashion. On the other hand, we believe audio-related data such as music and speech can benefit from modeling frequency patterns. We do so by developing two types of RNN cells. The first type tries to directly learn long-term dependencies in the frequency domain rather than the time domain. The second one aims to dynamically filter out the "noise" frequencies based on temporal contexts. By proposing various deep models based on different domain knowledge and evaluating them on extensive time series tasks, we hope this work can provide inspiration for others and increase the community's interest in applying deep learning techniques to more time series tasks. (An illustrative code sketch follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007470, ucf:52679
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007470
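Because the abstract above centers on encoding the order (ordinal) patterns of a time series, here is a small generic sketch of an ordinal-pattern histogram feature in numpy. It is not the hashing-based network layer or the RNN cells proposed in the dissertation; `ordinal_pattern_histogram` and the window length `order` are illustrative choices.

```python
import itertools
import numpy as np

def ordinal_pattern_histogram(x, order=3):
    """Histogram of ordinal (permutation) patterns in a 1-D series.

    Each length-`order` window is mapped to the permutation that sorts it;
    the normalized histogram is a simple order-pattern feature vector that
    ignores the magnitudes of the values and keeps only their ordering.
    """
    perms = {p: i for i, p in enumerate(itertools.permutations(range(order)))}
    counts = np.zeros(len(perms))
    for t in range(len(x) - order + 1):
        window = x[t:t + order]
        pattern = tuple(np.argsort(window))
        counts[perms[pattern]] += 1
    return counts / counts.sum()

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 400)
    noisy_sine = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(400)
    print(ordinal_pattern_histogram(noisy_sine))
```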
- Title
- Sampling and Subspace Methods for Learning Sparse Group Structures in Computer Vision.
- Creator
- Jaberi, Maryam, Foroosh, Hassan, Pensky, Marianna, Gong, Boqing, Qi, GuoJun, University of Central Florida
- Abstract / Description
- The unprecedented growth of data in volume and dimension has led to an increased number of computationally demanding and data-driven decision-making methods in many disciplines, such as computer vision, genomics, and finance. Research on big data aims to understand and describe trends in massive volumes of high-dimensional data. High volume and dimension are the determining factors in both the computational and time complexity of algorithms. The challenge grows when the data are formed of the union of group-structures of different dimensions embedded in a high-dimensional ambient space. To address the problem of high volume, we propose a sampling method referred to as the Sparse Withdrawal of Inliers in a First Trial (SWIFT), which determines the smallest sample size in one grab so that all group-structures are adequately represented and discovered with high probability. The key features of SWIFT are: (i) sparsity, which is independent of the population size; (ii) no prior knowledge of the distribution of the data or the number of underlying group-structures; and (iii) robustness in the presence of an overwhelming number of outliers. We report a comprehensive study of the proposed sampling method in terms of accuracy, functionality, and effectiveness in reducing the computational cost in various applications of computer vision. In the second part of this dissertation, we study dimensionality reduction for multi-structural data. We propose a probabilistic subspace clustering method that unifies soft- and hard-clustering in a single framework. This is achieved by introducing a delayed association of uncertain points to subspaces of lower dimensions based on a confidence measure. Delayed association yields higher accuracy in clustering subspaces that have ambiguities, e.g., due to intersections and high levels of outliers/noise, and hence leads to more accurate self-representation of the underlying subspaces. Altogether, this dissertation addresses the key theoretical and practical issues of size and dimension in big data analysis. (An illustrative code sketch follows this record.)
- Date Issued
- 2018
- Identifier
- CFE0007017, ucf:52039
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007017
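SWIFT's central claim is a one-grab sample whose size is independent of the population size yet represents every group-structure with high probability. As a rough illustration under a plain binomial sampling assumption (not the dissertation's actual derivation or bounds), the helper below searches for the smallest sample size that yields at least `m` points from a group covering a fraction `w` of the data with probability at least `p`.

```python
from math import comb

def min_sample_size(w, m, p, n_max=100000):
    """Smallest n such that a group covering a fraction `w` of the data
    contributes at least `m` of the n sampled points with probability >= `p`,
    under a simple binomial (with-replacement) sampling model."""
    for n in range(m, n_max):
        # P(X >= m) with X ~ Binomial(n, w)
        tail = 1.0 - sum(comb(n, k) * w**k * (1 - w)**(n - k) for k in range(m))
        if tail >= p:
            return n
    raise ValueError("n_max too small for the requested guarantee")

if __name__ == "__main__":
    # e.g. smallest group is 5% of the data; want >= 20 of its points w.p. 0.99
    print(min_sample_size(w=0.05, m=20, p=0.99))
```

The answer depends only on `w`, `m`, and `p`, not on how many points the full dataset contains, which is the flavor of the sparsity property described in the abstract.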
- Title
- Bridging the Gap between Application and Solid-State-Drives.
- Creator
- Zhou, Jian, Wang, Jun, Lin, Mingjie, Fan, Deliang, Ewetz, Rickard, Qi, GuoJun, University of Central Florida
- Abstract / Description
- Data storage is one of the important and often critical parts of the computing system in terms of performance, cost, reliability, and energy. Numerous new memory technologies, such as NAND flash, phase change memory (PCM), magnetic RAM (STT-RAM), and Memristor, have emerged recently. Many of them have already entered production systems. Traditional storage optimization and caching algorithms are far from optimal because storage I/Os do not show simple locality. To provide optimal storage, we need accurate predictions of I/O behavior. However, workloads are increasingly dynamic and diverse, making long- and short-term I/O prediction challenging. Because of the evolution of storage technologies and the increasing diversity of workloads, storage software is becoming more and more complex. For example, a Flash Translation Layer (FTL) is added for NAND-flash based Solid State Disks (NAND-SSDs). However, it introduces overhead such as address translation delay and garbage collection costs. Many recent studies aim to address this overhead. Unfortunately, there is no one-size-fits-all solution due to the variety of workloads. Despite rapid evolution in storage technologies, the increasing heterogeneity and diversity in machines and workloads, coupled with the continued data explosion, exacerbate the gap between computing and storage speeds. In this dissertation, we improve data storage performance with both top-down and bottom-up approaches. First, we investigate exposing storage-level parallelism so that applications can avoid I/O contention and workload skew when scheduling jobs. Second, we study how architecture-aware task scheduling can improve application performance when PCM-based NVRAM is equipped. Third, we develop an I/O-correlation-aware flash translation layer for NAND-flash based Solid State Disks. Fourth, we build a DRAM-based correlation-aware FTL emulator and study its performance with various filesystems. (An illustrative code sketch follows this record.)
- Date Issued
- 2018
- Identifier
- CFE0007273, ucf:52188
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007273
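Since the abstract above repeatedly refers to the Flash Translation Layer and the garbage-collection overhead it introduces, the toy class below sketches the basic page-mapping idea only: writes go out of place, the logical-to-physical map is updated, and the superseded physical page becomes invalid until garbage collection reclaims it. It is not the correlation-aware FTL or the DRAM-based emulator developed in the dissertation, and the class and method names are invented for the example.

```python
class PageMapFTL:
    """Toy page-level Flash Translation Layer: out-of-place writes with a
    logical-to-physical map, plus a crude record of invalidated pages."""

    def __init__(self, n_pages):
        self.free = list(range(n_pages))   # physical pages not yet written
        self.l2p = {}                      # logical page number -> physical page number
        self.invalid = set()               # stale physical pages awaiting GC

    def write(self, lpn):
        if not self.free:
            raise RuntimeError("no free pages: garbage collection needed")
        if lpn in self.l2p:                # old copy becomes invalid (GC must erase it later)
            self.invalid.add(self.l2p[lpn])
        ppn = self.free.pop()
        self.l2p[lpn] = ppn
        return ppn

    def read(self, lpn):
        return self.l2p[lpn]

if __name__ == "__main__":
    ftl = PageMapFTL(8)
    for lpn in [0, 1, 0, 2, 0]:            # rewrites of LPN 0 invalidate earlier pages
        ftl.write(lpn)
    print("map:", ftl.l2p, "invalid pages:", sorted(ftl.invalid))
```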
- Title
- Spatiotemporal Graphs for Object Segmentation and Human Pose Estimation in Videos.
- Creator
- Zhang, Dong, Shah, Mubarak, Qi, GuoJun, Bagci, Ulas, Yun, Hae-Bum, University of Central Florida
- Abstract / Description
- Images and videos can be naturally represented by graphs, with spatial graphs for images and spatiotemporal graphs for videos. However, different applications usually call for different graph formulations, and the algorithms for each formulation have different complexities. Therefore, wisely formulating the problem to ensure an accurate and efficient solution is one of the core issues in computer vision research. We explore three problems in this domain to demonstrate how to formulate them in terms of spatiotemporal graphs and obtain good and efficient solutions. The first problem we explore is video object segmentation. The goal is to segment the primary moving objects in videos. This problem is important for many applications, such as content-based video retrieval, video summarization, activity understanding, and targeted content replacement. In our framework, we use object proposals, which are object-like regions obtained from low-level visual cues. Each object proposal has an object-ness score associated with it, which indicates how likely the proposal corresponds to an object. The problem is formulated as a directed acyclic graph, in which nodes represent the object proposals and edges represent the spatiotemporal relationships between nodes. A dynamic programming solution is employed to select one object proposal from each video frame while ensuring their consistency throughout the video. Gaussian mixture models (GMMs) are used for modeling the background and foreground, and Markov Random Fields (MRFs) are employed to smooth the pixel-level segmentation. In the above spatiotemporal graph formulation, we consider object segmentation in only a single video. Next, we consider multiple videos and model the video co-segmentation problem as a spatiotemporal graph. The goal here is to simultaneously segment the moving objects from multiple videos and assign common objects the same labels. The problem is formulated as a regulated maximum clique problem using object proposals. The object proposals are tracked in adjacent frames to generate a pool of candidate tracklets. Then an undirected graph is built with nodes corresponding to the tracklets from all the videos and edges representing the similarities between the tracklets. A modified Bron-Kerbosch algorithm is applied to the graph in order to select the prominent objects contained in these videos, and hence relate the segmentation of each object across the videos. In online and surveillance videos, the most important object class is the human. In contrast to generic video object segmentation and co-segmentation, specific knowledge about humans, defined by a pose (i.e., the human skeleton), can be employed to help segment and track people in videos. We formulate the problem of human pose estimation in videos using a spatiotemporal graph. In this formulation, the nodes represent different body parts in the video frames and edges represent the spatiotemporal relationships between body parts in adjacent frames. The graph is carefully designed to ensure an exact and efficient solution. The overall objective of the new formulation is to remove the simple cycles from the traditional graph-based formulations. Dynamic programming is employed at different stages of the method to select the best tracklets and human pose configurations. (An illustrative code sketch follows this record.)
- Date Issued
- 2016
- Identifier
- CFE0006429, ucf:51488
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006429
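The video object segmentation formulation above selects one object proposal per frame over a directed acyclic graph and solves it with dynamic programming. The sketch below shows that generic Viterbi-style DP over per-frame proposal scores (unary terms) and frame-to-frame consistency scores (pairwise terms); the actual object-ness and spatiotemporal consistency measures from the dissertation are not reproduced, and the random demo inputs are placeholders.

```python
import numpy as np

def best_proposal_path(unary, pairwise):
    """Viterbi-style DP: pick one proposal per frame maximizing the sum of
    per-proposal scores plus frame-to-frame consistency scores.

    unary:    list of length T; unary[t] is an (n_t,) array of proposal scores
    pairwise: list of length T-1; pairwise[t] is an (n_t, n_{t+1}) consistency matrix
    Returns the selected proposal index for each frame.
    """
    T = len(unary)
    score = [unary[0].astype(float)]
    back = []
    for t in range(1, T):
        # candidate score of (previous proposal i, current proposal j)
        cand = score[-1][:, None] + pairwise[t - 1] + unary[t][None, :]
        back.append(np.argmax(cand, axis=0))   # best predecessor per current proposal
        score.append(np.max(cand, axis=0))
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 2, -1, -1):             # backtrack through the stored choices
        path.append(int(back[t][path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    unary = [rng.random(4) for _ in range(5)]
    pairwise = [rng.random((4, 4)) for _ in range(4)]
    print(best_proposal_path(unary, pairwise))
```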
- Title
- Weakly Labeled Action Recognition and Detection.
- Creator
- Sultani, Waqas, Shah, Mubarak, Bagci, Ulas, Qi, GuoJun, Yun, Hae-Bum, University of Central Florida
- Abstract / Description
- Research in human action recognition strives to develop increasingly generalized methods that are robust to intra-class variability and inter-class ambiguity. Recent years have seen tremendous strides in improving recognition accuracy on ever larger and more complex benchmark datasets, comprising realistic "in the wild" action videos. Unfortunately, the all-encompassing, dense, global representations that bring about such improvements often benefit from inherent characteristics, specific to datasets and classes, that do not necessarily reflect knowledge about the entity to be recognized. This results in specific models that perform well within datasets but generalize poorly. Furthermore, training of supervised action recognition and detection methods needs several precise spatio-temporal manual annotations to achieve good recognition and detection accuracy. For instance, current deep learning architectures require millions of accurately annotated videos to learn robust action classifiers. However, these annotations are quite difficult to obtain. In the first part of this dissertation, we explore the reasons for poor classifier performance when tested on novel datasets, and quantify the effect of scene backgrounds on action representations and recognition. We attempt to address the problem of recognizing human actions while training and testing on distinct datasets, when test videos are neither labeled nor available during training. In this scenario, learning a joint vocabulary or applying domain transfer techniques is not applicable. We perform different types of partitioning of the GIST feature space for several datasets and compute measures of background scene complexity, as well as the extent to which scenes are helpful in action classification. We then propose a new process to obtain a measure of confidence in each pixel of the video being a foreground region, using motion, appearance, and saliency together in a 3D Markov Random Field (MRF) based framework. We also propose multiple ways to exploit the foreground confidence: to improve the bag-of-words vocabulary, the histogram representation of a video, and a novel histogram-decomposition-based representation and kernel. The above-mentioned work provides the probability of each pixel belonging to the actor; however, it does not give the precise spatio-temporal location of the actor. Furthermore, the above framework would require precise spatio-temporal manual annotations to train an action detector. However, manual annotations in videos are laborious, require several annotators, and contain human biases. Therefore, in the second part of this dissertation, we propose a weakly labeled approach to automatically obtain spatio-temporal annotations of actors in action videos. We first obtain a large number of action proposals in each video. To capture a few of the most representative action proposals in each video and avoid processing thousands of them, we rank them using optical flow and saliency in a 3D-MRF based framework and select a few proposals using a MAP-based proposal subset selection method. We demonstrate that this ranking preserves the high-quality action proposals. Several such proposals are generated for each video of the same action. Our next challenge is to iteratively select one proposal from each video so that all proposals are globally consistent. We formulate this as a Generalized Maximum Clique Problem (GMCP) using shape, global, and fine-grained similarity of proposals across the videos. The output of our method is the most action-representative proposals from each video. Using our method, multiple instances of the same action in a video can also be annotated. Moreover, action detection experiments using annotations obtained by our method and several baselines demonstrate the superiority of our approach. The above-mentioned annotation method uses multiple videos of the same action. Therefore, in the third part of this dissertation, we tackle the problem of spatio-temporal action localization in a video, without assuming the availability of multiple videos or any prior annotations. The action is localized by employing images downloaded from the Internet using the action label. Given web images, we first dampen image noise using a random walk and avoid distracting backgrounds within images using image action proposals. Then, given a video, we generate multiple spatio-temporal action proposals. We suppress camera- and background-generated proposals by exploiting optical flow gradients within proposals. To obtain the most action-representative proposals, we propose to reconstruct the action proposals in the video by leveraging the action proposals in images. Moreover, we preserve the temporal smoothness of the video and reconstruct all proposal bounding boxes jointly, using constraints that push the coefficients for each bounding box toward a common consensus, thus enforcing coefficient similarity across multiple frames. We solve this optimization problem using a variant of the two-metric projection algorithm. Finally, the video proposal that has the lowest reconstruction cost and is motion salient is used to localize the action. Our method is not only applicable to trimmed videos, but can also be used for action localization in untrimmed videos, which is a very challenging problem. Finally, in the last part of this dissertation, we propose a novel approach to generate a few properly ranked action proposals from a large number of noisy proposals. The proposed approach begins with dividing each proposal into sub-proposals. We assume that the quality of a proposal remains the same within each sub-proposal. We then employ a graph optimization method to recombine the sub-proposals of all action proposals in a single video in order to optimally build new action proposals and rank them by the combined node and edge scores. For an untrimmed video, we first divide the video into shots and then build the above-mentioned graph within each shot. Our method generates a few ranked proposals that can be better than all of the existing underlying proposals. Our experimental results validate that properly ranked action proposals can significantly boost action detection results. Our extensive experimental results on different challenging and realistic action datasets, comparisons with several competitive baselines, and detailed analysis of each step of the proposed methods validate the proposed ideas and frameworks. (An illustrative code sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006801, ucf:51809
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006801
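A recurring step in the abstract above is selecting one action proposal per video so that the selections are globally consistent across videos, which the dissertation poses as a Generalized Maximum Clique Problem. The snippet below is a much simpler coordinate-ascent stand-in for that selection step, scoring consistency by dot products between proposal features; it is not the GMCP formulation or solver used in the work, and the synthetic "common action" data is purely illustrative.

```python
import numpy as np

def select_consistent_proposals(proposal_feats, n_iters=20, seed=0):
    """Greedy coordinate ascent: choose one proposal per video so that the
    chosen proposals are mutually similar (sum of pairwise dot products).

    proposal_feats: list of (n_i, d) arrays, one per video.
    Returns the chosen proposal index for each video.
    """
    rng = np.random.default_rng(seed)
    choice = [int(rng.integers(len(f))) for f in proposal_feats]
    for _ in range(n_iters):
        changed = False
        for i, feats in enumerate(proposal_feats):
            # sum of the feature vectors currently chosen in the other videos
            others = sum(proposal_feats[j][choice[j]]
                         for j in range(len(proposal_feats)) if j != i)
            best = int(np.argmax(feats @ others))
            if best != choice[i]:
                choice[i], changed = best, True
        if not changed:      # converged: no video changes its pick
            break
    return choice

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    common = rng.random(16) + 1.0                 # shared "true action" pattern
    videos = [np.vstack([common + 0.1 * rng.standard_normal(16) for _ in range(5)]
                        + [rng.random(16) for _ in range(5)])
              for _ in range(4)]
    print(select_consistent_proposals(videos))    # expected picks in 0..4 (the common ones)
```

Each pass re-picks, for one video at a time, the proposal that best agrees with the current picks in the other videos, until no choice changes.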
- Title
- Visual Saliency Detection and Semantic Segmentation.
- Creator
- Souly, Nasim, Shah, Mubarak, Bagci, Ulas, Qi, GuoJun, Pensky, Marianna, University of Central Florida
- Abstract / Description
- Visual saliency is the ability to select the most relevant data in the scene and reduce the amount of data that needs to be processed. We propose a novel unsupervised approach to detect visual saliency in videos. For this, we employ a hierarchical segmentation technique to obtain supervoxels of a video and, simultaneously, build a dictionary from cuboids of the video. Then we create a feature matrix from the coefficients of the dictionary elements. Next, we decompose this matrix into sparse and redundant parts and obtain the salient regions using group lasso. Our experiments provide promising results in terms of predicting eye movement. Moreover, we apply our method to the action recognition task and achieve better results. While saliency detection only highlights important regions, in semantic segmentation the aim is to assign a semantic label to each pixel in the image. Even though semantic segmentation can be achieved by simply applying classifiers to each pixel or region, the results may not be desirable since general context information is not considered. To address this issue, we propose two supervised methods: first, an approach to discover interactions between labels and regions using a sparse estimation of the precision matrix obtained by graphical lasso; second, a knowledge-based method to incorporate dependencies among regions in the image during inference. High-level knowledge rules, such as co-occurrence, are extracted from training data and transformed into constraints in an Integer Programming formulation. A difficulty in most supervised semantic segmentation approaches is the lack of sufficient training data. To address this, a semi-supervised learning approach is presented that exploits the plentiful amount of available unlabeled images, as well as synthetic images generated via Generative Adversarial Networks (GANs). Furthermore, an extension of the proposed model to use additional weakly labeled data is proposed. We demonstrate our approaches on three challenging benchmark datasets. (An illustrative code sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006918, ucf:51694
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006918
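The saliency pipeline above obtains salient regions with a group lasso over dictionary coefficients. As a small generic building block rather than the full sparse/redundant decomposition, the function below applies the group soft-thresholding proximal operator that group-lasso solvers rely on: coefficient groups with little energy (candidate non-salient regions, under this reading) are zeroed out entirely. The grouping and the `lam` value in the demo are made up.

```python
import numpy as np

def group_soft_threshold(coeffs, groups, lam):
    """Proximal operator of the group-lasso penalty: shrinks each group of
    coefficients toward zero and removes weak groups entirely.

    coeffs: (n_features,) coefficient vector
    groups: list of index arrays, one per group (e.g. one per region)
    lam:    shrinkage strength
    """
    out = coeffs.astype(float).copy()
    for g in groups:
        norm = np.linalg.norm(out[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[g] *= scale
    return out

if __name__ == "__main__":
    coeffs = np.array([3.0, 2.5, 0.2, -0.1, 0.05, 4.0])
    groups = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]
    # the low-energy middle group is zeroed out; strong groups are only shrunk
    print(group_soft_threshold(coeffs, groups, lam=1.0))
```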
- Title
- Improving Efficiency in Deep Learning for Large Scale Visual Recognition.
- Creator
- Liu, Baoyuan, Foroosh, Hassan, Qi, GuoJun, Welch, Gregory, Sukthankar, Rahul, Pensky, Marianna, University of Central Florida
- Abstract / Description
- The recently emerging large-scale visual recognition methods, and in particular deep Convolutional Neural Networks (CNNs), promise to revolutionize many computer-vision-based artificial intelligence applications, such as autonomous driving and online image retrieval systems. One of the main challenges in large-scale visual recognition is the complexity of the corresponding algorithms. This is further exacerbated by the fact that in most real-world scenarios they need to run in real time and on platforms that have limited computational resources. This dissertation focuses on improving the efficiency of such large-scale visual recognition algorithms from several perspectives. First, to reduce the complexity of large-scale classification to sub-linear in the number of classes, a probabilistic label tree framework is proposed. A test sample is classified by traversing the label tree from the root node. Each node in the tree is associated with a probabilistic estimation of all the labels. The tree is learned recursively with iterative maximum likelihood optimization. Compared to the hard label partitioning proposed previously, the probabilistic framework performs classification more accurately with similar efficiency. Second, we explore the redundancy of parameters in Convolutional Neural Networks and employ sparse decomposition to significantly reduce both the number of parameters and the computational complexity. Both inter-channel and inner-channel redundancy are exploited to achieve more than 90% sparsity with approximately a 1% drop in classification accuracy. We also propose a CPU-based efficient sparse matrix multiplication algorithm to reduce the actual running time of CNN models with sparse convolutional kernels. Third, we propose a multi-stage framework based on CNNs to achieve better efficiency than a single traditional CNN model. With a combination of a cascade model and the label tree framework, the proposed method divides the input images in both the image space and the label space, and processes each image with the CNN models that are most suitable and efficient. The average complexity of the framework is significantly reduced, while the overall accuracy remains the same as in the single complex model. (An illustrative code sketch follows this record.)
- Date Issued
- 2016
- Identifier
- CFE0006472, ucf:51436
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006472
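The first contribution in the abstract above is a probabilistic label tree that classifies a sample by traversing from the root, making the cost sub-linear in the number of classes. The sketch below shows only that traversal idea, with toy linear routers and hard argmax decisions at each node; the probabilistic label estimates and the recursive maximum-likelihood learning described in the dissertation are not implemented here, and all names and shapes are illustrative.

```python
import numpy as np

class LabelTreeNode:
    """Node in a toy label tree: internal nodes route a sample to one child
    via a linear scorer; leaves carry a class label."""
    def __init__(self, label=None, children=None, W=None):
        self.label = label          # set only for leaves
        self.children = children    # list of LabelTreeNode for internal nodes
        self.W = W                  # (n_children, d) routing weights

def classify(node, x):
    """Descend from the root, scoring only the children along the path,
    so the cost grows with tree depth rather than the number of classes."""
    while node.label is None:
        scores = node.W @ x
        node = node.children[int(np.argmax(scores))]
    return node.label

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    d = 8
    leaves = [LabelTreeNode(label=c) for c in range(4)]
    left = LabelTreeNode(children=leaves[:2], W=rng.standard_normal((2, d)))
    right = LabelTreeNode(children=leaves[2:], W=rng.standard_normal((2, d)))
    root = LabelTreeNode(children=[left, right], W=rng.standard_normal((2, d)))
    print("predicted class:", classify(root, rng.standard_normal(d)))
```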
- Title
- Confluence of Vision and Natural Language Processing for Cross-media Semantic Relations Extraction.
- Creator
- Tariq, Amara, Foroosh, Hassan, Qi, GuoJun, Gonzalez, Avelino, Pensky, Marianna, University of Central Florida
- Abstract / Description
- In this dissertation, we focus on extracting and understanding semantically meaningful relationships between data items of various modalities, especially relations between images and natural language. We explore ideas and techniques to integrate such cross-media semantic relations for machine understanding of large heterogeneous datasets, made available through the expansion of the World Wide Web. The datasets collected from social media websites, news media outlets, and blogging platforms usually contain multiple modalities of data. Intelligent systems are needed to automatically make sense of these datasets and present them in such a way that humans can find the relevant pieces of information or get a summary of the available material. Such systems have to process multiple modalities of data, such as images, text, linguistic features, and structured data, in reference to each other. For example, image and video search and retrieval engines are required to understand the relations between visual and textual data so that they can provide relevant answers, in the form of images and videos, to users' queries presented in the form of text. We emphasize the automatic extraction of semantic topics or concepts from the data available in any form, such as images, free-flowing text, or metadata. These semantic concepts/topics become the basis of semantic relations across heterogeneous data types, e.g., visual and textual data. A classic problem involving image-text relations is the automatic generation of textual descriptions of images. This problem is the main focus of our work. In many cases, a large amount of text is associated with images. Deep exploration of the linguistic features of such text is required to fully utilize the semantic information encoded in it. A news dataset involving images and news articles is an example of this scenario. We devise frameworks for automatic news image description generation based on the semantic relations of images, as well as semantic understanding of the linguistic features of the news articles. (An illustrative code sketch follows this record.)
- Date Issued
- 2016
- Identifier
- CFE0006507, ucf:51401
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006507
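The abstract above grounds image-text relations in shared semantic topics. As a heavily simplified illustration, not the dissertation's news-image description framework, the snippet below scores candidate sentences by the cosine similarity between their topic distributions and the topics inferred for an image, and returns the best match; the topic axes and example sentences are invented for the demo.

```python
import numpy as np

def describe_image(image_topics, candidate_sentences, sentence_topics):
    """Pick the candidate sentence whose topic distribution is closest
    (by cosine similarity) to the topics inferred for the image."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = [cos(image_topics, s) for s in sentence_topics]
    return candidate_sentences[int(np.argmax(scores))], scores

if __name__ == "__main__":
    # illustrative topic axes: [politics, sports, weather]
    image_topics = np.array([0.1, 0.8, 0.1])
    sentences = ["The senator addressed the press.",
                 "The striker celebrates after scoring.",
                 "Heavy rain is expected tonight."]
    sentence_topics = [np.array([0.90, 0.05, 0.05]),
                       np.array([0.05, 0.90, 0.05]),
                       np.array([0.05, 0.05, 0.90])]
    best, scores = describe_image(image_topics, sentences, sentence_topics)
    print(best)
```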
- Title
- Hashing for Multimedia Similarity Modeling and Large-Scale Retrieval.
- Creator
- Li, Kai, Hua, Kien, Qi, GuoJun, Hu, Haiyan, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
- In recent years, the amount of multimedia data such as images, texts, and videos has been growing rapidly on the Internet. Motivated by such trends, this thesis is dedicated to exploiting hashing-based solutions to reveal multimedia data correlations and support intra-media and inter-media similarity search among huge volumes of multimedia data. We start by investigating a hashing-based solution for audio-visual similarity modeling and apply it to the audio-visual sound source localization problem. We show that synchronized signals in the audio and visual modalities demonstrate similar temporal changing patterns in certain feature spaces. We propose to use a permutation-based random hashing technique to capture the temporal order dynamics of audio and visual features by hashing them along the temporal axis into a common Hamming space. In this way, the audio-visual correlation problem is transformed into a similarity search problem in the Hamming space. Our hashing-based audio-visual similarity modeling has shown superior performance in the localization and segmentation of sounding objects in videos. The success of the permutation-based hashing method motivates us to generalize and formally define the supervised ranking-based hashing problem and study its application to large-scale image retrieval. Specifically, we propose an effective supervised learning procedure to learn optimized ranking-based hash functions that can be used for large-scale similarity search. Compared with the randomized version, the optimized ranking-based hash codes are much more compact and discriminative. Moreover, the approach can be easily extended to kernel space to discover more complex ranking structures that cannot be revealed in linear subspaces. Experiments on large image datasets demonstrate the effectiveness of the proposed method for image retrieval. We further study the ranking-based hashing method for the cross-media similarity search problem. Specifically, we propose two optimization methods to jointly learn two groups of linear subspaces, one for each media type, so that features' ranking orders in different linear subspaces maximally preserve the cross-media similarities. Additionally, we develop this ranking-based hashing method in the cross-media context into a flexible hashing framework with a more general solution. We have demonstrated through extensive experiments on several real-world datasets that the proposed cross-media hashing method can achieve superior cross-media retrieval performance against several state-of-the-art algorithms. Lastly, to make better use of the supervisory label information, as well as to further improve the efficiency and accuracy of supervised hashing, we propose a novel multimedia discrete hashing framework that optimizes an instance-wise loss objective, as compared to pairwise losses, using an efficient discrete optimization method. In addition, the proposed method decouples binary code learning and hash function learning into two separate stages, thus making the proposed method equally applicable to both single-media and cross-media search. Extensive experiments on both single-media and cross-media retrieval tasks demonstrate the effectiveness of the proposed method. (An illustrative code sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006759, ucf:51840
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006759
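The permutation/ranking-based hashing described above encodes only the ordering of feature values, so the resulting codes are insensitive to monotonic rescaling of the features. The sketch below implements a generic winner-take-all style ranking hash plus a Hamming-style similarity in numpy; it is neither the temporal-axis audio-visual hashing nor the supervised, optimized variant from the thesis, and parameters such as `n_codes` and `k` are arbitrary.

```python
import numpy as np

def rank_hash(X, n_codes=32, k=4, seed=0):
    """Winner-take-all style ranking hash: each code records which of k
    randomly chosen feature dimensions is largest, so the code depends only
    on the ordering of feature values, not on their magnitudes."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    subsets = rng.integers(0, d, size=(n_codes, k))   # random dimension subsets
    return np.argmax(X[:, subsets], axis=2)           # (n_samples, n_codes)

def hash_similarity(h1, h2):
    """Fraction of codes that agree (a Hamming-style similarity)."""
    return float(np.mean(h1 == h2))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.random((1, 64))
    y = x * 5.0 + 0.01 * rng.standard_normal((1, 64))  # rescaled plus tiny noise
    z = rng.random((1, 64))                            # unrelated sample
    H = rank_hash(np.vstack([x, y, z]))
    print("sim(x, rescaled x):", hash_similarity(H[0], H[1]))
    print("sim(x, unrelated): ", hash_similarity(H[0], H[2]))
```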
- Title
- Global Data Association for Multiple Pedestrian Tracking.
- Creator
- Dehghan, Afshin, Shah, Mubarak, Qi, GuoJun, Bagci, Ulas, Zhang, Shaojie, Zheng, Qipeng, University of Central Florida
- Abstract / Description
- Multi-object tracking is one of the fundamental problems in computer vision. Almost all multi-object tracking systems consist of two main components: detection and data association. In the detection step, object hypotheses are generated in each frame of a sequence. Later, detections that belong to the same target are linked together to form final trajectories. The latter step is called data association. Several challenges render this problem difficult, such as occlusion, background clutter, and pose changes. This dissertation aims to address these challenges by tackling the data association component of tracking, and contributes three novel methods for solving data association. Firstly, this dissertation presents a new framework for multi-target tracking that uses a novel data association technique based on the Generalized Maximum Clique Problem (GMCP) formulation. The majority of current methods, such as bipartite matching, incorporate only a limited temporal locality of the sequence into the data association problem. This makes these methods inherently prone to ID switches and to difficulties caused by long-term occlusions, cluttered backgrounds, and crowded scenes. On the other hand, our approach incorporates both motion and appearance in a global manner. Unlike limited-temporal-locality methods, which incorporate a few frames into the data association problem, this method incorporates the whole temporal span and solves the data association problem for one object at a time. The GMCP formulation is used to solve the optimization problem of our data association method. The proposed method is supported by superior results on several benchmark sequences. GMCP leads us to a more accurate approach to multi-object tracking by considering all the pairwise relationships in a batch of frames; however, it has some limitations. Firstly, it finds target trajectories one by one, missing joint optimization. Secondly, for optimization we use a greedy solver based on local neighborhood search, making our optimization prone to local minima. Finally, the GMCP tracker is slow, which is a burden when dealing with time-sensitive applications. In order to address these problems, we propose a new graph-theoretic problem, called the Generalized Maximum Multi Clique Problem (GMMCP). The GMMCP tracker has all the advantages of the GMCP tracker while addressing its limitations. A solution is presented to GMMCP where no simplification is assumed in the problem formulation or optimization. GMMCP is NP-hard, but it can be formulated through a Binary Integer Program where the solution to small- and medium-sized tracking problems can be found efficiently. To improve speed, Aggregated Dummy Nodes are used for modeling occlusions and missed detections. This also reduces the size of the input graph without using any heuristics. We show that using the speed-up method, our tracker lends itself to a real-time implementation, increasing its potential usefulness in many applications. In tests against several tracking datasets, we show that the proposed method outperforms competitive methods. Thus far we have assumed that the number of people does not exceed a few dozen. However, this is not always the case. In many scenarios, such as marathons, political rallies, or religious rites, the number of people in a frame may reach a few hundred or even a few thousand. Tracking in high-density crowd sequences is a challenging problem for several reasons. Human detection methods often fail to localize objects correctly in extremely crowded scenes, which limits the use of data-association-based tracking methods. Additionally, it is hard to extend existing multi-target trackers to targets in highly crowded scenes, because the large number of targets increases the computational complexity. Furthermore, the small apparent target size makes it challenging to extract features that discriminate targets from their surroundings. Finally, we present a tracker that addresses the above-mentioned problems. We formulate online crowd tracking as a Binary Quadratic Program, where both the detection and data association problems are solved together. Our formulation employs each target's individual information, in the form of appearance and motion, as well as contextual cues, in the form of neighborhood motion, spatial proximity, and grouping constraints. Due to the large number of targets, state-of-the-art commercial quadratic programming solvers fail to efficiently find the solution to the proposed optimization. In order to overcome the computational complexity of available solvers, we propose to use the most recent version of the Modified Frank-Wolfe algorithm with SWAP steps. The proposed tracker can track hundreds of targets efficiently and improves state-of-the-art results by a significant margin on high-density crowd sequences. (An illustrative code sketch follows this record.)
- Date Issued
- 2016
- Identifier
- CFE0006095, ucf:51201
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006095
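The abstract above contrasts its global GMCP/GMMCP data association with the common frame-by-frame bipartite matching baseline. The snippet below sketches that baseline only: existing track positions are matched to new detections with the Hungarian algorithm (scipy's `linear_sum_assignment`) under a gating distance. It is not the proposed global tracker, and the `max_dist` threshold and toy coordinates are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, max_dist=2.0):
    """Bipartite (frame-by-frame) data association baseline: match current
    track positions to new detections by minimizing total distance, and
    leave pairs farther than `max_dist` unmatched.

    tracks:     (n_tracks, 2) array of predicted track positions
    detections: (n_dets, 2) array of detection positions
    """
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    unmatched_dets = sorted(set(range(len(detections))) - {c for _, c in matches})
    return matches, unmatched_dets

if __name__ == "__main__":
    tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
    detections = np.array([[5.2, 4.9], [0.3, -0.1], [9.0, 9.0]])
    print(associate(tracks, detections))   # third detection starts a new track
```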
- Title
- Research on High-performance and Scalable Data Access in Parallel Big Data Computing.
- Creator
- Yin, Jiangling, Wang, Jun, Jin, Yier, Lin, Mingjie, Qi, GuoJun, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
- To facilitate big data processing, many dedicated data-intensive storage systems, such as the Google File System (GFS), the Hadoop Distributed File System (HDFS), and the Quantcast File System (QFS), have been developed. Currently, the Hadoop Distributed File System (HDFS) [20] is the state-of-the-art and most popular open-source distributed file system for big data processing. It is widely deployed as the bedrock for many big data processing systems/frameworks, such as the script-based Pig system, MPI-based parallel programs, graph processing systems, and Scala/Java-based Spark frameworks. These systems/applications employ parallel processes/executors to speed up data processing within scale-out clusters. Job or task schedulers in parallel big data applications such as mpiBLAST and ParaView can maximize the usage of computing resources such as memory and CPU by tracking resource consumption/availability for task assignment. However, since these schedulers do not take the distributed I/O resources and global data distribution into consideration, the data requests from parallel processes/executors in big data processing will unfortunately be served in an imbalanced fashion on the distributed storage servers. These imbalanced access patterns among storage nodes arise because (a) unlike conventional parallel file systems, which use striping policies to evenly distribute data among storage nodes, data-intensive file systems such as HDFS store each data unit, referred to as a chunk or block file, with several copies based on a relatively random policy, which can result in an uneven data distribution among storage nodes; and (b) based on the data retrieval policy in HDFS, the more data a storage node contains, the higher the probability that the storage node will be selected to serve the data. Therefore, on the nodes serving multiple chunk files, the data requests from different processes/executors will compete for shared resources such as the hard disk head and network bandwidth. Because of this, the makespan of the entire program could be significantly prolonged and the overall I/O performance will degrade. The first part of my dissertation seeks to address aspects of these problems by creating an I/O middleware system and designing matching-based algorithms to optimize data access in parallel big data processing. To address the problem of remote data movement, we develop an I/O middleware system, called SLAM, which allows MPI-based analysis and visualization programs to benefit from locality reads, i.e., each MPI process can access its required data from a local or nearby storage node. This can greatly improve execution performance by reducing the amount of data movement over the network. Furthermore, to address the problem of imbalanced data access, we propose a method called Opass, which models the data read requests issued by parallel applications to cluster nodes as a graph data structure where edge weights encode the demands of load capacity. We then employ matching-based algorithms to map processes to data to achieve data access in a balanced fashion. The final part of my dissertation focuses on optimizing sub-dataset analyses in parallel big data processing. Our proposed methods can benefit different analysis applications with various computational requirements, and experiments on different cluster testbeds show their applicability and scalability. (An illustrative code sketch follows this record.)
- Date Issued
- 2015
- Identifier
- CFE0006021, ucf:51008
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006021
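Opass, as described above, balances read requests across replicated chunk locations through a matching formulation. The snippet below is a far simpler greedy stand-in that merely sends each requested chunk to its least-loaded replica holder, to illustrate the balanced-access goal; the function name, the replica map, and the request list are all invented for the example.

```python
def balanced_placement(requests, replicas):
    """Assign each requested chunk to the replica node with the lowest
    current load (a greedy stand-in for matching-based balancing).

    requests: list of chunk ids to read
    replicas: dict mapping chunk id -> list of storage nodes holding a copy
    Returns the chunk-to-node assignment and the resulting per-node load.
    """
    load = {}
    assignment = {}
    for chunk in requests:
        node = min(replicas[chunk], key=lambda n: load.get(n, 0))
        assignment[chunk] = node
        load[node] = load.get(node, 0) + 1
    return assignment, load

if __name__ == "__main__":
    replicas = {"c1": ["n1", "n2"], "c2": ["n1", "n3"],
                "c3": ["n2", "n3"], "c4": ["n1", "n2"]}
    assignment, load = balanced_placement(["c1", "c2", "c3", "c4"], replicas)
    print("assignment:", assignment)
    print("per-node load:", load)
```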