- Title
- DEBRIS TRACKING IN A SEMISTABLE BACKGROUND.
- Creator
- Vanumamalai, Karthik Kalathi, Kasparis, Takis, University of Central Florida
- Abstract / Description
- Object tracking plays a pivotal role in many computer vision applications such as video surveillance, human gesture recognition, and object-based video compression such as MPEG-4. Automatic detection of a moving object and tracking of its motion have long been important topics in the computer vision and robotics fields. This thesis deals with the problem of detecting the presence of debris or any other unexpected objects in footage obtained during spacecraft launches, which poses a challenge because of the non-stationary background. When the background is stationary, moving objects can be detected by frame differencing; the background must therefore be stabilized before any moving object in the scene can be tracked. Two problems are considered, and in both the footage comes from a Space Shuttle launch, with the objective of tracking any debris falling from the Shuttle. The proposed method registers two consecutive frames using FFT-based image registration, in which the transformation parameters (translation, rotation) are calculated automatically. This information is then passed to a Kalman filtering stage, which produces a mask image used to find high-intensity areas of potential interest.
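The FFT-based registration step described above is typically implemented via phase correlation. A minimal sketch, not the thesis's actual code, handling translation only (the function name is illustrative):

```python
import numpy as np

def phase_correlation(ref, cur):
    """Estimate the (row, col) translation taking `ref` to `cur`,
    via FFT-based phase correlation of two equally sized frames."""
    cross = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint back to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

For pure translation the correlation surface has a single sharp peak; recovering the rotation component mentioned in the abstract requires an additional log-polar resampling step not shown here.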
- Date Issued
- 2005
- Identifier
- CFE0000886, ucf:46628
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000886
- Title
- Human Detection, Tracking and Segmentation in Surveillance Video.
- Creator
- Shu, Guang, Shah, Mubarak, Boloni, Ladislau, Wang, Jun, Lin, Mingjie, Sugaya, Kiminobu, University of Central Florida
- Abstract / Description
- This dissertation addresses the problem of human detection and tracking in surveillance videos. Even though this is a well-explored topic, many challenges remain when confronted with data from real-world situations. These challenges include appearance variation, illumination changes, camera motion, cluttered scenes, and occlusion. In this dissertation, several novel methods are proposed for improving on the current state of human detection and tracking by learning scene-specific information in video feeds.

Firstly, we propose a novel method for human detection that employs unsupervised learning and superpixel segmentation. The performance of generic human detectors is usually degraded in unconstrained video environments due to varying lighting conditions, backgrounds, and camera viewpoints. To handle this problem, we employ an unsupervised learning framework that improves the detection performance of a generic detector when it is applied to a particular video. In our approach, a generic DPM human detector is employed to collect initial detection examples. These examples are segmented into superpixels and then represented using a Bag-of-Words (BoW) framework. The superpixel-based BoW feature encodes useful color features of the scene, which provides additional information. Finally, a new scene-specific classifier is trained using the BoW features extracted from the new examples. Compared to previous work, our method learns scene-specific information through superpixel-based features, so it avoids many of the false detections typically produced by a generic detector. We demonstrate a significant improvement in the performance of the state-of-the-art detector.

Given robust human detection, we propose a robust multiple-human tracking framework using a part-based model. Human detection using part models has become quite popular, yet its extension to tracking has not been fully explored. Single-camera multiple-person tracking is often hindered by difficulties such as occlusion and changes in appearance. We address such problems by developing an online-learning tracking-by-detection method. Our approach learns part-based, person-specific Support Vector Machine (SVM) classifiers which capture the articulations of moving human bodies against dynamically changing backgrounds. With the part-based model, our approach is able to handle partial occlusions in both the detection and the tracking stages. In the detection stage, we select the subset of parts which maximizes the probability of detection, leading to a significant improvement in detection performance in cluttered scenes. In the tracking stage, we dynamically handle occlusions by distributing the score of the learned person classifier among its corresponding parts, which allows us to detect and predict partial occlusions and prevents the performance of the classifiers from being degraded. Extensive experiments on several challenging sequences demonstrate state-of-the-art performance in multiple-people tracking.

Next, in order to obtain precise boundaries of humans, we propose a novel method for multiple-human segmentation in videos by incorporating human detection and part-based detection potentials into a multi-frame optimization framework. In the first stage, after obtaining the superpixel segmentation for each detection window, we separate the superpixels corresponding to a human from the background by minimizing an energy function using a Conditional Random Field (CRF). We use the part detection potentials from the DPM detector, which provide useful information about human shape. In the second stage, the spatio-temporal constraints of the video are leveraged to build a tracklet-based Gaussian Mixture Model for each person, and the boundaries are smoothed by multi-frame graph optimization. Compared to previous work, our method can automatically segment multiple people in videos with accurate boundaries, and it is robust to camera motion. Experimental results show that our method achieves better segmentation accuracy than previous methods on several challenging video sequences.

Most work in computer vision deals with point solutions: a specific algorithm for a specific problem. Putting different algorithms together into one integrated real-world system, however, is a big challenge. Finally, we introduce an efficient tracking system, NONA, for high-definition surveillance video. We implement the system using a multi-threaded architecture (Intel Threading Building Blocks (TBB)), which executes video ingestion, tracking, and video output in parallel. To improve tracking accuracy without sacrificing efficiency, we employ several useful techniques. Adaptive Template Scaling handles the scale change of objects moving towards a camera, while Incremental Searching and Local Frame Differencing address challenging issues such as scale change, occlusion, and cluttered backgrounds. We tested our tracking system on a high-definition video dataset and achieved acceptable tracking accuracy while maintaining real-time performance.
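The superpixel Bag-of-Words encoding in the first method amounts to nearest-codeword quantization followed by a normalized histogram. A hedged sketch, assuming a codebook already learned (e.g. by k-means over superpixel color features); the function name is illustrative, not from the dissertation:

```python
import numpy as np

def bow_encode(features, codebook):
    """Encode per-superpixel feature vectors (n x d) as a normalized
    Bag-of-Words histogram over a visual codebook (k x d)."""
    # Squared Euclidean distance from every feature to every codeword.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                  # nearest-word assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)         # L1-normalize
```

The resulting fixed-length histogram is what a scene-specific classifier (an SVM, say) would be trained on, regardless of how many superpixels each detection window produced.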
- Date Issued
- 2014
- Identifier
- CFE0005551, ucf:50278
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005551
- Title
- MULTI-VIEW APPROACHES TO TRACKING, 3D RECONSTRUCTION AND OBJECT CLASS DETECTION.
- Creator
- Khan, Saad, Shah, Mubarak, University of Central Florida
- Abstract / Description
- Multi-camera systems are becoming ubiquitous and have found application in a variety of domains including surveillance, immersive visualization, sports entertainment, and movie special effects, amongst others. From a computer vision perspective, the challenging task is how to most efficiently fuse information from multiple views in the absence of detailed calibration information and with a minimum of human intervention. This thesis presents a new approach to fuse foreground likelihood information from multiple views onto a reference view without explicit processing in 3D space, thereby circumventing the need for complete calibration. Our approach uses a homographic occupancy constraint (HOC), which states that if a foreground pixel has a piercing point that is occupied by a foreground object, then the pixel warps to foreground regions in every view under the homographies induced by the reference plane, in effect using the cameras as occupancy detectors. Using the HOC we are able to resolve occlusions and robustly determine ground-plane localizations of the people in the scene. To find tracks we obtain ground localizations over a window of frames and stack them, creating a space-time volume. Regions belonging to the same person form contiguous spatio-temporal tracks that are clustered using a graph-cuts segmentation approach. Second, we demonstrate that the HOC is equivalent to performing visual hull intersection in the image plane, resulting in a cross-sectional slice of the object. The process is extended to multiple planes parallel to the reference plane in the framework of plane-to-plane homologies. Slices from multiple planes are accumulated and the 3D structure of the object is segmented out. Unlike other visual-hull-based approaches that use 3D constructs like visual cones, voxels, or polygonal meshes requiring calibrated views, ours is purely image-based and uses only 2D constructs, i.e., planar homographies between views. This feature also makes it conducive to graphics hardware acceleration: the current GPU implementation of our approach is capable of fusing 60 views (480x720 pixels) at a rate of 50 slices/second. We then present an extension of this approach to reconstructing non-rigid articulated objects from monocular video sequences. The basic premise is that, due to the motion of the object, scene occupancies are blurred together with non-occupancies in a manner analogous to motion-blurred imagery. Using our HOC and a novel construct, the temporal occupancy point (TOP), we are able to fuse multiple views of non-rigid objects obtained from a monocular video sequence. The result is a set of blurred scene occupancy images in the corresponding views, where the value at each pixel corresponds to the fraction of the total time duration for which the pixel observed an occupied scene location. We then use a motion de-blurring approach to de-blur the occupancy images and obtain the 3D structure of the non-rigid object. In the final part of this thesis, we present an object class detection method employing 3D models of rigid objects constructed using the above 3D reconstruction approach. Instead of using a complicated mechanism to relate multiple 2D training views, our approach establishes spatial connections between these views by mapping them directly to the surface of a 3D model. To generalize the model for object class detection, features from supplemental views (obtained from Google Image search) are also considered. Given a 2D test image, correspondences between the 3D feature model and the test view are identified by matching the detected features. Based on the 3D locations of the corresponding features, several hypotheses of viewing planes can be made; the one with the highest confidence is then used to detect the object via feature location matching. The performance of the proposed method has been evaluated on the PASCAL VOC challenge dataset, and promising results are demonstrated.
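The planar-homography machinery at the heart of the HOC is compact enough to sketch. The fragment below is illustrative, not the thesis implementation: it warps points with a 3x3 homography and fuses pre-warped per-view foreground likelihood maps by a simple pixel-wise product (one plausible consensus rule for "every view agrees this ground location is occupied"):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 planar homography to an (n, 2) array of (x, y) points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to inhomogeneous

def fuse_foreground(likelihoods):
    """Fuse per-view foreground likelihood maps (already warped into the
    reference view) by pixel-wise product: a consensus of occupancy detectors."""
    fused = np.ones_like(likelihoods[0])
    for lik in likelihoods:
        fused *= lik
    return fused
```

Because only 2D homographies are involved, each warp is a single matrix product per point, which is what makes the GPU rates quoted above plausible.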
- Date Issued
- 2008
- Identifier
- CFE0002073, ucf:47593
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002073
- Title
- Human Action Detection, Tracking and Segmentation in Videos.
- Creator
- Tian, Yicong, Shah, Mubarak, Bagci, Ulas, Liu, Fei, Walker, John, University of Central Florida
- Abstract / Description
- This dissertation addresses the problems of human action detection, human tracking, and segmentation in videos. These are fundamental tasks in computer vision and are extremely challenging to solve in realistic videos. We first propose a novel approach for action detection by exploring the generalization of deformable part models from 2D images to 3D spatiotemporal volumes. By focusing on the most distinctive parts of each action, our models adapt to intra-class variation and show robustness to clutter. This approach deals with detecting actions performed by a single person. When there are multiple humans in the scene, humans need to be segmented and tracked from frame to frame before action recognition can be performed. Next, we propose a novel approach for multiple object tracking (MOT) that formulates detection and data association in one framework. Our method allows us to overcome the confinement of data-association-based MOT approaches, whose performance depends on the object detection results provided at the input level. We show that automatically detecting and tracking targets in a single framework helps resolve the ambiguities caused by frequent occlusion and heavy articulation of targets. In this tracker, targets are represented by bounding boxes, which is a coarse representation; pixel-wise object segmentation, by contrast, provides fine-level information, which is desirable for later tasks. Finally, we propose a tracker that simultaneously solves three main problems: detection, data association, and segmentation. This is especially important because the outputs of these three problems are highly correlated, and the solution of one can greatly help improve the others. The proposed approach achieves more accurate segmentation results and also helps better resolve typical difficulties in multiple-target tracking, such as occlusion, ID switches, and track drifting.
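The data association step discussed above is, in its simplest conventional form, a matching of track boxes to detection boxes by overlap. A minimal greedy IoU-matching sketch under that simplification (the dissertation's joint detection-association formulation is considerably richer; names here are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def greedy_associate(tracks, detections, min_iou=0.3):
    """Greedily match track boxes to detection boxes by descending IoU.
    Returns a list of (track_idx, det_idx) pairs."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < min_iou:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

The failure modes of exactly this kind of greedy, detection-first matching (occlusion, ID switches) are what motivate solving detection, association, and segmentation jointly.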
- Date Issued
- 2018
- Identifier
- CFE0007378, ucf:52069
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007378
- Title
- CHANGES IN RUNNING AND MULTIPLE OBJECT TRACKING PERFORMANCE DURING A 90-MINUTE INTERMITTENT SOCCER PERFORMANCE TEST (iSPT). A PILOT STUDY.
- Creator
- Girts, Ryan, Wells, Adam, Stout, Jeffrey, Fukuda, David, Hoffman, Jay, University of Central Florida
- Abstract / Description
- Multiple object tracking (MOT) is a cognitive process that involves the active processing of dynamic visual information. In athletes, MOT speed is critical for maintaining spatial awareness of teammates, opponents, and the ball while moving at high velocities during a match. Understanding how MOT speed changes throughout the course of a competitive game may enhance strategies for maintaining optimal player performance. The objective of this study was to examine changes in MOT speed and running performance during a 90-minute intermittent soccer performance test (iSPT). A secondary purpose was to examine the relationship between aerobic capacity and changes in MOT speed. Seven competitive female soccer players (age: 20.4 ± 1.8 y, height: 166.7 ± 3.2 cm, weight: 62.4 ± 4.0 kg, VO2max: 45.8 ± 4.6 ml/kg/min) completed the iSPT on a Curve™ non-motorized treadmill (cNMT). The iSPT was divided into two 45-minute halves with a 15-minute halftime (HT) interval and consisted of six individualized velocity zones. Velocity zones were consistent with previous time-motion analyses of competitive soccer matches and based upon individual peak sprint speeds (PSS) as follows: standing (0% PSS, 17.8% of iSPT), walking (20% PSS, 36.4% of iSPT), jogging (35% PSS, 24.0% of iSPT), running (50% PSS, 11.6% of iSPT), fast running (60% PSS, 3.6% of iSPT), and sprinting (80% PSS, 6.7% of iSPT). The stand, walk, jog, and run zones were combined to create a low-speed zone (LS); the fast-run and sprint zones were combined to create a high-speed zone (HS). MOT speed was assessed at baseline (0 min) and three times during each half of the iSPT. Dependent t-tests and Pearson correlation coefficients were used to analyze the data. Across 15-minute time blocks, significant decreases in distance covered and average speed were noted for jogging, sprinting, low-speed running, high-speed running, and total distance (p's < 0.05). Players covered significantly less total distance during the second half compared to the first (p = 0.025). Additionally, significant decreases in distance covered and average speed were observed during the second half for the sprint and HS zones (p's ≤ 0.008). No significant main effect was noted for MOT speed across 15-minute time blocks, though a trend towards a decrease in MOT speed between halves was observed (p = 0.056). A significant correlation was observed between the change in MOT speed and VO2max (r = 0.888, p = 0.007). The fatigue associated with 90 minutes of soccer-specific running negatively influenced running performance during the second half. However, greater aerobic capacity appears to be associated with an attenuation of cognitive decline during 90 minutes of soccer-specific running. The results of this study indicate the importance of aerobic capacity for maintaining spatial awareness during a match.
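The headline statistic above (r = 0.888 between the change in MOT speed and VO2max) is a Pearson product-moment correlation, which is straightforward to compute; the sketch below is generic, and any numbers plugged into it are hypothetical rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()       # center both samples
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

With only seven participants, as here, a single r value should be read cautiously, which is why the abstract frames the study as a pilot.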
- Date Issued
- 2018
- Identifier
- CFE0007183, ucf:52290
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007183
- Title
- Scene Understanding for Real Time Processing of Queries over Big Data Streaming Video.
- Creator
- Aved, Alexander, Hua, Kien, Foroosh, Hassan, Zou, Changchun, Ni, Liqiang, University of Central Florida
- Abstract / Description
- With heightened security concerns across the globe and the increasing need to monitor, preserve, and protect infrastructure and public spaces to ensure proper operation, quality assurance, and safety, numerous video cameras have been deployed. Accordingly, they also need to be monitored effectively and efficiently. However, relying on human operators to constantly monitor all the video streams is not scalable or cost-effective. Humans can be subjective, become fatigued, even exhibit bias, and it is difficult to maintain high levels of vigilance when capturing, searching, and recognizing events that occur infrequently or in isolation. These limitations are addressed in the Live Video Database Management System (LVDBMS), a framework for managing and processing live motion imagery data. It enables rapid development of video surveillance software, much as traditional database applications are developed today. Video stream processing applications and ad hoc queries developed in this way are able to "reuse" advanced image processing techniques that have already been developed, resulting in lower software development and maintenance costs. Furthermore, the LVDBMS can be intensively tested to ensure consistent quality across all associated video database applications. Its intrinsic privacy framework facilitates a formalized approach to the specification and enforcement of verifiable privacy policies, an important step towards enabling a general privacy certification for video surveillance systems by leveraging a standardized privacy specification language. With the potential to impact many important fields ranging from security and assembly-line monitoring to wildlife studies and the environment, the broader impact of this work is clear. The privacy framework protects the general public from abusive use of surveillance technology, and success in addressing the "trust" issue will enable many new surveillance-related applications. Although this research focuses on video surveillance, the proposed framework has the potential to support many video-based analytical applications.
- Date Issued
- 2013
- Identifier
- CFE0004648, ucf:49900
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004648
- Title
- SCENE MONITORING WITH A FOREST OF COOPERATIVE SENSORS.
- Creator
- Javed, Omar, Shah, Mubarak, University of Central Florida
- Abstract / Description
- In this dissertation, we present vision-based scene interpretation methods for the real-time monitoring of people and vehicles within a busy environment, using a forest of cooperative electro-optical (EO) sensors. We have developed novel video understanding algorithms with learning capability to detect and categorize people and vehicles, track them within a camera, and hand this information off across multiple networked cameras for multi-camera tracking. The ability to learn obviates the need for extensive manual intervention, site models, and camera calibration, and provides adaptability to changing environmental conditions. For object detection and categorization in the video stream, a two-step detection procedure is used. First, regions of interest are determined using a novel hierarchical background subtraction algorithm that uses color and gradient information for interest-region detection. Second, objects are located and classified within these regions using a weakly supervised learning mechanism based on co-training that employs motion and appearance features. The main contribution of this approach is that it is an online procedure in which separate views (features) of the data are used for co-training, while the combined view (all features) is used to make classification decisions in a single boosted framework. The advantage of this approach is that it requires only a few initial training samples and can automatically adjust its parameters online to improve detection and classification performance. Once objects are detected and classified, they are tracked in individual cameras. Single-camera tracking is performed using a voting-based approach that utilizes color and shape cues to establish correspondence within each camera; the tracker is capable of handling multiple occluded objects. Next, the objects are tracked across a forest of cameras with non-overlapping views. This is a hard problem for two reasons. First, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Second, the appearance of an object in one camera view might be very different from its appearance in another camera view due to differences in illumination, pose, and camera properties. To deal with the first problem, the system learns the inter-camera relationships to constrain track correspondences. These relationships are learned in the form of a multivariate probability density of space-time variables (object entry and exit locations, velocities, and inter-camera transition times) using Parzen windows. To handle the appearance change of an object as it moves from one camera to another, we show that all color transfer functions from a given camera to another camera lie in a low-dimensional subspace. The tracking algorithm learns this subspace using probabilistic principal component analysis and uses it for appearance matching. The proposed system learns the camera topology and the subspace of inter-camera color transfer functions during a training phase. Once training is complete, correspondences are assigned in a maximum a posteriori (MAP) estimation framework using both location and appearance cues. Extensive experiments and deployment of this system in realistic scenarios have demonstrated the robustness of the proposed methods. The proposed system was able to detect and classify targets and seamlessly track them across multiple cameras. It also generated a summary, in terms of key frames and a textual description of trajectories, for a monitoring officer's final analysis and response decision. This level of interpretation was the goal of our research effort, and we believe that it is a significant step forward in the development of intelligent systems that can deal with the complexities of real-world scenarios.
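The Parzen-window density estimate used for the space-time variables can be illustrated in one dimension (e.g. inter-camera transition times). This is a hedged sketch with a Gaussian kernel, not the system's multivariate implementation:

```python
import numpy as np

def parzen_density(samples, query, bandwidth=1.0):
    """Parzen-window (Gaussian-kernel) density estimate at each query
    point, from 1-D training samples such as transition times."""
    samples = np.asarray(samples, float)[None, :]     # shape (1, n)
    query = np.asarray(query, float)[:, None]         # shape (m, 1)
    z = (query - samples) / bandwidth
    kernel = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / bandwidth            # average of kernels
```

At match time, a candidate correspondence whose observed transition time falls in a high-density region of this estimate is favored, exactly the role the learned space-time density plays in the MAP assignment described above.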
- Date Issued
- 2005
- Identifier
- CFE0000497, ucf:46362
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000497
- Title
- Global Data Association for Multiple Pedestrian Tracking.
- Creator
- Dehghan, Afshin, Shah, Mubarak, Qi, GuoJun, Bagci, Ulas, Zhang, Shaojie, Zheng, Qipeng, University of Central Florida
- Abstract / Description
- Multi-object tracking is one of the fundamental problems in computer vision. Almost all multi-object tracking systems consist of two main components: detection and data association. In the detection step, object hypotheses are generated in each frame of a sequence; detections that belong to the same target are then linked together to form the final trajectories. The latter step is called data association. Several challenges render this problem difficult, such as occlusion, background clutter, and pose changes. This dissertation aims to address these challenges by tackling the data association component of tracking, and it contributes three novel methods for solving data association.

Firstly, this dissertation presents a new framework for multi-target tracking that uses a novel data association technique based on the Generalized Maximum Clique Problem (GMCP) formulation. The majority of current methods, such as bipartite matching, incorporate only a limited temporal locality of the sequence into the data association problem, which makes them inherently prone to ID switches and to difficulties caused by long-term occlusions, cluttered backgrounds, and crowded scenes. Our approach, in contrast, incorporates both motion and appearance in a global manner: unlike limited-temporal-locality methods, which incorporate only a few frames into the data association problem, this method incorporates the whole temporal span and solves the data association problem for one object at a time. A Generalized Minimum Clique Graph (GMCP) formulation is used to solve the optimization problem of our data association method, and the proposed method is supported by superior results on several benchmark sequences.

GMCP leads to a more accurate approach to multi-object tracking by considering all the pairwise relationships in a batch of frames; however, it has some limitations. Firstly, it finds target trajectories one by one, missing joint optimization. Secondly, the optimization uses a greedy solver based on local neighborhood search, making it prone to local minima. Finally, the GMCP tracker is slow, which is a burden when dealing with time-sensitive applications. To address these problems, we propose a new graph-theoretic problem, called the Generalized Maximum Multi Clique Problem (GMMCP). The GMMCP tracker has all the advantages of the GMCP tracker while addressing its limitations. A solution to GMMCP is presented in which no simplification is assumed in the problem formulation or optimization. GMMCP is NP-hard, but it can be formulated as a Binary Integer Program whose solution for small- and medium-sized tracking problems can be found efficiently. To improve speed, Aggregated Dummy Nodes are used to model occlusions and missed detections, which also reduces the size of the input graph without resorting to heuristics. We show that, using this speed-up, our tracker lends itself to a real-time implementation, increasing its potential usefulness in many applications. In tests against several tracking datasets, we show that the proposed method outperforms competitive methods.

Thus far we have assumed that the number of people does not exceed a few dozen. However, this is not always the case: in many scenarios, such as marathons, political rallies, or religious rites, the number of people in a frame may reach a few hundred or even a few thousand. Tracking in high-density crowd sequences is challenging for several reasons. Human detection methods often fail to localize objects correctly in extremely crowded scenes, which limits the use of data-association-based tracking methods. Additionally, it is hard to extend existing multi-target trackers to highly crowded scenes, because the large number of targets increases the computational complexity. Furthermore, the small apparent target size makes it challenging to extract features that discriminate targets from their surroundings. Finally, we present a tracker that addresses these problems. We formulate online crowd tracking as a Binary Quadratic Program in which the detection and data association problems are solved together. Our formulation employs each target's individual information, in the form of appearance and motion, as well as contextual cues in the form of neighborhood motion, spatial proximity, and grouping constraints. Due to the large number of targets, state-of-the-art commercial quadratic programming solvers fail to find the solution to the proposed optimization efficiently. To overcome this computational complexity, we propose to use a recent variant of the Frank-Wolfe algorithm with SWAP steps. The proposed tracker can track hundreds of targets efficiently and improves state-of-the-art results by a significant margin on high-density crowd sequences.
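The Frank-Wolfe solver mentioned at the end can be illustrated on a toy quadratic program over the probability simplex. This sketch uses the classic diminishing step size and omits both the binary relaxation details and the SWAP steps the authors rely on; it is a generic demonstration of the method, not their solver:

```python
import numpy as np

def frank_wolfe_simplex(Q, c, iters=200):
    """Minimize 0.5*x'Qx + c'x over the probability simplex using the
    classic Frank-Wolfe method: at each step, minimize the linearized
    objective over the feasible set (a vertex) and move toward it."""
    n = len(c)
    x = np.full(n, 1.0 / n)                   # start at the simplex center
    for k in range(iters):
        grad = Q @ x + c                      # gradient of the quadratic
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0              # linear minimizer is a vertex
        x += 2.0 / (k + 2) * (s - x)          # standard diminishing step
    return x
```

Each iteration costs only a gradient evaluation and an argmin, which is why Frank-Wolfe-style methods scale to the hundreds of targets described above where off-the-shelf QP solvers stall.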
- Date Issued
- 2016
- Identifier
- CFE0006095, ucf:51201
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006095
- Title
- On RADAR DECEPTION, AS MOTIVATION FOR CONTROL OF CONSTRAINED SYSTEMS.
- Creator
-
Hajieghrary, Hadi, Jayasuriya, Suhada, Xu, Yunjun, Das, Tuhin, University of Central Florida
- Abstract / Description
-
This thesis studies the control algorithms used by a team of ECAVs (Electronic Combat Air Vehicles) to deceive a network of radars into detecting a phantom track. Each ECAV has the electronic capability to intercept a radar wave, introduce an appropriate time delay before transmitting it back, and thereby deceive the radar into seeing a spurious target beyond the ECAV's actual position. To reduce errors, increase reliability, maintain complete coverage in various atmospheric conditions, and counter belligerent intruders attempting to elude the sentinel and enter the area, a network of radars is usually deployed to guard a region. However, a team of cooperating ECAVs can exploit this arrangement and plan their trajectories so that all the radars in the network vouch for a single, coherent spurious track of a phantom. Since each station in the network confirms the others, the phantom track is considered valid. This problem serves as a motivating example of trajectory planning for a multi-agent system under highly constrained operating conditions. The control command given to each agent must be viable within the agent's limited capabilities, while also driving the cumulative action that keeps the formation. In this thesis, three different approaches to devising a trajectory for each agent are studied, and the difficulties in deploying each one are addressed. In the first, a command center has complete information about the state of the agents and, at every step, decides on the control each agent should apply. This method is effective and robust, but requires reliable communication. In the second method, each agent decides its own control, and the members of the group communicate only to agree on the range of control they would like to apply to the phantom.
Although in this method much less data needs to be communicated between the agents, it is very sensitive to disturbances and miscalculations, and the formation can easily fall apart or reach a state with no feasible solution. In the third method, a differential geometric approach to the problem is studied. This method has a very strong theoretical backbone and reduces the required communication to a single binary signal. However, the less data the agents are provided about the system, the more sensitive and fragile the system becomes when faced with imperfections. In this thesis, an object-oriented program is developed in MATLAB to simulate all three control strategies in a scalable fashion. Object-oriented programming is a naturally suitable method for simulating a multi-agent system: it gives the flexibility to make the code closer to a real scenario by defining each agent as a separate and independent entity. The main objective is to understand the nature of constrained dynamic problems and to examine various solutions in different situations. Using the flexibility of this code, we can simulate several scenarios, incorporate various conditions on the system, and take a close look at each agent to observe its behavior. In this way we gain good insight into the system, which can be used in designing agents for specific missions.
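The range-delay deception the abstract describes follows from simple radar geometry: an added round-trip delay of t seconds makes the radar infer an extra one-way range of c·t/2 along its line of sight to the ECAV. The sketch below illustrates only this kinematic relation; the function name, coordinates, and delay value are illustrative, not taken from the thesis.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def phantom_position(radar, ecav, delay):
    """Range-delay deception: re-transmitting the radar pulse after `delay`
    seconds adds an apparent one-way range of C*delay/2 along the same
    line of sight, placing the phantom beyond the ECAV."""
    radar = np.asarray(radar, dtype=float)
    ecav = np.asarray(ecav, dtype=float)
    los = ecav - radar                      # line-of-sight vector
    true_range = np.linalg.norm(los)
    fake_range = true_range + C * delay / 2.0
    return radar + los / true_range * fake_range

# An ECAV 30 km from the radar adds a 100-microsecond delay:
p = phantom_position([0.0, 0.0], [30_000.0, 0.0], 100e-6)
print(p)   # phantom appears roughly 15 km beyond the ECAV
```

Because the phantom is constrained to the radar-ECAV ray, each ECAV in the team must fly so that its own ray intersects the shared phantom trajectory at every instant, which is what couples the agents' trajectory planning.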
- Date Issued
- 2013
- Identifier
- CFE0004857, ucf:49683
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004857