Current Search: tracking
-
-
Title
-
ULTRAWIDEBAND INDOOR LOCATION AND TRACKING SYSTEM.
-
Creator
-
Chen, Qing, Turgut, Damla, University of Central Florida
-
Abstract / Description
-
The objective of this thesis is to demonstrate an indoor intruder location and tracking system with UltraWideBand (UWB) technology and use data compression and Constant False Alarm Rate (CFAR) techniques to improve the performance of the location system. Reliable and accurate indoor positioning requires a local replacement for GPS systems since satellite signals are not available indoors. UWB systems are particularly suitable for indoor location systems due to their inherent capabilities such as low power, multi-path rejection, and wide bandwidth. In our application, we are using UWB radios as a radar system for tracking targets in indoor locations. We also use the Discrete Cosine Transform (DCT) to compress the UWB scan waveforms from the receivers to the main computer to conserve bandwidth. At the main computer, we use the Inverse DCT to recover the original signal. The UWB intruder detection system has an indoor tracking accuracy of four inches. There are many military and commercial applications, such as tracking firefighters and locating people trapped in earthquake zones. This thesis demonstrates the capability of a UWB radar system to locate and track an intruder to an accuracy of four inches in an indoor cluttered environment.
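The DCT compression step described above can be illustrated with a short sketch. This is not the thesis code: the waveform, its length, and the number of retained coefficients are made-up values, and only the largest-magnitude coefficients are kept before the inverse transform recovers an approximation at the main computer.

import numpy as np
from scipy.fft import dct, idct

def compress_waveform(x, keep=64):
    # Keep only the `keep` largest-magnitude DCT coefficients of a UWB scan.
    coeffs = dct(x, norm="ortho")
    small = np.argsort(np.abs(coeffs))[:-keep]   # indices of the discarded coefficients
    coeffs[small] = 0.0                          # zero them out before transmission
    return coeffs

def recover_waveform(coeffs):
    # Reconstruct the scan at the main computer with the inverse DCT.
    return idct(coeffs, norm="ortho")

# Example with a synthetic 1024-sample scan.
scan = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.05 * np.random.randn(1024)
approx = recover_waveform(compress_waveform(scan, keep=64))
print("max reconstruction error:", np.max(np.abs(scan - approx)))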
-
Date Issued
-
2006
-
Identifier
-
CFE0001233, ucf:46924
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001233
-
-
Title
-
TAMING CROWDED VISUAL SCENES.
-
Creator
-
Ali, Saad, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. For the global-level analysis, a framework based on Lagrangian Particle Dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using a numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy Green Deformation tensor that quantifies the amount by which the neighboring particles diverge over the length of the integration. The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the Attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS. The attracting and repelling LCS are the time dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows. The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. Next, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, Static Floor Field (SFF), Dynamic Floor Field (DFF), and Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location.
The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exhibited by the barriers within the scene, such as walls and no-entry areas. By combining influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people. A comparative analysis with respect to an appearance-based mean shift tracker is also conducted by generating the ground truth. The result of this analysis demonstrates the benefit of using floor fields in crowded scenes. The occurrence of occlusion is very frequent in crowded scenes due to a high number of interacting objects. To overcome this challenge, we propose an algorithm that has been developed to augment a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits the contextual knowledge, which is divided into two categories consisting of motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of the nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. It is important to note that the LCE is used for measuring divergence between a pair of particles while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed. Finally, the re-acquisition decision is made using both the MC and the AC.
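The forward FTLE computation outlined above can be sketched under simplifying assumptions: the optical-flow advection is assumed to have already produced a flow map holding each grid particle's final position, and the field is computed with plain NumPy rather than the authors' implementation.

import numpy as np

def ftle_field(flow_map, T):
    # flow_map: (H, W, 2) final (x, y) positions of particles advected for time T.
    dphix_dy, dphix_dx = np.gradient(flow_map[..., 0])   # gradients along rows (y) and columns (x)
    dphiy_dy, dphiy_dx = np.gradient(flow_map[..., 1])
    ftle = np.zeros(flow_map.shape[:2])
    for i in range(flow_map.shape[0]):
        for j in range(flow_map.shape[1]):
            J = np.array([[dphix_dx[i, j], dphix_dy[i, j]],
                          [dphiy_dx[i, j], dphiy_dy[i, j]]])   # flow-map gradient
            C = J.T @ J                                        # Cauchy-Green deformation tensor
            lam_max = np.linalg.eigvalsh(C)[-1]                # largest eigenvalue
            ftle[i, j] = np.log(np.sqrt(max(lam_max, 1e-12))) / abs(T)
    return ftle

# Advecting the particles backward in time and repeating the same computation would
# give the backward FTLE field that reveals the repelling LCS.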
-
Date Issued
-
2008
-
Identifier
-
CFE0002135, ucf:47507
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002135
-
-
Title
-
DEBRIS TRACKING IN A SEMISTABLE BACKGROUND.
-
Creator
-
Vanumamalai, Karthik Kalathi, Kasparis, Takis, University of Central Florida
-
Abstract / Description
-
Object tracking plays a pivotal role in many computer vision applications such as video surveillance, human gesture recognition, and object-based video compression such as MPEG-4. Automatic detection of any moving object and tracking of its motion is an important topic in the computer vision and robotics fields. This thesis deals with the problem of detecting the presence of debris or any other unexpected objects in footage obtained during spacecraft launches, which poses a challenge because of the non-stationary background. When the background is stationary, moving objects can be detected by frame differencing; therefore, there is a need for background stabilization before tracking any moving object in the scene. Two problems are considered, and in both, footage from a Space Shuttle launch is used with the objective of tracking any debris falling from the Shuttle. The proposed method registers two consecutive frames using FFT-based image registration, where the transformation parameters (translation, rotation) are calculated automatically. This information is then passed to a Kalman filtering stage, which produces a mask image that is used to find high-intensity areas of potential interest.
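One common form of FFT-based image registration for pure translation is phase correlation; the sketch below uses it to estimate the shift between two consecutive frames so the background can be aligned before frame differencing. Whether this exact variant matches the thesis implementation is an assumption, and rotation estimation is omitted.

import numpy as np

def estimate_translation(frame_a, frame_b):
    # Return the integer (dy, dx) shift relating two grayscale frames via phase correlation.
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12     # keep only phase information
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > frame_a.shape[0] // 2:                 # wrap large shifts to negative values
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx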
-
Date Issued
-
2005
-
Identifier
-
CFE0000886, ucf:46628
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000886
-
-
Title
-
MODELING PEDESTRIAN BEHAVIOR IN VIDEO.
-
Creator
-
Scovanner, Paul, Tappen, Marshall, University of Central Florida
-
Abstract / Description
-
The purpose of this dissertation is to address the problem of predicting pedestrian movement and behavior in and among crowds. Specifically, we focus on an agent-based approach where pedestrians are treated individually and parameters for an energy model are trained on real-world video data. These learned pedestrian models are useful in applications such as tracking, simulation, and artificial intelligence. The applications of this method are explored and experimental results show that our trained pedestrian motion model is beneficial for predicting unseen or lost tracks as well as guiding appearance-based tracking algorithms. The method we have developed for training such a pedestrian model operates by optimizing a set of weights governing an aggregate energy function in order to minimize a loss function computed between a model's prediction and annotated ground-truth pedestrian tracks. The formulation of the underlying energy function is such that, using tight convex upper bounds, we are able to efficiently approximate the derivative of the loss function with respect to the parameters of the model. Once this is accomplished, the model parameters are updated using straightforward gradient descent techniques in order to achieve an optimal solution. This formulation also lends itself towards the development of a multiple behavior model. Multiple pedestrian behavior styles, informally referred to as "stereotypes", are common in real data. In our model we show that it is possible, due to the unique ability to compute the derivative of the loss function, to build a new model which utilizes a soft-minimization of single behavior models. This allows unsupervised training of multiple different behavior models in parallel. This novel extension makes our method unique among other methods in the attempt to accurately describe human pedestrian behavior for the myriad of applications that exist. The ability to describe multiple behaviors shows significant improvements in the task of pedestrian motion prediction.
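The training loop described above reduces to gradient descent on the energy-model weights. The sketch below assumes a hypothetical helper, loss_and_grad, that returns the loss between predicted and annotated tracks together with its approximate gradient (which the dissertation obtains through tight convex upper bounds); everything else is illustrative.

import numpy as np

def train_energy_weights(loss_and_grad, tracks_gt, w_init, lr=0.01, steps=200):
    # Plain gradient descent on the weights of the aggregate energy function.
    w = np.array(w_init, dtype=float)
    for _ in range(steps):
        loss, grad = loss_and_grad(w, tracks_gt)   # model prediction vs. ground-truth tracks
        w -= lr * grad                             # gradient-descent update
    return w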
-
Date Issued
-
2011
-
Identifier
-
CFE0004043, ucf:49146
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004043
-
-
Title
-
PREDICTIVE CONTROL FOR DYNAMIC SYSTEMS TO TRACK UNKNOWN INPUT IN THE PRESENCE OF TIME DELAY.
-
Creator
-
Li, Yulan, Qu, Zhihua, University of Central Florida
-
Abstract / Description
-
This study investigated a tracking system to trace an unknown signal in the presence of time delay. A predictive control method is proposed in order to compensate for the time delay. The root locus method is applied when designing the controller, and parameter tuning is carried out through a trial-and-error technique in the w-plane. A state-space equation is derived for the system, with the tracking error chosen as a state. To analyze the asymptotic stability of the proposed predictive control system, a Lyapunov function is constructed. It is shown that the designed system is asymptotically stable when the input signal is a sufficiently low-frequency signal. In order to illustrate the system performance, simulations are done based on the data profile technique. Signal profiles, including an acceleration profile, velocity profile, and trajectory profile, are listed. Based on these profiles, simulations can be carried out and the results can be taken as a good estimate of the practical performance of the designed predictive control system. Signal noise is quite a common phenomenon in practical control systems. When the input signal contains measurement noise, a low-pass filter is designed to filter out the noise and keep the low-frequency input signal. Two typical kinds of noise are considered, i.e., Gaussian noise and pink noise. Simulation results show that the proposed predictive control with low-pass filter design can achieve better performance in the case of both kinds of noise.
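The low-pass filtering mentioned above can be illustrated with a first-order discrete filter that passes the low-frequency input and attenuates measurement noise. The smoothing constant is illustrative, and this is not the filter designed in the thesis.

import numpy as np

def low_pass(signal, alpha=0.1):
    # y[k] = alpha * x[k] + (1 - alpha) * y[k-1]
    y = np.zeros(len(signal))
    y[0] = signal[0]
    for k in range(1, len(signal)):
        y[k] = alpha * signal[k] + (1 - alpha) * y[k - 1]
    return y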
-
Date Issued
-
2005
-
Identifier
-
CFE0000819, ucf:46688
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000819
-
-
Title
-
THE EFFECTS OF ATTENTION DEFICIT/HYPERACTIVITY DISORDER ON FIXATIONS AND SACCADES DURING A SIMULATED DRIVING TASK.
-
Creator
-
Michaelis, Jessica, Smither, Janan, University of Central Florida
-
Abstract / Description
-
Individuals who have Attention Deficit/Hyperactivity Disorder (ADHD) experience adverse effects relating to driving; in addition, they experience deficits in scanning ability (Barkley et al., 1996; Fischer et al., 2007; Munoz et al., 2003; Naja-Raja et al., 2007). The present study examined the effects of ADHD on eye tracking while driving. Ten participants, consisting of both individuals with ADHD and individuals without ADHD, were included in this study. It was hypothesized that individuals who have ADHD would make more saccadic eye movements, and thus shorter fixations, than individuals who do not have ADHD. Furthermore, it was hypothesized that despite making more saccadic eye movements, individuals with ADHD would commit more traffic violations, including collisions, than individuals who do not have such a diagnosis. Findings indicated that hypothesis one was not supported by the data, whereas hypothesis two was supported in that ADHD individuals had more collisions and committed more traffic violations than the control group. Additionally, a chi-square test for independence found a significant difference in the spatial distributions of fixations between the ADHD and control groups. The findings of this study could help better explain the factors involved in ADHD driving and could be used to train individuals with ADHD to become more aware of their surroundings and driving habits and thus become safer drivers.
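A chi-square test of independence like the one reported above can be run with scipy; the fixation counts and screen regions below are entirely hypothetical and are shown only to illustrate the procedure.

from scipy.stats import chi2_contingency

# Rows: ADHD, control; columns: hypothetical screen regions (road, mirrors, dashboard).
fixation_counts = [[120, 45, 35],
                   [150, 70, 20]]
chi2, p, dof, expected = chi2_contingency(fixation_counts)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")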
-
Date Issued
-
2011
-
Identifier
-
CFH0004069, ucf:44791
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004069
-
-
Title
-
A SELF-ORGANIZING HYBRID SENSOR SYSTEM WITH DISTRIBUTED DATA FUSION FOR INTRUDER TRACKING AND SURVEILLANCE.
-
Creator
-
Palaniappan, Ravishankar, Wahid, Parveen, University of Central Florida
-
Abstract / Description
-
A wireless sensor network is a network of distributed nodes, each equipped with its own sensors, computational resources, and transceivers. These sensors are designed to sense specific phenomena over a large geographic area and communicate this information to the user. Most sensor networks are designed to be stand-alone systems that can operate without user intervention for long periods of time. While the use of wireless sensor networks has been demonstrated in various military and commercial applications, their full potential has not been realized, primarily due to the lack of efficient methods to self-organize and cover the entire area of interest. Techniques currently available focus solely on homogeneous wireless sensor networks, either static or mobile, and suffer from device-specific inadequacies such as lack of coverage, power, and fault tolerance. Failing nodes result in coverage loss and breakage in communication connectivity, and hence there is a pressing need for a fault-tolerant system that allows failed nodes to be replaced. In this dissertation, a unique hybrid sensor network is demonstrated that includes a host of mobile sensor platforms. It is shown that the coverage area of the static sensor network can be improved by self-organizing the mobile sensor platforms to allow interaction with the static sensor nodes and thereby increase the coverage area. The performance of the hybrid sensor network is analyzed for a set of N mobile sensors to determine and optimize parameters such as the position of the mobile nodes for maximum coverage of the sensing area without loss of signal between the mobile sensors, static nodes, and the central control station. A novel approach to tracking dynamic targets is also presented. Unlike other tracking methods that rely on computationally complex techniques, the strategy adopted in this work is based on a computationally simple but effective technique of received signal strength indicator measurements. The algorithms developed in this dissertation are based on a number of reasonable assumptions that are easily verified in a densely distributed sensor network and require simple computations that efficiently track the target in the sensor field. False alarm rate, probability of detection, and latency are computed and compared with other published techniques. The performance analysis of the tracking system is done on an experimental testbed and also through simulation, and the improvement in accuracy over other methods is demonstrated.
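One computationally simple way to localize a target from received signal strength indicator (RSSI) readings is a weighted centroid of the strongest-reporting nodes; the sketch below shows that scheme as an illustration only, since the dissertation's specific algorithm is not reproduced here.

import numpy as np

def rssi_weighted_centroid(node_positions, rssi_dbm, k=4):
    # node_positions: (N, 2) static node coordinates; rssi_dbm: (N,) received strengths in dBm.
    idx = np.argsort(rssi_dbm)[-k:]                       # k strongest-reporting nodes
    weights = 10 ** (np.asarray(rssi_dbm)[idx] / 10.0)    # dBm converted to linear power
    pts = np.asarray(node_positions, dtype=float)[idx]
    return (pts * weights[:, None]).sum(axis=0) / weights.sum()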
-
Date Issued
-
2010
-
Identifier
-
CFE0003024, ucf:48347
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003024
-
-
Title
-
CONFORMAL TRACKING FOR VIRTUAL ENVIRONMENTS.
-
Creator
-
Davis, Jr., Larry Dennis, Rolland, Jannick P., University of Central Florida
-
Abstract / Description
-
A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments supporting the full range from VR to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment. This is referred to as the tracking problem, which points to the need for accurate and precise tracking in virtual environments. Marker-based tracking is a technique which employs fiduciary marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. The realization that tracking performance is linked to probe performance necessitates investigation into the design of tracking probes for proponents of marker-based tracking. The challenges involved with probe design include prediction of the accuracy and precision of a tracking probe, the creation of arbitrarily-shaped tracking probes, and the assessment of the newly created probes. To address these issues, we present a pioneering framework for designing conformal tracking probes. Conformal in this work means to adapt to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, the elements used for completing the steps of the framework are discussed. We start with the application of optimization methods for determining the probe geometry. Two overall methods for mapping markers on tracking probes are presented, the Intermediary Algorithm and the Viewpoints Algorithm. Next, we examine the method used for pose estimation and present a mathematical model of error propagation used for predicting probe performance in pose estimation. The model uses a first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we discuss testing of the performance of each probe design. Then, post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe. The predicted accuracy of the probe was 0.06 deg and the measured accuracy was 0.083 plus/minus 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 plus/minus 0.03 deg. The larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 plus/minus 0.02 deg. The third tracking probe was a semi-spherical head tracking probe. The predicted accuracy in orientation and position was 0.54 plus/minus 0.24 deg and 0.24 plus/minus 0.1 mm, respectively.
The experimental accuracy in orientation and position was 0.60 plus/minus 0.03 deg and 0.225 plus/minus 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe, created using the conformal design process. The predicted accuracy of this probe was 0.032 plus/minus 0.02 degrees in orientation and 0.14 plus/minus 0.08 mm in position. The measured accuracy of the probe was 0.028 plus/minus 0.01 degrees in orientation and 0.11 plus/minus 0.01 mm in position.
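The noise-propagation idea above can be approximated with a Monte Carlo sketch: perturb the probe's marker locations with Gaussian noise, re-estimate the rigid pose each time, and look at the spread of the recovered rotation. The Kabsch/SVD fit used here is an assumption standing in for the thesis's pose estimator, and the noise level is illustrative.

import numpy as np

def kabsch_rotation(P, Q):
    # Best-fit rotation mapping centered point set P onto centered point set Q.
    H = (P - P.mean(axis=0)).T @ (Q - Q.mean(axis=0))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def orientation_spread(markers, sigma=0.1, trials=1000):
    # markers: (M, 3) nominal marker positions; sigma: marker noise standard deviation.
    angles = []
    for _ in range(trials):
        noisy = markers + np.random.normal(0.0, sigma, markers.shape)
        R = kabsch_rotation(markers, noisy)
        angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
        angles.append(angle)
    return np.mean(angles), np.std(angles)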
-
Date Issued
-
2004
-
Identifier
-
CFE0000058, ucf:52856
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000058
-
-
Title
-
VOICE TRACK COMPUTER BASED SIMULATION FOR MEDICAL TRAINING.
-
Creator
-
Makwana, Alpesh, Kincaid, J. Peter, University of Central Florida
-
Abstract / Description
-
This study examined whether varying the delivery rate of audio-based text within web-based training increases the effectiveness of the learning process and improves retention when compared with a fixed audio-based text delivery rate. To answer this question, two groups of 20 participants and one group of 10 participants were tested using the Web-based Anatomy & Physiology course modules developed by Medsn, Inc. The control group received a static speed of 128 words per minute, while the first experimental group received an initial speed of 128 words per minute with the option to change the speed of the audio-based text. An additional experimental group received an initial speed of 148 words per minute, also having the option to vary the speed of the audio-based text. A single-variable Analysis of Variance (ANOVA) across the three groups was utilized to examine differences by speed of voice presentation. The results were significant, F(2, 47) = 4.67, p = 0.014, η² = 0.166. The mean for the control group was (M = 7.2, SD = 1.69), with the two experimental groups at (M = 8.4, SD = 1.31) and (M = 8.6, SD = 1.26).
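The F test reported above is a one-way ANOVA across the three presentation-speed groups; the sketch below shows how such a test is computed with scipy. The score arrays are hypothetical, since the study's raw data are not given here.

from scipy.stats import f_oneway

control = [7, 8, 6, 7, 9, 7, 6, 8, 7, 7]            # fixed 128 words per minute
adjustable_128 = [8, 9, 8, 7, 9, 10, 8, 8, 9, 8]    # adjustable, starting at 128 wpm
adjustable_148 = [9, 8, 10, 8, 9, 7, 9, 9, 8, 9]    # adjustable, starting at 148 wpm
F, p = f_oneway(control, adjustable_128, adjustable_148)
print(f"F = {F:.2f}, p = {p:.3f}")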
-
Date Issued
-
2005
-
Identifier
-
CFE0000639, ucf:46533
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000639
-
-
Title
-
STABILIZATION AND TRACKING OF THE VAN DER POL OSCILLATOR.
-
Creator
-
Zhao, Xin, Haralambous, Michael, University of Central Florida
-
Abstract / Description
-
In this thesis, the stabilization and tracking problem of the Van der Pol oscillator is studied using advanced control techniques. First, linear state feedback and linear adaptive state feedback controllers for the stabilization problem are designed. Then, nonlinear state feedback and output feedback controllers are proposed for the tracking problem with known parameters. Finally, a dynamic output feedback controller based on the adaptive backstepping technique is introduced for the tracking problem when all parameters of the Van der Pol system are unknown.
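A linear state-feedback stabilizer of the kind designed first in the thesis can be illustrated by simulating the closed-loop Van der Pol oscillator; the gains and the parameter mu below are placeholders, not the values derived in the thesis.

import numpy as np
from scipy.integrate import solve_ivp

mu, k1, k2 = 1.0, 2.0, 3.0

def closed_loop(t, x):
    x1, x2 = x
    u = -k1 * x1 - k2 * x2                             # linear state feedback
    return [x2, mu * (1.0 - x1 ** 2) * x2 - x1 + u]    # Van der Pol dynamics with control input

sol = solve_ivp(closed_loop, (0.0, 20.0), [2.0, 0.0], max_step=0.01)
print("final state:", sol.y[:, -1])                    # expected to approach the origin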
-
Date Issued
-
2005
-
Identifier
-
CFE0000569, ucf:46444
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000569
-
-
Title
-
BACKGROUND STABILIZATION AND MOTION DETECTION IN LAUNCH PAD VIDEO MONITORING.
-
Creator
-
Gopalan, Kaushik, Kasparis, Takis, University of Central Florida
-
Abstract / Description
-
Automatic detection of moving objects in video sequences is a widely researched topic with application in surveillance operations. Methods based on background cancellation by frame differencing are extremely common. However, this process becomes much more complicated when the background is not completely stable due to camera motion. This thesis considers a space application where surveillance cameras around a shuttle launch site are used to detect any debris from the shuttle. The ground shake due to the impact of the launch causes the background to be shaky. We stabilize the background by translation of each frame, the optimum translation being determined by minimizing the energy difference between consecutive frames. This process is optimized by using a sub-image instead of the whole frame, the sub-image being chosen by taking an edge detection plot of the background and choosing the area with the greatest density of edges as the sub-image of interest. The stabilized sequence is then processed by taking the difference between consecutive frames and marking areas with high intensity as the areas where motion is taking place. The residual noise from the background stabilization part is filtered out by masking the areas where the background has edges, as these areas have the highest probability of false alarms due to background motion.
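The stabilization step above can be sketched as a small search over integer translations that minimizes the energy of the frame difference within the edge-dense sub-image. The search window and the choice of sub-image are simplified assumptions.

import numpy as np

def best_translation(prev, curr, sub_slice, max_shift=5):
    # sub_slice: (row_slice, col_slice) covering the edge-dense sub-image of interest.
    ref = prev[sub_slice].astype(float)
    best, best_energy = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)[sub_slice].astype(float)
            energy = np.sum((shifted - ref) ** 2)    # energy difference for this candidate shift
            if energy < best_energy:
                best, best_energy = (dy, dx), energy
    return best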
-
Date Issued
-
2005
-
Identifier
-
CFE0000801, ucf:46683
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000801
-
-
Title
-
Vehicle Tracking and Classification via 3D Geometries for Intelligent Transportation Systems.
-
Creator
-
Mcdowell, William, Mikhael, Wasfy, Jones, W Linwood, Haralambous, Michael, Atia, George, Mahalanobis, Abhijit, Muise, Robert, University of Central Florida
-
Abstract / Description
-
In this dissertation, we present generalized techniques which allow for the tracking and classification of vehicles by tracking various Point(s) of Interest (PoI) on a vehicle. Tracking the various PoI allows for the composition of those points into 3D geometries which are unique to a given vehicle type. We demonstrate this technique using passive, simulated image-based sensor measurements and three separate inertial track formulations. We demonstrate the capability to classify the 3D geometries in multiple transform domains (PCA & LDA) using Minimum Euclidean Distance, Maximum Likelihood, and Artificial Neural Networks. Additionally, we demonstrate the ability to fuse separate classifiers from multiple domains via Bayesian Networks to achieve ensemble classification.
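Minimum-Euclidean-distance classification in a PCA-transformed domain, one of the combinations named above, can be sketched as follows. Extraction of the 3D-geometry feature vectors from the tracked points is assumed to happen upstream, and this is only an illustrative pipeline.

import numpy as np
from sklearn.decomposition import PCA

def fit_pca_classifier(X_train, y_train, n_components=5):
    # Project training features into a PCA subspace and store per-class means.
    pca = PCA(n_components=n_components).fit(X_train)
    Z = pca.transform(X_train)
    means = {c: Z[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    return pca, means

def classify(x, pca, means):
    # Assign the class whose subspace mean is closest in Euclidean distance.
    z = pca.transform(x.reshape(1, -1))[0]
    return min(means, key=lambda c: np.linalg.norm(z - means[c]))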
-
Date Issued
-
2015
-
Identifier
-
CFE0005976, ucf:50790
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005976
-
-
Title
-
The Impact of Automation Reliability and Fatigue on Reliance.
-
Creator
-
Wohleber, Ryan, Matthews, Gerald, Reinerman, Lauren, Szalma, James, Funke, Gregory, Jentsch, Florian, University of Central Florida
-
Abstract / Description
-
The objective of this research is to inform the design of dynamic interfaces to optimize unmanned aerial vehicle (UAV) operator reliance on automation. A broad goal of the U.S. military is to improve the ratio of UAV operators to UAVs controlled. Accomplishing this goal requires the use of automation; however, the benefits of automation are jeopardized without appropriate operator reliance. To improve reliance on automation, this effort sought to accomplish several objectives organized into phases. The first phase aimed to validate metrics that could be used to gauge operator fatigue online, to understand how the reliability of automated systems influences subjective and objective responses, and to understand how the impact of automation reliability changes with different levels of fatigue. To that end, this study employed a multiple UAV simulation containing several tasks. Findings for a challenging Image Analysis task indicated a decrease in accuracy and reliance with time. Both accuracy and reliance were lower with an unreliable automated decision making aid (60% reliability) than with a reliable automated decision making aid (86.7% reliability). Further, a significant interaction indicated that reliance diminished more quickly when the automated aid was less reliable. Concerning the identification of possible eye tracking measures for fatigue, metrics for percentage of eye closure (PERCLOS), blinks, fixations, and dwell time registered changes with time on task. Fixation metrics registered reliability differences. The second phase sought to use outcomes from the first phase to build two algorithms, based on eye tracking, to drive continuous diagnostic monitoring, one simple and another complex. These algorithms were intended to diagnose the passive fatigue state of UAV operators and used subjective task engagement as the dependent variable. The simple algorithm used PERCLOS and total dwell time within the automated tasking area. The complex algorithm added percent of cognitive fixations and frequency of express fixations. The complex algorithm successfully predicted task engagement, primarily on the strength of percentage of cognitive fixations and express fixation frequency metrics.
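In the spirit of the "complex algorithm" above, a simple regression from the named eye metrics to subjective task engagement could look like the sketch below. The feature names, sample values, and the choice of a plain linear model are all assumptions made for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: PERCLOS, dwell time in the automated tasking area (s),
# percentage of cognitive fixations, express-fixation frequency (hypothetical samples).
X = np.array([[0.08, 48.0, 0.66, 1.4],
              [0.12, 42.0, 0.60, 1.1],
              [0.20, 33.0, 0.48, 0.8],
              [0.27, 28.0, 0.41, 0.5],
              [0.31, 22.0, 0.35, 0.4]])
engagement = np.array([27.0, 25.0, 20.0, 17.0, 14.0])   # hypothetical engagement scores

model = LinearRegression().fit(X, engagement)
print("coefficients:", model.coef_, "intercept:", model.intercept_)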
-
Date Issued
-
2016
-
Identifier
-
CFE0006548, ucf:51323
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006548
-
-
Title
-
SCENE MONITORING WITH A FOREST OF COOPERATIVE SENSORS.
-
Creator
-
Javed, Omar, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
In this dissertation, we present vision-based scene interpretation methods for monitoring of people and vehicles, in real-time, within a busy environment using a forest of co-operative electro-optical (EO) sensors. We have developed novel video understanding algorithms with learning capability, to detect and categorize people and vehicles, track them within a camera, and hand off this information across multiple networked cameras for multi-camera tracking. The ability to learn prevents the need for extensive manual intervention, site models and camera calibration, and provides adaptability to changing environmental conditions. For object detection and categorization in the video stream, a two-step detection procedure is used. First, regions of interest are determined using a novel hierarchical background subtraction algorithm that uses color and gradient information for interest region detection. Second, objects are located and classified from within these regions using a weakly supervised learning mechanism based on co-training that employs motion and appearance features. The main contribution of this approach is that it is an online procedure in which separate views (features) of the data are used for co-training, while the combined view (all features) is used to make classification decisions in a single boosted framework. The advantage of this approach is that it requires only a few initial training samples and can automatically adjust its parameters online to improve the detection and classification performance. Once objects are detected and classified they are tracked in individual cameras. Single camera tracking is performed using a voting-based approach that utilizes color and shape cues to establish correspondence in individual cameras. The tracker has the capability to handle multiple occluded objects. Next, the objects are tracked across a forest of cameras with non-overlapping views. This is a hard problem for two reasons. First, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Second, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in illumination, pose and camera properties. To deal with the first problem, the system learns the inter-camera relationships to constrain track correspondences. These relationships are learned in the form of a multivariate probability density of space-time variables (object entry and exit locations, velocities, and inter-camera transition times) using Parzen windows. To handle the appearance change of an object as it moves from one camera to another, we show that all color transfer functions from a given camera to another camera lie in a low dimensional subspace. The tracking algorithm learns this subspace by using probabilistic principal component analysis and uses it for appearance matching. The proposed system learns the camera topology and subspace of inter-camera color transfer functions during a training phase. Once the training is complete, correspondences are assigned using the maximum a posteriori (MAP) estimation framework using both the location and appearance cues. Extensive experiments and deployment of this system in realistic scenarios have demonstrated the robustness of the proposed methods. The proposed system was able to detect and classify targets, and seamlessly tracked them across multiple cameras.
It also generated a summary in terms of key frames and textual description of trajectories to a monitoring officer for final analysis and response decision. This level of interpretation was the goal of our research effort, and we believe that it is a significant step forward in the development of intelligent systems that can deal with the complexities of real world scenarios.
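The Parzen-window density over space-time transition variables mentioned above can be sketched with a Gaussian kernel density estimate; the variable layout and the use of scipy's estimator are assumptions, and the training array is a stand-in for real correspondences.

import numpy as np
from scipy.stats import gaussian_kde

# Each column is one observed inter-camera transition:
# [exit_x, exit_y, entry_x, entry_y, transition_time_seconds]
training = np.random.rand(5, 200)            # stand-in for real training transitions
density = gaussian_kde(training)             # Parzen-window (kernel) density estimate

candidate = np.array([[0.4], [0.9], [0.1], [0.2], [0.5]])
print("transition likelihood:", density(candidate)[0])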
-
Date Issued
-
2005
-
Identifier
-
CFE0000497, ucf:46362
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000497
-
-
Title
-
REAL-TIME MONOCULAR VISION-BASED TRACKING FOR INTERACTIVE AUGMENTED REALITY.
-
Creator
-
Spencer, Lisa, Guha, Ratan, University of Central Florida
-
Abstract / Description
-
The need for real-time video analysis is rapidly increasing in today's world. The decreasing cost of powerful processors and the proliferation of affordable cameras, combined with needs for security, methods for searching the growing collection of video data, and an appetite for high-tech entertainment, have produced an environment where video processing is utilized for a wide variety of applications. Tracking is an element in many of these applications, for purposes like detecting anomalous behavior, classifying video clips, and measuring athletic performance. In this dissertation we focus on augmented reality, but the methods and conclusions are applicable to a wide variety of other areas. In particular, our work deals with achieving real-time performance while tracking with augmented reality systems using a minimum set of commercial hardware. We have built prototypes that use both existing technologies and new algorithms we have developed. While performance improvements would be possible with additional hardware, such as multiple cameras or parallel processors, we have concentrated on getting the most performance with the least equipment. Tracking is a broad research area, but an essential component of an augmented reality system. Tracking of some sort is needed to determine the location of scene augmentation. First, we investigated the effects of illumination on the pixel values recorded by a color video camera. We used the results to track a simple solid-colored object in our first augmented reality application. Our second augmented reality application tracks complex non-rigid objects, namely human faces. In the color experiment, we studied the effects of illumination on the color values recorded by a real camera. Human perception is important for many applications, but our focus is on the RGB values available to tracking algorithms. Since the lighting in most environments where video monitoring is done is close to white (e.g., fluorescent lights in an office, incandescent lights in a home, or direct and indirect sunlight outside), we looked at the response to "white" light sources as the intensity varied. The red, green, and blue values recorded by the camera can be converted to a number of other color spaces which have been shown to be invariant to various lighting conditions, including view angle, light angle, light intensity, or light color, using models of the physical properties of reflection. Our experiments show how well these derived quantities actually remained constant with real materials, real lights, and real cameras, while still retaining the ability to discriminate between different colors. This color experiment enabled us to find color spaces that were more invariant to changes in illumination intensity than the ones traditionally used. The first augmented reality application tracks a solid colored rectangle and replaces the rectangle with an image, so it appears that the subject is holding a picture instead. Tracking this simple shape is both easy and hard; easy because of the single color and the shape that can be represented by four points or four lines, and hard because there are fewer features available and the color is affected by illumination changes. Many algorithms for tracking fixed shapes do not run in real time or require rich feature sets. We have created a tracking method for simple solid colored objects that uses color and edge information and is fast enough for real-time operation.
We also demonstrate a fast deinterlacing method to avoid "tearing" of fast moving edges when recorded by an interlaced camera, and optimization techniques that usually achieved a speedup of about 10 from an implementation that already used optimized image processing library routines. Human faces are complex objects that differ between individuals and undergo non-rigid transformations. Our second augmented reality application detects faces, determines their initial pose, and then tracks changes in real time. The results are displayed as virtual objects overlaid on the real video image. We used existing algorithms for motion detection and face detection. We present a novel method for determining the initial face pose in real time using symmetry. Our face tracking uses existing point tracking methods as well as extensions to Active Appearance Models (AAMs). We also give a new method for integrating detection and tracking data and leveraging the temporal coherence in video data to mitigate the false positive detections. While many face tracking applications assume exactly one face is in the image, our techniques can handle any number of faces. The color experiment along with the two augmented reality applications provide improvements in understanding the effects of illumination intensity changes on recorded colors, as well as better real-time methods for detection and tracking of solid shapes and human faces for augmented reality. These techniques can be applied to other real-time video analysis tasks, such as surveillance and video analysis.
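One classic illumination-intensity-invariant color space in this line of work is normalized rgb chromaticity, which divides out overall brightness at each pixel; whether this is among the exact spaces adopted in the dissertation is an assumption.

import numpy as np

def normalized_rgb(image):
    # image: (H, W, 3) RGB array; returns chromaticity with r + g + b = 1 per pixel.
    img = image.astype(float)
    total = img.sum(axis=2, keepdims=True)
    return img / np.maximum(total, 1e-6)     # invariant to uniform scaling of intensity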
-
Date Issued
-
2006
-
Identifier
-
CFE0001075, ucf:46786
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001075
-
-
Title
-
ANALYSIS AND DESIGN OF A MODULAR SOLAR-FED FAULT-TOLERANT POWER SYSTEM WITH MAXIMUM POWER POINT TRACKING.
-
Creator
-
Al-Atrash, Hussam, Batarseh, Issa, University of Central Florida
-
Abstract / Description
-
Solar power is becoming ever more popular in a variety of applications. It is particularly attractive because of its abundance, renewability, and environmental friendliness. Solar-powered spacecraft systems have ever-expanding loads with stringent power regulation specifications. Moreover, they require a light and compact design of their power system. These constraints make the optimization of power harvest from solar arrays a critical task. The Florida Power Electronics Center (FPEC) at UCF set out to develop a modular fault-tolerant power system architecture for space applications. This architecture provides a number of very attractive features, including Maximum Power Point Tracking (MPPT) and uniform power stress distribution across the system. MPPT is a control technique that leads the system to operate its solar sources at the point where they provide maximum power. This point constantly moves following changes in ambient operating conditions. A digital controller is set up to locate it in real time while optimizing other operating parameters. This control scheme can increase the energy yield of the system by up to 45%, and thus significantly reduces the size and weight of the designed system. The modularity of the system makes it easy to prototype and expand. It boosts its reliability and allows on-line reconfiguration and maintenance, thus reducing downtime upon faults. This thesis targets the analysis and optimization of this architecture. A new modeling technique is introduced for MPPT in practical environments, and a novel digital power stress distribution scheme is proposed in order to properly distribute peak and thermal stress and improve reliability. A 2 kW four-channel prototype of the system was built and tested. Experimental results confirm the theoretical improvements, and promise great success in the field.
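A classic way to follow the maximum power point in real time is the perturb-and-observe loop sketched below; it is shown only as a common baseline, not as the digital tracking scheme developed in the thesis, and read_panel/set_duty are hypothetical hardware hooks.

def perturb_and_observe(read_panel, set_duty, d0=0.5, step=0.01, iterations=100):
    # read_panel() -> (voltage, current); set_duty(d) applies a converter duty cycle in [0, 1].
    d, p_prev, direction = d0, 0.0, 1
    for _ in range(iterations):
        set_duty(d)
        v, i = read_panel()
        p = v * i
        if p < p_prev:                 # power dropped, so reverse the perturbation direction
            direction = -direction
        d = min(max(d + direction * step, 0.0), 1.0)
        p_prev = p
    return d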
-
Date Issued
-
2005
-
Identifier
-
CFE0000469, ucf:46357
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000469
-
-
Title
-
MULTI-VIEW APPROACHES TO TRACKING, 3D RECONSTRUCTION AND OBJECT CLASS DETECTION.
-
Creator
-
Khan, Saad, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Multi-camera systems are becoming ubiquitous and have found application in a variety of domains including surveillance, immersive visualization, sports entertainment and movie special effects amongst others. From a computer vision perspective, the challenging task is how to most efficiently fuse information from multiple views in the absence of detailed calibration information and a minimum of human intervention. This thesis presents a new approach to fuse foreground likelihood information from multiple views onto a reference view without explicit processing in 3D space, thereby circumventing the need for complete calibration. Our approach uses a homographic occupancy constraint (HOC), which states that if a foreground pixel has a piercing point that is occupied by a foreground object, then the pixel warps to foreground regions in every view under homographies induced by the reference plane, in effect using cameras as occupancy detectors. Using the HOC we are able to resolve occlusions and robustly determine ground plane localizations of the people in the scene. To find tracks we obtain ground localizations over a window of frames and stack them creating a space time volume. Regions belonging to the same person form contiguous spatio-temporal tracks that are clustered using a graph cuts segmentation approach. Second, we demonstrate that the HOC is equivalent to performing visual hull intersection in the image-plane, resulting in a cross-sectional slice of the object. The process is extended to multiple planes parallel to the reference plane in the framework of plane to plane homologies. Slices from multiple planes are accumulated and the 3D structure of the object is segmented out. Unlike other visual hull based approaches that use 3D constructs like visual cones, voxels or polygonal meshes requiring calibrated views, ours is purely image-based and uses only 2D constructs, i.e., planar homographies between views. This feature also renders it conducive to graphics hardware acceleration. The current GPU implementation of our approach is capable of fusing 60 views (480x720 pixels) at the rate of 50 slices/second. We then present an extension of this approach to reconstructing non-rigid articulated objects from monocular video sequences. The basic premise is that due to motion of the object, scene occupancies are blurred out with non-occupancies in a manner analogous to motion-blurred imagery. Using our HOC and a novel construct: the temporal occupancy point (TOP), we are able to fuse multiple views of non-rigid objects obtained from a monocular video sequence. The result is a set of blurred scene occupancy images in the corresponding views, where the values at each pixel correspond to the fraction of total time duration that the pixel observed an occupied scene location. We then use a motion de-blurring approach to de-blur the occupancy images and obtain the 3D structure of the non-rigid object. In the final part of this thesis, we present an object class detection method employing 3D models of rigid objects constructed using the above 3D reconstruction approach. Instead of using a complicated mechanism for relating multiple 2D training views, our approach establishes spatial connections between these views by mapping them directly to the surface of a 3D model. To generalize the model for object class detection, features from supplemental views (obtained from Google Image search) are also considered.
Given a 2D test image, correspondences between the 3D feature model and the testing view are identified by matching the detected features. Based on the 3D locations of the corresponding features, several hypotheses of viewing planes can be made. The one with the highest confidence is then used to detect the object using feature location matching. Performance of the proposed method has been evaluated by using the PASCAL VOC challenge dataset and promising results are demonstrated.
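The fusion of foreground likelihoods under reference-plane homographies can be sketched as warping each view's foreground map into the reference view and multiplying the warped maps, so only pixels that are foreground in every view survive. The homographies are assumed to be given, and this is only an illustration of the idea, not the authors' GPU implementation.

import numpy as np
import cv2

def fuse_foreground(foreground_maps, homographies, ref_shape):
    # foreground_maps: list of (H, W) float likelihood maps; homographies: 3x3 view-to-reference.
    fused = np.ones(ref_shape, dtype=np.float32)
    for fg, H in zip(foreground_maps, homographies):
        warped = cv2.warpPerspective(fg.astype(np.float32), np.asarray(H, dtype=np.float64),
                                     (ref_shape[1], ref_shape[0]))
        fused *= warped                  # product keeps only mutually supported occupancy
    return fused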
-
Date Issued
-
2008
-
Identifier
-
CFE0002073, ucf:47593
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002073
-
-
Title
-
SYSTEM IDENTIFICATION AND FAULT DETECTION OF COMPLEX SYSTEMS.
-
Creator
-
Luo, Dapeng, Leonessa, Alexander, University of Central Florida
-
Abstract / Description
-
The proposed research is devoted to devising system identification and fault detection approaches and algorithms for a system characterized by nonlinear dynamics. Mathematical models of dynamical systems and fault models are built based on observed data from systems. In particular, we will focus on statistical subspace instrumental variable methods which allow the consideration of an appealing mathematical model in many control applications consisting of a nonlinear feedback system with nonlinearities at both inputs and outputs. Different solutions within the proposed framework are presented to solve the system identification and fault detection problems. Specifically, Augmented Subspace Instrumental Variable Identification (ASIVID) approaches are proposed to identify the closed-loop nonlinear Hammerstein systems. Then fast approaches are presented to determine the system order. Hard-over failures are detected by order determination approaches when failures manifest themselves as rank deficiencies of the dynamical systems. Geometric interpretations of subspace tracking theorems are presented in this dissertation in order to propose a fault tolerance strategy. Possible fields of application considered in this research include manufacturing systems, autonomous vehicle systems, space systems and burgeoning bio-mechanical systems.
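The rank-based idea above, estimating system order and flagging hard-over failures as rank drops, can be illustrated by counting the significant singular values of a data (Hankel) matrix. The relative threshold is an assumption, and this is not the dissertation's specific algorithm.

import numpy as np

def effective_order(hankel_matrix, tol=1e-3):
    # Number of singular values above tol relative to the largest one.
    s = np.linalg.svd(hankel_matrix, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

def rank_drop_detected(order_nominal, hankel_matrix, tol=1e-3):
    # A drop in effective rank below the nominal order signals a possible hard-over failure.
    return effective_order(hankel_matrix, tol) < order_nominal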
-
Date Issued
-
2006
-
Identifier
-
CFE0000915, ucf:46756
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000915
-
-
Title
-
DIGITAL CONTROLLER IMPLEMENTATION FOR DISTURBANCE REJECTION IN THE OPTICAL COUPLING OF A MOBILE EXPERIMENTAL LASER TRACKING SYSTEM.
-
Creator
-
Rhodes, Matthew, Richie, Samuel, University of Central Florida
-
Abstract / Description
-
Laser tracking systems are an important aspect of the NASA space program, in particular for conducting research in relation to satellites and space port launch vehicles. Often, launches are conducted at remote sites which require all of the test equipment, including the laser tracking systems, to be portable. Portable systems are more susceptible to environmental disturbances which affect the overall tracking resolution, and consequently, the resolution of any other experimental data being collected at any given time. This research characterizes the optical coupling between two systems in a Mobile Experimental Laser Tracking system and evaluates several control solutions to minimize disturbances within this coupling. A simulation of the optical path was developed in an extensible manner such that different control systems could be easily implemented. For an initial test, several PID controllers were utilized in parallel in order to control mirrors in the optical coupling. Despite many limiting factors of the hardware, a simple proportional control performed to expectations. Although a system implementation was never field tested, the simulation results provide the necessary insight to develop the system further. Recommendations were made for future system modifications which would allow an even higher tracking resolution.
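A discrete PID loop of the kind evaluated for the mirror control above can be sketched as follows; the gains and sample time are placeholders, and the thesis notes that simple proportional control already performed to expectations on its hardware.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        # One control step: proportional + integral + derivative terms on the error.
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative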
-
Date Issued
-
2006
-
Identifier
-
CFE0001168, ucf:46873
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001168
-
-
Title
-
Human Action Detection, Tracking and Segmentation in Videos.
-
Creator
-
Tian, Yicong, Shah, Mubarak, Bagci, Ulas, Liu, Fei, Walker, John, University of Central Florida
-
Abstract / Description
-
This dissertation addresses the problem of human action detection, human tracking and segmentation in videos. They are fundamental tasks in computer vision and are extremely challenging to solve in realistic videos. We first propose a novel approach for action detection by exploring the generalization of deformable part models from 2D images to 3D spatiotemporal volumes. By focusing on the most distinctive parts of each action, our models adapt to intra-class variation and show robustness to clutter. This approach deals with detecting action performed by a single person. When there are multiple humans in the scene, humans need to be segmented and tracked from frame to frame before action recognition can be performed. Next, we propose a novel approach for multiple object tracking (MOT) by formulating detection and data association in one framework. Our method allows us to overcome the confinements of data association based MOT approaches, where the performance is dependent on the object detection results provided at input level. We show that automatically detecting and tracking targets in a single framework can help resolve the ambiguities due to frequent occlusion and heavy articulation of targets. In this tracker, targets are represented by bounding boxes, which is a coarse representation. However, pixel-wise object segmentation provides fine level information, which is desirable for later tasks. Finally, we propose a tracker that simultaneously solves three main problems: detection, data association and segmentation. This is especially important because the output of each of those three problems are highly correlated and the solution of one can greatly help improve the others. The proposed approach achieves more accurate segmentation results and also helps better resolve typical difficulties in multiple target tracking, such as occlusion, ID-switch and track drifting.
-
Date Issued
-
2018
-
Identifier
-
CFE0007378, ucf:52069
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007378