Current Search: Motion
-
-
Title
-
TAMING CROWDED VISUAL SCENES.
-
Creator
-
Ali, Saad, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Computer vision algorithms have played a pivotal role in commercial video surveillance systems for a number of years. However, a common weakness among these systems is their inability to handle crowded scenes. In this thesis, we have developed algorithms that overcome some of the challenges encountered in videos of crowded environments such as sporting events, religious festivals, parades, concerts, train stations, airports, and malls. We adopt a top-down approach by first performing a global-level analysis that locates dynamically distinct crowd regions within the video. This knowledge is then employed in the detection of abnormal behaviors and tracking of individual targets within crowds. In addition, the thesis explores the utility of contextual information necessary for persistent tracking and re-acquisition of objects in crowded scenes. For the global-level analysis, a framework based on Lagrangian particle dynamics is proposed to segment the scene into dynamically distinct crowd regions or groupings. For this purpose, the spatial extent of the video is treated as a phase space of a time-dependent dynamical system in which transport from one region of the phase space to another is controlled by the optical flow. Next, a grid of particles is advected forward in time through the phase space using numerical integration to generate a "flow map". The flow map relates the initial positions of particles to their final positions. The spatial gradients of the flow map are used to compute a Cauchy-Green deformation tensor that quantifies the amount by which neighboring particles diverge over the length of the integration. The maximum eigenvalue of the tensor is used to construct a forward Finite Time Lyapunov Exponent (FTLE) field that reveals the attracting Lagrangian Coherent Structures (LCS). The same process is repeated by advecting the particles backward in time to obtain a backward FTLE field that reveals the repelling LCS.
The attracting and repelling LCS are the time-dependent invariant manifolds of the phase space and correspond to the boundaries between dynamically distinct crowd flows. The forward and backward FTLE fields are combined to obtain one scalar field that is segmented using a watershed segmentation algorithm to obtain the labeling of distinct crowd-flow segments. Next, abnormal behaviors within the crowd are localized by detecting changes in the number of crowd-flow segments over time. Then, the global-level knowledge of the scene generated by the crowd-flow segmentation is used as an auxiliary source of information for tracking an individual target within a crowd. This is achieved by developing a scene structure-based force model. This force model captures the notion that an individual, when moving in a particular scene, is subjected to global and local forces that are functions of the layout of that scene and the locomotive behavior of other individuals in his or her vicinity. The key ingredients of the force model are three floor fields that are inspired by research in the field of evacuation dynamics; namely, the Static Floor Field (SFF), the Dynamic Floor Field (DFF), and the Boundary Floor Field (BFF). These fields determine the probability of moving from one location to the next by converting the long-range forces into local forces. The SFF specifies regions of the scene that are attractive in nature, such as an exit location. The DFF, which is based on the idea of active walker models, corresponds to the virtual traces created by the movements of nearby individuals in the scene. The BFF specifies influences exhibited by the barriers within the scene, such as walls and no-entry areas. By combining influence from all three fields with the available appearance information, we are able to track individuals in high-density crowds. The results are reported on real-world sequences of marathons and railway stations that contain thousands of people.
A comparative analysis with respect to an appearance-based mean shift tracker is also conducted by generating the ground truth. The result of this analysis demonstrates the benefit of using floor fields in crowded scenes. Occlusion is very frequent in crowded scenes due to the high number of interacting objects. To overcome this challenge, we propose an algorithm that augments a generic tracking algorithm to perform persistent tracking in crowded environments. The algorithm exploits contextual knowledge, which is divided into two categories: motion context (MC) and appearance context (AC). The MC is a collection of trajectories that are representative of the motion of the occluded or unobserved object. These trajectories belong to other moving individuals in a given environment. The MC is constructed using a clustering scheme based on the Lyapunov Characteristic Exponent (LCE), which measures the mean exponential rate of convergence or divergence of nearby trajectories in a given state space. Next, the MC is used to predict the location of the occluded or unobserved object in a regression framework. It is important to note that the LCE is used for measuring divergence between a pair of particles, while the FTLE field is obtained by computing the LCE for a grid of particles. The appearance context (AC) of a target object consists of its own appearance history and appearance information of the other objects that are occluded. The intent is to make the appearance descriptor of the target object more discriminative with respect to other unobserved objects, thereby reducing the possible confusion between the unobserved objects upon re-acquisition. This is achieved by learning the distribution of the intra-class variation of each occluded object using all of its previous observations. In addition, a distribution of inter-class variation for each target-unobservable object pair is constructed.
Finally, the re-acquisition decision is made using both the MC and the AC.
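The FTLE construction described in this abstract (advect a particle grid, differentiate the flow map, take the largest eigenvalue of the Cauchy-Green tensor) can be sketched in a few lines of NumPy. The shear velocity field and grid size below are synthetic stand-ins, not the thesis data:

```python
import numpy as np

def ftle_field(velocity, x, y, T=1.0, steps=20):
    """Advect a particle grid through `velocity(px, py)` for time T and
    return the forward FTLE field from the Cauchy-Green deformation tensor."""
    px, py = np.meshgrid(x, y)          # initial particle positions
    dt = T / steps
    for _ in range(steps):              # forward Euler advection -> flow map
        u, v = velocity(px, py)
        px, py = px + dt * u, py + dt * v
    # spatial gradients of the flow map (final positions w.r.t. initial grid)
    dxdX = np.gradient(px, x, axis=1); dxdY = np.gradient(px, y, axis=0)
    dydX = np.gradient(py, x, axis=1); dydY = np.gradient(py, y, axis=0)
    ftle = np.zeros_like(px)
    for i in range(px.shape[0]):
        for j in range(px.shape[1]):
            F = np.array([[dxdX[i, j], dxdY[i, j]],
                          [dydX[i, j], dydY[i, j]]])
            C = F.T @ F                 # Cauchy-Green deformation tensor
            lam_max = np.linalg.eigvalsh(C)[-1]
            ftle[i, j] = np.log(max(lam_max, 1e-12)) / (2.0 * abs(T))
    return ftle

# Two opposing horizontal streams: an FTLE ridge is expected along y = 0.
shear = lambda px, py: (np.tanh(5 * py), np.zeros_like(px))
grid = np.linspace(-1, 1, 41)
f = ftle_field(shear, grid, grid)
```

Ridges of the resulting field (here along y = 0, where the two streams oppose each other) mark the LCS boundaries between dynamically distinct flows, which the thesis then segments with a watershed transform.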
-
Date Issued
-
2008
-
Identifier
-
CFE0002135, ucf:47507
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002135
-
-
Title
-
The Negro in Hollywood films.
-
Creator
-
Jerome, Victor Jeremy
-
Date Issued
-
1950
-
Identifier
-
1745500, CFDT1745500, ucf:4784
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/FCLA/DT/1745500
-
-
Title
-
CORRELATION OF ACOUSTIC EMISSION PARAMETERS WITH WEIGHT AND VELOCITY OF MOVING VEHICLES.
-
Creator
-
Kolgaonkar, Amar, Moslehy, Faissal, University of Central Florida
-
Abstract / Description
-
The thesis is motivated by the goal of conducting an initial investigation and experimentation toward the development of a Weigh-in-Motion (WIM) system based on the acoustic emission phenomenon. A great deal of research is under way on measuring the weight of moving vehicles. Weigh-in-motion of commercial vehicles is essential for management of freight traffic, highway infrastructure design and maintenance, and monitoring of heavyweight vehicles. The research presents a methodology for correlating the weight of a moving vehicle with acoustic emission parameters (such as counts and energy). Furthermore, a correlation between the speed of the vehicle and the acoustic emission parameters is developed. Preliminary analyses and experiments were conducted to study the propagation of acoustic signals in plate-like structures and the effect of dynamic loading on the Kaiser effect. Initial testing revealed a linear correlation between the impact force and the acoustic emission parameters. A second-order polynomial regression was also found between the speed of the vehicle and the acoustic emission parameters. Road testing was conducted to investigate the correlation between the weight of the vehicle and the acoustic emission parameters. A linear relation was found between the weight of the vehicle and the acoustic emission parameters represented by counts, signal energy, and absolute energy.
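The reported relations (linear in load, second-order polynomial in speed) are ordinary least-squares fits. A generic sketch of the linear weight-vs-counts fit follows; the calibration numbers are synthetic, not the thesis data:

```python
import numpy as np

# Hypothetical calibration data: axle weight (kN) vs. AE counts per pass.
weight = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
counts = np.array([120.0, 235.0, 348.0, 470.0, 590.0])

# Linear fit: counts ~ a * weight + b (least squares)
a, b = np.polyfit(weight, counts, 1)
predicted = a * weight + b
# Correlation between measured and fitted counts gauges the linear relation.
r = np.corrcoef(counts, predicted)[0, 1]
```

The speed correlation would use the same call with degree 2 (`np.polyfit(speed, counts, 2)`).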
-
Date Issued
-
2005
-
Identifier
-
CFE0000490, ucf:46354
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000490
-
-
Title
-
BACKGROUND STABILIZATION AND MOTION DETECTION IN LAUNCH PAD VIDEO MONITORING.
-
Creator
-
Gopalan, Kaushik, Kasparis, Takis, University of Central Florida
-
Abstract / Description
-
Automatic detection of moving objects in video sequences is a widely researched topic with applications in surveillance operations. Methods based on background cancellation by frame differencing are extremely common. However, this process becomes much more complicated when the background is not completely stable due to camera motion. This thesis considers a space application in which surveillance cameras around a shuttle launch site are used to detect any debris from the shuttle. The ground shake caused by the impact of the launch makes the background unstable. We stabilize the background by translating each frame, the optimum translation being determined by minimizing the energy difference between consecutive frames. This process is optimized by using a sub-image instead of the whole frame; the sub-image is chosen by taking an edge-detection plot of the background and selecting the area with the greatest density of edges. The stabilized sequence is then processed by taking the difference between consecutive frames and marking areas of high intensity as the areas where motion is taking place. The residual noise from the background stabilization step is filtered out by masking the areas where the background has edges, as these areas have the highest probability of false alarms due to background motion.
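The stabilization step (search for the integer translation minimizing the energy of the inter-frame difference, then difference the aligned frames) can be sketched as below. The search window and synthetic frame contents are assumptions; a real implementation would restrict the search to the edge-dense sub-image:

```python
import numpy as np

def best_shift(prev, curr, max_shift=4):
    """Return the (dy, dx) translation of `curr` that minimizes the sum of
    squared differences against `prev` (np.roll wraps around; adequate for
    a small-shift sketch)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
            err = np.sum((shifted.astype(float) - prev) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic frames: a bright square displaced by camera shake of (-2, +3).
prev = np.zeros((64, 64)); prev[20:30, 20:30] = 1.0
curr = np.roll(np.roll(prev, -2, axis=0), 3, axis=1)

dy, dx = best_shift(prev, curr)
stabilized = np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
motion = np.abs(stabilized - prev)   # frame difference after stabilization
```

With the shake compensated, residual high-intensity pixels in `motion` mark genuine moving objects rather than camera jitter.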
-
Date Issued
-
2005
-
Identifier
-
CFE0000801, ucf:46683
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000801
-
-
Title
-
Applications of Compressive Sensing To Surveillance Problems.
-
Creator
-
Huff, Christopher, Mohapatra, Ram, Sun, Qiyu, Han, Deguang, University of Central Florida
-
Abstract / Description
-
In many surveillance scenarios, one concern that arises is how to construct an imager that is capable of capturing the scene with high fidelity. This could be problematic for two reasons: first, the optics and electronics in the camera may have difficulty dealing with so much information; second, bandwidth constraints may make it difficult to transmit information from the imager to the user efficiently for reconstruction or realization. In this thesis, we discuss a mathematical framework that is capable of skirting these two issues. This framework is rooted in a technique commonly referred to as compressive sensing. We explore two of the seminal works in compressive sensing and present the key theorems and definitions from these two papers. We then survey three different surveillance scenarios and their respective compressive sensing solutions. The original contribution of this thesis is the development of a distributed compressive sensing model.
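Compressive sensing recovers a sparse signal x from m ≪ n random measurements y = Ax. A minimal sketch using orthogonal matching pursuit, one standard recovery algorithm chosen here for brevity (the thesis itself surveys the seminal ℓ1-based formulations); the dimensions and sparsity level are illustrative assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the k columns of A that
    best explain y, refitting coefficients by least squares each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [3.0, -2.0, 1.5]  # sparse scene
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                  # m compressive measurements
x_hat = omp(A, y, k)
```

With 40 random measurements of a 128-dimensional 3-sparse signal, the sparse vector is recovered exactly, which is the property that lets an imager capture far fewer samples than pixels.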
-
Date Issued
-
2012
-
Identifier
-
CFE0004317, ucf:49473
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004317
-
-
Title
-
DEPTH FROM DEFOCUSED MOTION.
-
Creator
-
Myles, Zarina, da Vitoria Lobo, Niels, University of Central Florida
-
Abstract / Description
-
Motion in depth and/or zooming causes defocus blur. This work presents a solution to the problem of using defocus blur and optical flow information to compute depth at points that defocus when they move. We first formulate a novel algorithm which recovers defocus blur and affine parameters simultaneously. Next we formulate a novel relationship (the blur-depth relationship) between defocus blur, relative object depth, and three parameters based on camera motion and intrinsic camera parameters. We can handle the situation where a single image has points which have defocused, become sharper, or are focally unperturbed. Moreover, our formulation is valid regardless of whether the defocus is due to the image plane being in front of or behind the point of sharp focus. The blur-depth relationship requires a sequence of at least three images taken with the camera moving either towards or away from the object. It can be used to obtain an initial estimate of relative depth using one of several non-linear methods. We demonstrate a solution based on the Extended Kalman Filter in which the measurement equation is the blur-depth relationship. The estimate of relative depth is then used to compute an initial estimate of camera motion parameters. In order to refine depth values, the values of relative depth and camera motion are then input into a second Extended Kalman Filter in which the measurement equations are the discrete motion equations. This set of cascaded Kalman filters can be employed iteratively over a longer sequence of images in order to further refine depth. We conduct several experiments on real scenery in order to demonstrate the range of object shapes that the algorithm can handle. We show that fairly good estimates of depth can be obtained with just three images.
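The cascaded filtering stage can be illustrated with a generic Extended Kalman Filter measurement update. The inverse-depth measurement model below (blur proportional to 1/depth) is a simplified stand-in for the thesis's blur-depth relationship, and all numbers are assumptions:

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF measurement update: linearize the nonlinear measurement
    function h around the current state estimate x."""
    H = H_jac(x)                          # Jacobian of h at x
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - h(x))            # correct state with innovation
    P_new = (np.eye(len(x)) - K @ H) @ P  # shrink state covariance
    return x_new, P_new

# Toy blur-depth-style measurement: observed blur ~ c / depth.
c = 2.0
h = lambda x: np.array([c / x[0]])
H_jac = lambda x: np.array([[-c / x[0] ** 2]])

x0 = np.array([1.5])            # prior relative-depth guess
P0 = np.array([[0.5]])          # prior uncertainty
z = np.array([c / 2.0])         # blur observed for a true depth of 2
x1, P1 = ekf_update(x0, P0, z, h, H_jac, np.array([[1e-4]]))
```

A single update pulls the depth estimate toward the value consistent with the blur observation and reduces its covariance; the thesis chains such filters, feeding refined depth into a second EKF driven by the discrete motion equations.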
-
Date Issued
-
2004
-
Identifier
-
CFE0000135, ucf:46179
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000135
-
-
Title
-
TOWARDS CALIBRATION OF OPTICAL FLOW OF CROWD VIDEOS USING OBSERVED TRAJECTORIES.
-
Creator
-
Elbadramany, Iman, Kaup, David, University of Central Florida
-
Abstract / Description
-
The need exists for finding a quantitative method for validating crowd simulations. One approach is to use optical flow of videos of real crowds to obtain velocities that can be used for comparison to simulations. Optical flow, in turn, needs to be calibrated to be useful. It is essential to show that optical flow velocities obtained from crowd videos can be mapped into the spatially averaged velocities of the observed trajectories of crowd members, and to quantify the extent of the correlation of the results. This research investigates methods to uncover the best conditions for a good correlation between optical flow and the average motion of individuals in crowd videos, with the aim that this will help in the quantitative validation of simulations. The first approach was to use a simple linear proportionality relation, with a single coefficient, alpha, between the velocity vector of the optical flow and the observed velocity of crowd members in a video or simulation. Since many variables affect alpha, an attempt was made to find the best possible conditions for determining alpha by varying experimental and optical flow settings. The measure of a good alpha was chosen to be that alpha does not vary excessively over a number of video frames. Using the Lucas-Kanade optical flow algorithm, the lowest coefficient of variation of alpha was obtained with a larger aperture of 15x15 pixels combined with a smaller threshold. Adequate results were found at a cell size of 40x40 pixels; the improvement in detecting details when smaller cells are used did not reduce the variability of alpha, and required much more computing power. Reduction in the variability of alpha can be obtained by spreading the tracked location of a crowd member from a pixel into a rectangle.

The Particle Image Velocimetry optical flow algorithm had better correspondence with the velocity vectors of manually tracked crowd members than results obtained using the Lucas-Kanade method. Here, also, it was found that 40x40 pixel cells were better than 15x15. A second attempt at quantifying the correlation between optical flow and actual crowd member velocities was studied using simulations. Two processes were researched, both utilizing geometric correction of the perspective distortion of the crowd videos. One process geometrically corrects the video and then obtains optical flow data; the other obtains optical flow data from the video and then geometrically corrects the data. The results indicate that the first process worked better. Correlation was calculated between sets of data obtained from the average of twenty frames; this was found to be higher than correlating the velocities of cells in each pair of frames. Finally, an experiment to predict crowd tracks using optical flow and a calculated parameter, beta, gave promising results.
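The calibration coefficient alpha and its coefficient of variation, the quality measure used above, reduce to a few lines. The per-frame velocity arrays below are synthetic, and alpha is taken here as tracked speed over flow speed, which is one of the two possible conventions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 30
# Hypothetical per-frame average speeds (px/frame): manually tracked crowd
# members vs. the corresponding optical-flow cells (flow underestimates).
v_tracked = rng.uniform(0.8, 1.2, n_frames)
v_flow = 0.6 * v_tracked + rng.normal(0.0, 0.01, n_frames)

alpha = v_tracked / v_flow          # per-frame calibration coefficient
cv = alpha.std() / alpha.mean()     # coefficient of variation: lower = better
```

A low `cv` across frames is exactly the criterion the study uses to compare aperture sizes, thresholds, and cell sizes.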
-
Date Issued
-
2011
-
Identifier
-
CFE0004024, ucf:49175
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004024
-
-
Title
-
Mitigation of Motion Sickness Symptoms in 360° Indirect Vision Systems.
-
Creator
-
Quinn, Stephanie, Rinalducci, Edward, Hancock, Peter, Mouloua, Mustapha, French, Jonathan, Chen, Jessie, Kennedy, Robert, University of Central Florida
-
Abstract / Description
-
The present research attempted to use display design as a means to mitigate the occurrence and severity of symptoms of motion sickness and increase performance due to reduced "general effects" in an uncoupled motion environment. Specifically, several visual display manipulations of a 360° indirect vision system were implemented during a target detection task while participants were concurrently immersed in a motion simulator that mimicked off-road terrain which was completely separate from the target detection route. Results of a multiple regression analysis determined that the Dual Banners display incorporating an artificial horizon (i.e., AH Dual Banners) and perceived attentional control significantly contributed to the outcome of total severity of motion sickness, as measured by the Simulator Sickness Questionnaire (SSQ). Altogether, 33.6% (adjusted) of the variability in Total Severity was predicted by the variables used in the model. Objective measures were assessed prior to, during and after uncoupled motion. These tests involved performance while immersed in the environment (i.e., target detection and situation awareness), as well as postural stability and cognitive and visual assessment tests (i.e., Grammatical Reasoning and Manikin) both before and after immersion. Response time to Grammatical Reasoning actually decreased after uncoupled motion. However, this was the only significant difference of all the performance measures. Assessment of subjective workload (as measured by NASA-TLX) determined that participants in Dual Banners display conditions had a significantly lower level of perceived physical demand than those with Completely Separated display designs. Further, perceived temporal demand was lower for participants exposed to conditions incorporating an artificial horizon.

Subjective sickness (SSQ Total Severity, Nausea, Oculomotor and Disorientation) was evaluated using non-parametric tests and confirmed that the AH Dual Banners display had significantly lower Total Severity scores than the Completely Separated display with no artificial horizon (i.e., NoAH Completely Separated). Oculomotor scores were also significantly different for these two conditions, with lower scores associated with AH Dual Banners. The NoAH Completely Separated condition also had marginally higher oculomotor scores when compared to the Completely Separated display incorporating the artificial horizon (AH Completely Separated). There were no significant differences of sickness symptoms or severity (measured by self-assessment, postural stability, and cognitive and visual tests) between display designs 30- and 60-minutes post-exposure. Further, 30- and 60-minute post measures were not significantly different from baseline scores, suggesting that aftereffects were not present up to 60 minutes post-exposure. It was concluded that incorporating an artificial horizon onto the Dual Banners display will be beneficial in mitigating symptoms of motion sickness in manned ground vehicles using 360° indirect vision systems. Screening for perceived attentional control will also be advantageous in situations where selection is possible. However, caution must be made in generalizing these results to missions under terrain or vehicle speed different than what is used for this study, as well as those that include a longer immersion time.
-
Date Issued
-
2013
-
Identifier
-
CFE0005047, ucf:49972
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005047
-
-
Title
-
THE REMOVAL OF MOTION ARTIFACTS FROM NON-INVASIVE BLOOD PRESSURE MEASUREMENTS.
-
Creator
-
Thakkar, Paresh, Weeks, Arthur, University of Central Florida
-
Abstract / Description
-
Modern automatic blood pressure measurement techniques are based on measuring the cuff pressure and sensing the pulsatile amplitude variations. These measurements are very sensitive to motion of the patient or of the patient's surroundings. The slightest unexpected movement can offset the readings of an automatic blood pressure meter by a large amount or render them meaningless. Every effort must be taken to keep the patient's body and surroundings free of motion to obtain a reliable reading. But there are situations in which blood pressure measurements are needed while the patient or the surroundings are in motion, for instance in an ambulance transporting a patient to a hospital. In this thesis, we present a technique to reduce the effect of motion artifact on blood pressure measurements. We digitize the blood pressure waveform and use digital signal processing techniques to process the corrupted waveform. We exploit the differences between the frequency spectra of the blood pressure signal and the motion artifact noise to remove the latter. The motion artifact noise spectrum is not well defined, since it may consist of many different frequency components depending on the kind of motion. The blood pressure signal, by contrast, is more or less periodic, which translates to periodicity in the frequency domain. Hence, we designed a digital filter that takes advantage of the periodic nature of the blood pressure waveform. The filter is shaped like a comb, with periodic peaks around the signal's frequency components. Further processing of the filtered signal (baseline restoration and level shifting) helps to further reduce the noise corruption.
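A comb filter with passbands at the harmonics of the pulse rate can be sketched in the frequency domain: transform, keep only the bins near multiples of the heart-rate fundamental, transform back. The sampling rate, pulse frequency, artifact frequency, and passband width below are all assumptions, not values from the thesis:

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
f0 = 1.2                        # pulse fundamental, ~72 bpm, assumed
# Synthetic pulsatile signal: fundamental + 2nd harmonic + motion artifact.
clean = np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)
artifact = 0.8 * np.sin(2 * np.pi * 3.1 * t)   # off-harmonic disturbance
noisy = clean + artifact

# Frequency-domain comb: keep bins within `width` Hz of each harmonic of f0.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.fft.rfft(noisy)
width = 0.15
mask = np.zeros_like(freqs, dtype=bool)
for k in range(1, 6):
    mask |= np.abs(freqs - k * f0) < width
filtered = np.fft.irfft(spectrum * mask, n=len(t))
```

Because the artifact energy falls between the comb's teeth, it is suppressed while the pulsatile harmonics pass through; a practical system would also track the pulse rate to keep the teeth aligned.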
-
Date Issued
-
2004
-
Identifier
-
CFE0000324, ucf:46289
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000324
-
-
Title
-
THE EFFECTS OF THE 5E LEARNING CYCLE MODEL ON STUDENTS' UNDERSTANDING OF FORCE AND MOTION CONCEPTS.
-
Creator
-
Campbell, Meghann, Sweeney, Aldrin, University of Central Florida
-
Abstract / Description
-
As advocated by the National Research Council [NRC] (1996) and the American Association for the Advancement of Science [AAAS] (1989), a change in the manner in which science is taught must be recognized at a national level and also embraced at a level that is reflected in every science teacher's classroom. With these ideas set forth as a guide for change, this study investigated fifth-grade students' understanding of force and motion concepts as they engaged in inquiry-based science investigations through the use of the 5E Learning Cycle. The researcher's journey through this process was also a focus of the study. Initial data were provided by a pretest indicating students' understanding of force and motion concepts. Four times weekly for a period of 14 weeks, students participated in investigations related to force and motion concepts. Their subsequent understanding of these concepts and their ability to generalize their understandings were evaluated via a posttest. Additionally, a review of lab activity sheets, other classroom-based assessments, and filmed interviews allowed for the triangulation of pertinent data necessary to draw conclusions from the study. Findings showed that student knowledge of force and motion concepts did increase, although understanding as demonstrated on paper was less complete than understanding demonstrated in an interview setting. Survey results also showed that after the study students believed they did not learn science best via textbook-based instruction.
-
Date Issued
-
2006
-
Identifier
-
CFE0001007, ucf:46831
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001007
-
-
Title
-
When the Alligator Called to Elijah: A Handcrafted Exploration of the Digital Moving Image.
-
Creator
-
Shults, Katherine, Harris, Christopher, Stoeckl, Ula, Schlow, Stephen, Grajeda, Anthony, University of Central Florida
-
Abstract / Description
-
When the Alligator Called to Elijah is a feature-length video conceptualized and constructed by Kate Shults in partial fulfillment of the requirements for earning a Master of Fine Arts in Entrepreneurial Digital Cinema from the University of Central Florida. The video is the result of an evolving exploration of the aesthetic capabilities of the digital image using Flip Video cameras, found footage and Final Cut Pro. Though originating as an experiment, When the Alligator Called to Elijah became a creation of motion collage with very specific production parameters. This thesis is a record of this video's progression, from development to picture lock, taking it into preparation for exhibition and distribution.
-
Date Issued
-
2012
-
Identifier
-
CFE0004442, ucf:49332
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004442
-
-
Title
-
FABRIC ARCHITECTURE: BODY IN MOTION.
-
Creator
-
Cosovic, Daniela, Robinson, Elizabeth Brady, University of Central Florida
-
Abstract / Description
-
Making a dress, creating an object for someone else is a simple act of giving to another person. I did not want to decide between an object to wear and one to hang on the wall, so I gave you both, and movement in between. Take a dress off of a wall. Wear it. Put it back on the wall. Repeat it, or not. There is balance in movement of an object between a person and the wall. It is this quietness of balance amongst the sound of movement that I am seeking in my work.
-
Date Issued
-
2009
-
Identifier
-
CFE0002606, ucf:48291
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002606
-
-
Title
-
Recursive Behavior Recording: Complex Motor Stereotypies and Anatomical Behavior Descriptions.
-
Creator
-
Bobbitt, Nathaniel, Vasquez, Eleazar, Lambert, Stephen, Hughes, Charles, University of Central Florida
-
Abstract / Description
-
A novel anatomical behavioral descriptive taxonomy improves motion capture in complex motor stereotypies (CMS) by indexing precise time data without degradation in the complexity of whole body movement in CMS. The absence of etiological explanation of complex motor stereotypies warrants the aggregation of a core CMS dataset to compare regulation of repetitive behaviors in the time domain. A set of visual formalisms trap configurations of behavioral markers (lateralized movements) for behavioral phenotype discovery as paired transitions (from, to) and asymmetries within repetitive restrictive behaviors. This translational project integrates NIH MeSH (medical subject headings) taxonomy with direct biological interface (wearable sensors and nanoscience in vitro assays) to design the architecture for exploratory diagnostic instruments. Motion capture technology when calibrated to multi-resolution indexing system (MeSH based) quantifies potential diagnostic criteria for comparing severity of CMS within behavioral plasticity and switching (sustained repetition or cyclic repetition) time-signatures. Diagnostic instruments sensitive to high behavioral resolution promote measurement to maximize behavioral activity while minimizing biological uncertainty. A novel protocol advances CMS research through instruments with recursive design.
Show less
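The paired-transition idea described in this abstract, recording repetitive behaviors as (from, to) pairs in the time domain, can be sketched as follows. The marker codes and helper name here are illustrative assumptions, not part of the thesis's taxonomy.

```python
from collections import Counter

def transition_pairs(events):
    """Count paired transitions (from, to) in a time-ordered sequence
    of behavioral markers, e.g. lateralized movement codes."""
    return Counter(zip(events, events[1:]))

# Hypothetical marker sequence; codes are illustrative, not MeSH terms.
seq = ["arm_flap_L", "arm_flap_R", "arm_flap_L", "arm_flap_R", "rock"]
pairs = transition_pairs(seq)
# pairs[("arm_flap_L", "arm_flap_R")] counts the L-to-R transitions.
```

Asymmetries between a pair and its reverse (e.g. L-to-R versus R-to-L counts) could then be compared directly from this tally.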
-
Date Issued
-
2015
-
Identifier
-
CFE0005927, ucf:50846
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005927
-
-
Title
-
The Relationship Between DNA's Physical Properties and the DNA Molecule's Harmonic Signature, and Related Motion in Water--A Computational Investigation.
-
Creator
-
Boyer, Victor, Proctor, Michael, Thompson, William, Karwowski, Waldemar, Calloway, Richard, University of Central Florida
-
Abstract / Description
-
This research investigates through computational methods whether the physical properties of DNA contribute to its harmonic signature, the uniqueness of that signature if present, and the motion of the DNA molecule in water. When DNA is solvated in water at normal room temperature, it experiences a natural vibration due to the Brownian motion of the water particles colliding with the DNA. The null hypothesis is that there is no evidence to suggest a relationship between DNA's motion and strand length, while the alternative hypothesis is that there is evidence to suggest a relationship between DNA's vibrational motion and strand length. In a similar vein, a second hypothesis posits that DNA's vibrational motion may depend on strand content. The nature of this relationship, whether linear, exponential, logarithmic, or non-continuous, is not hypothesized by this research but will be discovered by testing whether there is evidence to suggest a relationship between DNA's motion and strand length. The research also aims to discover whether the motion of DNA, when it varies by strand length and/or content, is sufficiently unique to allow that DNA to be identified, in the absence of foreknowledge of the type of DNA present, in a manner similar to a signature. If there is evidence of uniqueness in DNA's vibrational motion under varying strand content or length, then additional experimentation will be needed to determine whether these variances are unique across small changes as well as large changes, or large changes only. Finally, the question of whether it might be possible to identify a strand of unique DNA by base-pair configuration solely from its vibrational signature, or, if not, whether it might be possible to identify changes existing inside a known DNA strand (such as a corruption, transposition, or mutational error), is explored.
Given the computational approach of this research, the NAMD simulation package (released by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign) with the CHARMM force field is the most appropriate set of tools for this investigation (Phillips et al., 2005), and is therefore the toolset used here. For visualization and manipulation of model data, the VMD (Visual Molecular Dynamics) package is employed. Further, these tools are optimized for and aware of nucleic acid structures, and are free. They proved sufficient for this task, offering validated fidelity of the simulation to provide vibrational and pressure-profile data for analysis; sufficient capability to do what was asked of them; speed, so that runs could be done in a reasonable period of time (weeks versus months); and parallelizability, so that the tools could be run over a clustered network of computers dedicated to the task to increase the speed and capacity of the simulations. The computer cluster enabled analysis of 30,000- to 40,000-atom systems, spending more than 410,000 CPU hours on experimental runs of hundreds of nanoseconds' duration, each sampled 500,000 times in two-femtosecond "frames." Using Fourier transforms to convert run pressure readings into frequencies, the simulation investigation could not reject the null hypotheses that the frequencies observed in the system runs are independent of the DNA strand length or content being studied. To be clear, frequency variations were present in the in silico replications of the DNA in ionized solutions, but we were unable to conclude that those variations were not due to other system factors. Several tests were employed to identify alternative factors that might have caused these variations.
Chief among these factors is the possibility that the water box itself is the source of a large amount of vibrational noise, making it difficult or impossible, with the tools at our disposal, to isolate any signals emitted by the DNA strands. Assuming the water box was a source of large amounts of vibrational noise, an emergent hypothesis was generated and additional post-hoc testing was undertaken to isolate and then filter the water-box noise from the rest of the system frequencies. We found conclusively that the water box is responsible for the majority of the signals being recorded, resulting in very low signal amplitudes from the DNA molecules themselves. Given these low signal amplitudes emitted by the DNA, we could not conclusively associate either DNA length or content with the remaining observed frequencies. A brief look at a possible future isolation technique, wavelet analysis, was conducted. Finally, because these results depend on the tools at our disposal and are hence by no means conclusive, suggestions for future research to expand on and further test these hypotheses are made in the final chapter.
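The frequency analysis this abstract describes, a Fourier transform of pressure readings sampled in two-femtosecond frames, can be sketched as below. The synthetic trace stands in for NAMD pressure output; the injected 1 THz tone is a hypothetical signal, not a result from the thesis.

```python
import numpy as np

dt = 2e-15            # one "frame" every 2 fs, as in the runs described
n = 500_000           # samples per run
t = np.arange(n) * dt

# Synthetic stand-in for a pressure trace: Gaussian noise plus a buried tone.
rng = np.random.default_rng(0)
f_tone = 1.0e12       # hypothetical 1 THz vibrational component
signal = rng.normal(0.0, 1.0, n) + 0.5 * np.sin(2 * np.pi * f_tone * t)

# Real FFT of the pressure readings -> amplitude spectrum versus frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=dt)

# Dominant nonzero-frequency bin; with this run length the bin spacing
# is 1/(n*dt) = 1 GHz, so the tone falls exactly on a bin.
peak_freq = freqs[1:][np.argmax(spectrum[1:])]
```

The thesis's difficulty is visible in this framing: if broadband water-box noise dominates the spectrum, the DNA's own low-amplitude peaks become indistinguishable from noise bins.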
-
Date Issued
-
2015
-
Identifier
-
CFE0005930, ucf:50835
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005930
-
-
Title
-
Dialectics of Microbudget Cinema.
-
Creator
-
Ajdinovic, Milos, Stoeckl, Ula, Watson, Keri, Peters, Philip, Danker, Elizabeth, Perez, Jonathan, University of Central Florida
-
Abstract / Description
-
Magic Kingdom is a feature-length, microbudget motion picture produced, "written," directed, and edited by Milos Ajdinovic as part of the University of Central Florida's Master of Fine Arts program in Digital Entrepreneurial Cinema. Its narrative is a product of collective improvisation among a group of collaborators (Chealsea Anagnoson, Henry Gibson, Mikaela Duffy, and Marcus Nieves) moderated by Milos Ajdinovic. This written dissertation is an attempt to document the concepts and processes that surrounded the production of this film.
-
Date Issued
-
2017
-
Identifier
-
CFE0006849, ucf:51787
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006849
-
-
Title
-
Images of Nostalgia: An Exploration of the Creation of Recollection Through Visual Media.
-
Creator
-
Dickerson, Allyson, Harris, Christopher, Danker, Elizabeth, Shults, Katherine, Perez, Jonathan, University of Central Florida
-
Abstract / Description
-
I create innovative artistic works in which the experiential consciousness of the viewer drifts between objects, images, and the auditory narrative. The work approaches the visualization of memory and the catharsis of the loss felt from death. The projection of light onto lifeless entomological specimens mimics the projection of memory as a means of returning to what has been lost. The digital copies of the specimens flicker across their bodies as a tribute to the movement that once possessed them. A List of Things that Quicken the Heart is a body of multimedia installation and single-channel work that has been completed as part of my candidacy for an Emerging Media: Entrepreneurial Digital Cinema M.F.A. at the University of Central Florida.
The single-channel video work is created in the essay-film mode. The visual elements of the piece blend the effects of contextualizing disparate images and subjects; this is the means by which the audience is led to draw connections to the subject of memory without any specific inferences being made. As the assembly of images takes place, so too does the assembly of theoretical and observational threads in the essay narration. As the filmmaker, I am speaking directly to the viewer about the implications of my experiences and observations. The editorial rhythm allows the viewer brief pauses in the flow of information to meditate on the subject of nostalgia, and on how the film incites them to consider the notion. There will also be an ambient audio component designed to create a subtle auditory contrast between familiar and uncanny ambient sounds. The correlating installations will serve as artifacts of memory, the physical objects relevant to my own nostalgia, which will help to serve as a recollection of the narration. In order to integrate them with the tone of the essay film, the narration will be played as a separate component through speakers that surround the space, so that it envelops the viewer.
-
Date Issued
-
2017
-
Identifier
-
CFE0006735, ucf:51858
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006735
-
-
Title
-
Exploring sparsity, self-similarity, and low rank approximation in action recognition, motion retrieval, and action spotting.
-
Creator
-
Sun, Chuan, Foroosh, Hassan, Hughes, Charles, Tappen, Marshall, Sukthankar, Rahul, Moshell, Jack, University of Central Florida
-
Abstract / Description
-
This thesis consists of four major parts. In the first part (Chapters 1-2), we present the overview, motivation, and contributions of our work, and extensively survey the current literature on six related topics. In the second part (Chapters 3-7), we explore the concept of "self-similarity" in two challenging scenarios, namely action recognition and motion retrieval. We build three-dimensional volume representations for both scenarios, and devise effective techniques that can produce compact representations encoding the internal dynamics of data. In the third part (Chapter 8), we explore the challenging action-spotting problem, and propose a feature-independent unsupervised framework that is effective at spotting actions in various real situations, even under heavily perturbed conditions. The final part (Chapter 9) is dedicated to conclusions and future work.
For action recognition, we introduce a generic method that does not depend on one particular type of input feature vector. We make three main contributions: (i) we introduce the concept of the Joint Self-Similarity Volume (Joint SSV) for modeling dynamical systems, and show that by using a new optimized rank-1 tensor approximation of the Joint SSV one can obtain compact low-dimensional descriptors that very accurately preserve the dynamics of the original system, e.g. an action video sequence; (ii) the descriptor vectors derived from the optimized rank-1 approximation make it possible to recognize actions without explicitly aligning action sequences of varying execution speed or different frame rates; (iii) the method is generic and can be applied using different low-level features such as silhouettes, histograms of oriented gradients (HOG), etc., and hence does not necessarily require explicit tracking of features in the space-time volume. Our experimental results on five public datasets demonstrate that our method produces very good results and outperforms many baseline methods.
For action recognition on incomplete videos, we determine whether incomplete videos that are often discarded carry useful information for action recognition, and if so, how one can represent such a mixed collection of video data (complete versus incomplete, and labeled versus unlabeled) in a unified manner. We propose a novel framework to handle incomplete videos in action classification, and make three main contributions: (i) we cast the action classification problem for a mixture of complete and incomplete data as a semi-supervised learning problem over labeled and unlabeled data; (ii) we introduce a two-step approach to convert the input mixed data into a uniform compact representation; (iii) exhaustively scrutinizing 280 configurations, we show experimentally on our two created benchmarks that, even when the videos are extremely sparse and incomplete, it is still possible to recover useful information from them and to classify unknown actions with a graph-based semi-supervised learning framework.
For motion retrieval, we present a framework that allows for flexible and efficient retrieval of motion capture data in huge databases. The method first converts an action sequence into a self-similarity matrix (SSM), based on the notion of self-similarity. This conversion of the motion sequences into compact, low-rank subspace representations greatly reduces the spatiotemporal dimensionality of the sequences. The SSMs are then used to construct order-3 tensors, and we propose a low-rank decomposition scheme that converts the motion sequence volumes into compact lower-dimensional representations without losing the nonlinear dynamics of the motion manifold. Thus, unlike existing linear dimensionality-reduction methods that distort the motion manifold and lose very critical and discriminative components, the proposed method performs well even when inter-class differences are small or intra-class differences are large. In addition, the method allows for efficient retrieval and does not require time-alignment of the motion sequences. We evaluate the performance of our retrieval framework on the CMU mocap dataset under two experimental settings, both demonstrating very good retrieval rates.
For action spotting, our framework does not depend on any specific feature (e.g. HOG/HOF, STIP, silhouettes, bag-of-words, etc.) and requires no human localization, segmentation, or framewise tracking. This is achieved by treating the problem holistically as one of extracting the internal dynamics of video cuboids, modeling them in their natural form as multilinear tensors. To extract their internal dynamics, we devised a novel Two-Phase Decomposition (TP-Decomp) of a tensor that generates very compact and discriminative representations robust to even heavily perturbed data. Technically, a Rank-based Tensor Core Pyramid (Rank-TCP) descriptor is generated by combining multiple tensor cores under multiple ranks, allowing us to represent video cuboids in a hierarchical tensor pyramid. The problem then reduces to a template-matching problem, which is solved efficiently using two boosting strategies: (i) to reduce the search space, we filter the dense trajectory cloud extracted from the target video; (ii) to boost the matching speed, we perform matching in an iterative coarse-to-fine manner. Experiments on five benchmarks show that our method outperforms the current state-of-the-art under various challenging conditions. We also created a challenging dataset, called Heavily Perturbed Video Arrays (HPVA), to validate the robustness of our framework under heavily perturbed situations.
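The self-similarity matrix construction that underlies several of the chapters described above can be sketched as follows. The choice of raw 2D coordinates as the per-frame feature is an illustrative assumption; the thesis works with richer features and volumes.

```python
import numpy as np

def self_similarity_matrix(features):
    """Build an SSM for a sequence: entry (i, j) is the Euclidean
    distance between the feature vectors at frames i and j.
    `features` has shape (n_frames, n_dims)."""
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Toy motion sequence: a point moving around a circle, so frames half a
# period apart are maximally dissimilar and the SSM is symmetric.
n = 8
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
traj = np.stack([np.cos(theta), np.sin(theta)], axis=1)
ssm = self_similarity_matrix(traj)
```

Because the SSM depends only on pairwise distances within the sequence, it is invariant to any distance-preserving transform of the feature space, which is part of what makes it useful as a compact representation.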
-
Date Issued
-
2014
-
Identifier
-
CFE0005554, ucf:50290
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005554
-
-
Title
-
The Happiest Place on Earth - The Microbudget Model as a Means to an American National Cinema.
-
Creator
-
Goshorn, John, Stoeckl, Ula, Gay, Andrew, Harris, Christopher, Sandler, Barry, University of Central Florida
-
Abstract / Description
-
The Happiest Place on Earth is a feature-length film written, directed, and produced by John Goshorn as part of the requirements for earning a Master of Fine Arts in Film & Digital Media from the University of Central Florida. The project aims to challenge existing conventions of the American fiction film on multiple levels (aesthetic, narrative, technical, and industrial) while dealing with a distinctly American subject and target audience. These challenges were both facilitated and necessitated by the limited resources available to the production team and the academic context of the production. This thesis is a record of the film, from concept to completion and preparation for delivery to an audience.
-
Date Issued
-
2012
-
Identifier
-
CFE0004325, ucf:49451
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004325
-
-
Title
-
GEOMETRIC INVARIANCE IN THE ANALYSIS OF HUMAN MOTION IN VIDEO DATA.
-
Creator
-
Shen, Yuping, Foroosh, Hassan, University of Central Florida
-
Abstract / Description
-
Human motion analysis is one of the major problems in computer vision research. It deals with the study of the motion of the human body in video data from different aspects, ranging from the tracking of body parts and the reconstruction of 3D human body configuration, to higher-level interpretation of human actions and activities in image sequences. When human motion is observed through a video camera, it is perspectively distorted and may appear totally different from different viewpoints. It is therefore highly challenging to establish correct relationships between human motions across video sequences with different camera settings. In this work, we investigate geometric invariance in the motion of the human body, which is critical to accurately understanding human motion in video data regardless of variations in camera parameters and viewpoints. In human action analysis, the representation of human action is a very important issue, and it usually determines the nature of the solutions, including their limits in resolving the problem. Unlike existing research that studies human motion as a whole 2D/3D object or a sequence of postures, we study human motion as a sequence of body pose transitions. We further decompose a human body pose into a number of body point triplets, and break down a pose transition into the transitions of a set of body point triplets. In this way the study of the complex non-rigid motion of the human body is reduced to that of the motion of rigid body point triplets, i.e. a collection of planes in motion. As a result, projective geometry and linear algebra can be applied to explore the geometric invariance in human motion. Based on this formulation, we have discovered the fundamental ratio invariant and the eigenvalue equality invariant in human motion. We also propose solutions based on these geometric invariants to the problems of view-invariant recognition of human postures and actions, as well as the analysis of human motion styles. These invariants and their applicability have been validated by experimental results that support their effectiveness in understanding human motion under various camera parameters and viewpoints.
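As an illustrative sketch of working with body point triplets (not the thesis's exact invariant formulation), one can solve for the affine map carrying one 2D triplet onto a corresponding triplet in another frame; the linear part of that map is what geometric analyses of triplet transitions operate on. All names here are hypothetical.

```python
import numpy as np

def triplet_affine(src, dst):
    """Solve for the 2D affine map (A, t) with dst_i = A @ src_i + t,
    given three non-collinear corresponding points, each shape (3, 2)."""
    # Six unknowns (a11, a12, a21, a22, tx, ty); two equations per point.
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (p, q) in enumerate(zip(src, dst)):
        M[2 * i, 0:2] = p;     M[2 * i, 4] = 1.0
        M[2 * i + 1, 2:4] = p; M[2 * i + 1, 5] = 1.0
        b[2 * i], b[2 * i + 1] = q
    a11, a12, a21, a22, tx, ty = np.linalg.solve(M, b)
    return np.array([[a11, a12], [a21, a22]]), np.array([tx, ty])

# A triplet rotated by 90 degrees and translated: the recovered linear
# part should be exactly that rotation.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R = np.array([[0.0, -1.0], [1.0, 0.0]])
dst = src @ R.T + np.array([2.0, 3.0])
A, t = triplet_affine(src, dst)
```

Quantities such as the eigenvalues of A are then candidates for comparison across views, in the spirit of the eigenvalue-based invariants the abstract mentions.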
-
Date Issued
-
2009
-
Identifier
-
CFE0002945, ucf:47970
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002945
-
-
Title
-
IMAGE BASED VIEW SYNTHESIS.
-
Creator
-
Xiao, Jiangjian, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
This dissertation deals with the image-based approach to synthesizing a virtual scene from sparse images or a video sequence without the use of 3D models. In our scenario, a real dynamic or static scene is captured by a set of un-calibrated images from different viewpoints. After automatically recovering the geometric transformations between these images, a series of photo-realistic virtual views can be rendered, and a virtual environment covered by these several static cameras can be synthesized. This image-based approach has applications in object recognition, object transfer, video synthesis, and video compression. In this dissertation, I have contributed to several sub-problems related to image-based view synthesis.
Before image-based view synthesis can be performed, images need to be segmented into individual objects. Assuming that a scene can be approximately described by multiple planar regions, I have developed a robust and novel approach to automatically extract a set of affine or projective transformations induced by these regions, correctly detect occlusion pixels over multiple consecutive frames, and accurately segment the scene into several motion layers. First, a number of seed regions are determined using correspondences in two frames, and the seed regions are expanded and outliers rejected by employing the graph-cuts method integrated with a level-set representation. Next, these initial regions are merged into several initial layers according to motion similarity. Third, occlusion-order constraints on multiple frames are explored, which guarantee that the occlusion area increases with the temporal order over a short period and effectively maintain segmentation consistency over multiple consecutive frames. The correct layer segmentation is then obtained by using a graph-cuts algorithm, and the occlusions between the overlapping layers are explicitly determined. Several experimental results demonstrate that our approach is effective and robust.
Recovering the geometric transformations among images of a scene is a prerequisite step for image-based view synthesis. I have developed a wide-baseline matching algorithm to identify the correspondences between two un-calibrated images, and to further determine the geometric relationship between them, such as epipolar geometry or a projective transformation. In our approach, a set of salient features, edge-corners, is detected to provide robust and consistent matching primitives. Then, based on the Singular Value Decomposition (SVD) of an affine matrix, we effectively quantize the search space into two independent subspaces for the rotation angle and the scaling factor, and we use a two-stage affine matching algorithm to obtain robust matches between the two frames. Experimental results on a number of wide-baseline images strongly demonstrate that our matching method outperforms state-of-the-art algorithms even under significant camera motion, illumination variation, occlusion, and self-similarity.
Given the wide-baseline matches among images, I have developed a novel method for dynamic view morphing. Dynamic view morphing deals with scenes containing moving objects in the presence of camera motion. The objects can be rigid or non-rigid, and each can move in any orientation or direction. The proposed method can generate a series of continuous and physically accurate intermediate views from only two reference images without any knowledge of 3D. The procedure consists of three steps: segmentation, morphing, and post-warping. Given a boundary connection constraint, the source and target scenes are segmented into several layers for morphing. Based on the decomposition of the affine transformation between corresponding points, we uniquely determine a physically correct path for post-warping by the least-distortion method. I have successfully generalized the dynamic scene synthesis problem from a simple scene with only rotation to a dynamic scene containing non-rigid objects. My method can handle dynamic rigid or non-rigid objects, including complicated objects such as humans.
Finally, I have developed a novel algorithm for tri-view morphing. This is an efficient image-based method to navigate a scene based on only three wide-baseline un-calibrated images, without the explicit use of a 3D model. After automatically recovering corresponding points between each pair of images using our wide-baseline matching method, an accurate trifocal plane is extracted from the trifocal tensor implied by the three images. Next, employing a trinocular-stereo algorithm and a barycentric blending technique, we generate an arbitrary novel view to navigate the scene in a 2D space. Furthermore, after self-calibration of the cameras, a 3D model can also be correctly augmented into the virtual environment synthesized by the tri-view morphing algorithm. We have applied our view morphing framework to several interesting applications: 4D video synthesis, automatic target recognition, and multi-view morphing.
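The SVD-based split of an affine matrix into rotation and scaling components mentioned in this abstract can be sketched as below. This is the generic decomposition only; the dissertation's quantization of the resulting search subspaces is omitted, and the example matrix is hypothetical.

```python
import numpy as np

def rotation_and_scales(A):
    """Split a 2x2 linear (affine) matrix into a rotation angle and two
    scaling factors via SVD: A = U @ diag(s) @ Vt, with the net rotation
    taken from U @ Vt (valid for det(A) > 0)."""
    U, s, Vt = np.linalg.svd(A)
    R = U @ Vt                      # closest pure rotation
    angle = np.arctan2(R[1, 0], R[0, 0])
    return angle, s

# A known rotation-plus-uniform-scale matrix to exercise the split.
theta = np.pi / 6                   # 30 degrees
scale = 2.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = scale * R
angle, scales = rotation_and_scales(A)
```

Separating the two factors this way is what lets the rotation angle and scaling factor be searched independently during matching.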
-
Date Issued
-
2004
-
Identifier
-
CFE0000218, ucf:46276
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000218