Current Search: Video
-
-
Title
-
When the Alligator Called to Elijah: A Handcrafted Exploration of the Digital Moving Image.
-
Creator
-
Shults, Katherine, Harris, Christopher, Stoeckl, Ula, Schlow, Stephen, Grajeda, Anthony, University of Central Florida
-
Abstract / Description
-
When the Alligator Called to Elijah is a feature-length video conceptualized and constructed by Kate Shults in partial fulfillment of the requirements for earning a Master of Fine Arts in Entrepreneurial Digital Cinema from the University of Central Florida. The video is the result of an evolving exploration of the aesthetic capabilities of the digital image using Flip Video cameras, found footage and Final Cut Pro. Though originating as an experiment, When the Alligator Called to Elijah became a creation of motion collage with very specific production parameters. This thesis is a record of this video's progression, from development to picture lock, taking it into preparation for exhibition and distribution.
-
Date Issued
-
2012
-
Identifier
-
CFE0004442, ucf:49332
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004442
-
-
Title
-
STORMS NAMED AFTER PEOPLE.
-
Creator
-
Ballard, Sarah E, Danker, Elizabeth, University of Central Florida
-
Abstract / Description
-
Storms Named After People is a coming-of-age film about loneliness, Florida's disposition during holidays, freedom within abandonment, and how one translates time and space when alone. I intend for this film to capture a unique and authentic representation of young women that I find difficult to come by in mainstream cinema. Some other things I plan to accomplish with Storms Named After People include subverting the audience's expectations, challenging tired stereotypes of women and various relationships among them, capturing loneliness from an optimistic point of view and embracing availability within a micro-budget filmmaking process. A final product that accomplishes all the above will be considered successful.
-
Date Issued
-
2018
-
Identifier
-
CFH2000338, ucf:45875
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH2000338
-
-
Title
-
STARS, STRIPES, CAMERAS AND DECADENCE: MUSIC VIDEOS OF THE IRAQ WAR ERA.
-
Creator
-
Miller, Henry, Mauer, Barry, University of Central Florida
-
Abstract / Description
-
Recently, academic researchers have brought critical attention to representations of the Iraq War in popular culture. Most of this work, however, focuses on film and music, leaving the influential medium of music video largely unexplored. A number of artists produced music videos that capture the zeitgeists of competing movements leading up to and following the United States' involvement in the Iraq invasion. This project, "Stars, Stripes, Cameras and Decadence: Music Videos of the Iraq War," seeks to survey music videos in order to understand how music video helps shape Americans' relationship to heavily polarized public discourses in the United States regarding this controversial military act. The thesis will take a multi-dimensional approach to analyzing each music video. The study will incorporate data on public opinion, audience reaction and political shifts in relationship to each video. On the most elementary level, the thesis will address the "anti" and "pro" war stances portrayed by music videos to understand both how they were shaped by their relationship to power and how they consequently shape their audience's relationship to power. The study will also undertake to understand these music videos aesthetically. Both "anti" and "pro" music videos draw upon schools of political messaging that largely dictate the art of the music video. Each school portrays soldiers, violence, war, enemies, families and loved ones in different ways. The thesis will delve into the histories of how various political traditions use images of war to shape their messages and how music videos continue (or break from) these traditions.
-
Date Issued
-
2011
-
Identifier
-
CFH0003796, ucf:44755
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0003796
-
-
Title
-
LIVE VIDEO DATABASE MANAGEMENT SYSTEMS.
-
Creator
-
Peng, Rui, Hua, Kien, University of Central Florida
-
Abstract / Description
-
With the proliferation of inexpensive cameras and the availability of high-speed wired and wireless networks, networks of distributed cameras are becoming an enabling technology for a broad range of interdisciplinary applications in domains such as public safety and security, manufacturing, transportation, and healthcare. Today's live video processing systems on networks of distributed cameras, however, are designed for specific classes of applications. To provide a generic query processing platform for applications of distributed camera networks, we designed and implemented a new class of general purpose database management systems, the live video database management system (LVDBMS). We view networked video cameras as a special class of interconnected storage devices, and allow the user to formulate ad hoc queries over real-time live video feeds. In the first part of this dissertation, an Internet scale framework for sharing and dissemination of general sensor data is presented. This framework provides a platform for general sensor data to be published, searched, shared, and delivered across the Internet. The second part is the design and development of a Live Video Database Management System. LVDBMS allows users to easily focus on events of interest from a multitude of distributed video cameras by posing continuous queries on the live video streams. In the third part, a distributed in-memory database approach is proposed to enhance the LVDBMS with an important capability of tracking objects across cameras.
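The core idea, treating networked cameras as queryable devices over which users pose continuous queries, can be illustrated with a minimal sketch. The camera interface, the toy motion predicate, and the polling loop below are hypothetical placeholders for illustration, not the actual LVDBMS query language or architecture.

```python
import time
import numpy as np

def motion_detected(frame, prev_frame, threshold=25.0):
    """Toy event predicate: mean absolute pixel difference between frames."""
    if prev_frame is None:
        return False
    return np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))) > threshold

def continuous_query(cameras, predicate, on_event, poll_interval=0.1):
    """Evaluate a predicate over every live feed and report matching cameras.

    `cameras` maps camera ids to callables returning the latest frame; this
    stands in for the real feed API, which the abstract does not specify.
    """
    last = {cam_id: None for cam_id in cameras}
    while True:
        for cam_id, grab in cameras.items():
            frame = grab()
            if predicate(frame, last[cam_id]):
                on_event(cam_id, frame)   # e.g. log, alert, or hand off to a tracker
            last[cam_id] = frame
        time.sleep(poll_interval)
```

A real system would push frames through registered query operators rather than poll, but the loop conveys how an ad hoc query stays active over live streams.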
-
Date Issued
-
2010
-
Identifier
-
CFE0003453, ucf:48419
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003453
-
-
Title
-
Spatiotemporal Graphs for Object Segmentation and Human Pose Estimation in Videos.
-
Creator
-
Zhang, Dong, Shah, Mubarak, Qi, GuoJun, Bagci, Ulas, Yun, Hae-Bum, University of Central Florida
-
Abstract / Description
-
Images and videos can be naturally represented by graphs, with spatial graphs for images and spatiotemporal graphs for videos. However, for different applications, there are usually different formulations of the graphs, and algorithms for each formulation have different complexities. Therefore, wisely formulating the problem to ensure an accurate and efficient solution is one of the core issues in Computer Vision research. We explore three problems in this domain to demonstrate how to formulate all of these problems in terms of spatiotemporal graphs and obtain good and efficient solutions.

The first problem we explore is video object segmentation. The goal is to segment the primary moving objects in the videos. This problem is important for many applications, such as content based video retrieval, video summarization, activity understanding and targeted content replacement. In our framework, we use object proposals, which are object-like regions obtained by low-level visual cues. Each object proposal has an object-ness score associated with it, which indicates how likely this object proposal corresponds to an object. The problem is formulated as a directed acyclic graph, for which nodes represent the object proposals and edges represent the spatiotemporal relationship between nodes. A dynamic programming solution is employed to select one object proposal from each video frame, while ensuring their consistency throughout the video frames. Gaussian mixture models (GMMs) are used for modeling the background and foreground, and Markov Random Fields (MRFs) are employed to smooth the pixel-level segmentation.

In the above spatiotemporal graph formulation, we consider the object segmentation in only a single video. Next, we consider multiple videos and model the video co-segmentation problem as a spatiotemporal graph. The goal here is to simultaneously segment the moving objects from multiple videos and assign common objects the same labels. The problem is formulated as a regulated maximum clique problem using object proposals. The object proposals are tracked in adjacent frames to generate a pool of candidate tracklets. Then an undirected graph is built with the nodes corresponding to the tracklets from all the videos and edges representing the similarities between the tracklets. A modified Bron-Kerbosch Algorithm is applied to the graph in order to select the prominent objects contained in these videos, hence relating the segmentation of each object in different videos.

In online and surveillance videos, the most important object class is the human. In contrast to generic video object segmentation and co-segmentation, specific knowledge about humans, which is defined by a pose (i.e. human skeleton), can be employed to help the segmentation and tracking of people in the videos. We formulate the problem of human pose estimation in videos using the spatiotemporal graph. In this formulation, the nodes represent different body parts in the video frames and edges represent the spatiotemporal relationship between body parts in adjacent frames. The graph is carefully designed to ensure an exact and efficient solution. The overall objective for the new formulation is to remove the simple cycles from the traditional graph-based formulations. Dynamic programming is employed in different stages in the method to select the best tracklets and human pose configurations.
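The per-frame proposal selection described above lends itself to a longest-path style dynamic program over a DAG. The sketch below is a simplified reading of that formulation, assuming each proposal carries an object-ness score and that a pairwise spatiotemporal consistency score between proposals in adjacent frames is available; these scoring inputs are placeholders, not the dissertation's exact energies.

```python
import numpy as np

def select_proposals(objectness, consistency):
    """Pick one object proposal per frame via dynamic programming on a DAG.

    objectness[t][i]     : object-ness score of proposal i in frame t
    consistency[t][i][j] : assumed spatiotemporal consistency between proposal i
                           in frame t and proposal j in frame t+1
    Returns the index of the selected proposal for each frame.
    """
    T = len(objectness)
    best = [np.asarray(objectness[0], dtype=float)]
    back = []
    for t in range(1, T):
        scores = best[-1][:, None] + np.asarray(consistency[t - 1]) \
                 + np.asarray(objectness[t], dtype=float)[None, :]
        back.append(scores.argmax(axis=0))   # best predecessor for each proposal
        best.append(scores.max(axis=0))
    # Backtrack from the best-scoring proposal in the last frame.
    path = [int(best[-1].argmax())]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

The GMM appearance models and MRF smoothing mentioned in the abstract would operate downstream of this selection and are omitted here.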
-
Date Issued
-
2016
-
Identifier
-
CFE0006429, ucf:51488
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006429
-
-
Title
-
VIDEO CONTENT EXTRACTION: SCENE SEGMENTATION, LINKING AND ATTENTION DETECTION.
-
Creator
-
Zhai, Yun, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
In this fast paced digital age, a vast amount of videos are produced every day, such as movies, TV programs, personal home videos, surveillance video, etc. This places a high demand for effective video data analysis and management techniques. In this dissertation, we have developed new techniques for segmentation, linking and understanding of video scenes. Firstly, we have developed a video scene segmentation framework that segments the video content into story units. Then, a linking method is designed to find the semantic correlation between video scenes/stories. Finally, to better understand the video content, we have developed a spatiotemporal attention detection model for videos.

Our general framework for temporal scene segmentation, which is applicable to several video domains, is formulated in a statistical fashion and uses the Markov chain Monte Carlo (MCMC) technique to determine the boundaries between video scenes. In this approach, a set of arbitrary scene boundaries are initialized at random locations and are further automatically updated using two types of updates: diffusion and jumps. The posterior probability of the target distribution of the number of scenes and their corresponding boundary locations are computed based on the model priors and the data likelihood. Model parameter updates are controlled by the MCMC hypothesis ratio test, and samples are collected to generate the final scene boundaries. The major contribution of the proposed framework is two-fold: (1) it is able to find weak boundaries as well as strong boundaries, i.e., it does not rely on the fixed threshold; (2) it can be applied to different video domains. We have tested the proposed method on two video domains: home videos and feature films. On both of these domains we have obtained very accurate results, achieving on the average of 86% precision and 92% recall for home video segmentation, and 83% precision and 83% recall for feature films.

The video scene segmentation process divides videos into meaningful units. These segments (or stories) can be further organized into clusters based on their content similarities. In the second part of this dissertation, we have developed a novel concept tracking method, which links news stories that focus on the same topic across multiple sources. The semantic linkage between the news stories is reflected in the combination of both their visual content and speech content. Visually, each news story is represented by a set of key frames, which may or may not contain human faces. The facial key frames are linked based on the analysis of the extended facial regions, and the non-facial key frames are correlated using the global matching. The textual similarity of the stories is expressed in terms of the normalized textual similarity between the keywords in the speech content of the stories. The developed framework has also been applied to the task of story ranking, which computes the interestingness of the stories. The proposed semantic linking framework and the story ranking method have both been tested on a set of 60 hours of open-benchmark video data (CNN and ABC news) from the TRECVID 2003 evaluation forum organized by NIST. Above 90% system precision has been achieved for the story linking task. The combination of both visual and speech cues has boosted the un-normalized recall by 15%. We have developed PEGASUS, a content based video retrieval system with fast speech and visual feature indexing and search. The system is available on the web: http://pegasus.cs.ucf.edu:8080/index.jsp.

Given a video sequence, one important task is to understand what is present or what is happening in its content. To achieve this goal, target objects or activities need to be detected, localized and recognized in either the spatial and/or temporal domain. In the last portion of this dissertation, we present a visual attention detection method, which automatically generates the spatiotemporal saliency maps of input video sequences. The saliency map is later used in the detections of interesting objects and activities in videos by significantly narrowing the search range. Our spatiotemporal visual attention model generates the saliency maps based on both the spatial and temporal signals in the video sequences. In the temporal attention model, motion contrast is computed based on the planar motions (homography) between images, which are estimated by applying RANSAC on point correspondences in the scene. To compensate for the non-uniformity of the spatial distribution of interest-points, spanning areas of motion segments are incorporated in the motion contrast computation. In the spatial attention model, we have developed a fast method for computing pixel-level saliency maps using color histograms of images. Finally, a dynamic fusion technique is applied to combine both the temporal and spatial saliency maps, where temporal attention is dominant over the spatial model when large motion contrast exists, and vice versa. The proposed spatiotemporal attention framework has been extensively applied on multiple video sequences to highlight interesting objects and motions present in the sequences. We have achieved 82% user satisfactory rate on the point-level attention detection and over 92% user satisfactory rate on the object-level attention detection.
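The dynamic fusion step at the end can be illustrated with a small sketch: the weight given to the temporal map grows with the motion contrast it carries, so temporal attention dominates when large motion contrast exists and the spatial map takes over otherwise. The specific weighting function below is an assumption for illustration; the abstract does not spell out the actual rule.

```python
import numpy as np

def fuse_saliency(spatial_map, temporal_map, k=1.0):
    """Dynamically fuse spatial and temporal saliency maps.

    Both maps are 2-D arrays normalized to [0, 1]. The temporal weight is a
    simple increasing function of overall motion contrast (here, the mean of
    the temporal map) -- a placeholder for the dissertation's fusion rule.
    """
    motion_contrast = float(temporal_map.mean())
    w_temporal = motion_contrast / (motion_contrast + k)    # in [0, 1)
    fused = w_temporal * temporal_map + (1.0 - w_temporal) * spatial_map
    return fused / (fused.max() + 1e-8)                     # renormalize
```

With `k` small relative to typical motion contrast, scenes with strong motion lean almost entirely on the temporal map, matching the behavior described above.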
-
Date Issued
-
2006
-
Identifier
-
CFE0001216, ucf:46944
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001216
-
-
Title
-
SEMANTIC VIDEO RETRIEVAL USING HIGH LEVEL CONTEXT.
-
Creator
-
Aytar, Yusuf, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Video retrieval, searching and retrieving videos relevant to a user-defined query, is one of the most popular topics in both real life applications and multimedia research. This thesis employs concepts from Natural Language Understanding in solving the video retrieval problem. Our main contribution is the utilization of the semantic word similarity measures for video retrieval through the trained concept detectors, and the visual co-occurrence relations between such concepts. We propose two methods for content-based retrieval of videos: (1) A method for retrieving a new concept (a concept which is not known to the system and no annotation is available) using semantic word similarity and visual co-occurrence, which is an unsupervised method. (2) A method for retrieval of videos based on their relevance to a user-defined text query using the semantic word similarity and visual content of videos. For evaluation purposes, we mainly used the automatic search and the high level feature extraction test set of TRECVID'06 and TRECVID'07 benchmarks. These two data sets consist of 250 hours of multilingual news video captured from American, Arabic, German and Chinese TV channels. Although our method for retrieving a new concept is an unsupervised method, it outperforms the trained concept detectors (which are supervised) on 7 out of 20 test concepts, and overall it performs very close to the trained detectors. On the other hand, our visual content based semantic retrieval method performs more than 100% better than the text-based retrieval method. This shows that using visual content alone we can have significantly good retrieval results.
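A minimal sketch of how an unseen concept might be scored from already-trained detectors, combining semantic word similarity with visual co-occurrence. The combination rule, the `alpha` trade-off, and the similarity callables are placeholders for illustration, not the thesis's actual formulation.

```python
def score_new_concept(new_concept, shot_scores, word_similarity, cooccurrence,
                      alpha=0.5):
    """Score a video shot for a concept that has no trained detector.

    shot_scores     : dict mapping each known concept to its detector score
                      for this shot
    word_similarity : callable (concept_a, concept_b) -> semantic similarity
    cooccurrence    : callable (concept_a, concept_b) -> visual co-occurrence
    alpha           : assumed trade-off between the two cues
    """
    score, weight_sum = 0.0, 0.0
    for known, detector_score in shot_scores.items():
        w = alpha * word_similarity(new_concept, known) \
            + (1 - alpha) * cooccurrence(new_concept, known)
        score += w * detector_score
        weight_sum += w
    return score / weight_sum if weight_sum > 0 else 0.0
```

Shots are then ranked by this score, which is how an unsupervised method can retrieve a concept the system was never trained on.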
-
Date Issued
-
2008
-
Identifier
-
CFE0002158, ucf:47521
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002158
-
-
Title
-
INVESTIGATING FLOW, PRESENCE, AND ENGAGEMENT IN INDEPENDENT VIDEO GAME MECHANICS.
-
Creator
-
Dunaj, Jon, McDaniel, Rudy, University of Central Florida
-
Abstract / Description
-
Video games are being studied today more than ever before. The engagement that they generate with the user, if harnessed, is thought to have applications across numerous other fields. Educators especially wish to implement elements of gaming into supplemental activities to help further interest students in the learning process. Many claim that this is because classrooms today are in direct contradiction with the real home life of students. Students today were born into the fast paced world of the digital realm, frequently multi-tasking between watching television, playing games, doing homework, and socializing. As educators begin to create game-like experiences to drive student engagement they will seek to create interactions that foster the psychological phenomena of flow, presence, and engagement. Each of these three processes helps play a key role in what makes video games the attention-grabbing medium that they are. When creating games it would be beneficial to know which type of game mechanics reinforce these phenomena the most. The goal of this study is to investigate Super Meat Boy and Limbo, two very similar games with very different mechanical representations, and see which game is more engaging in these three areas. Twenty-nine participants played one of the two games for forty-five minutes, completed three separate measurements, and were observed throughout the process. The results were analyzed and found one game to indeed be more engaging than the other.
-
Date Issued
-
2014
-
Identifier
-
CFH0004625, ucf:45268
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004625
-
-
Title
-
FROM SHADOWMOURNE TO FOLK ART: ARTICULATING A VISION OF ELEARNING FOR THE 21ST CENTURY.
-
Creator
-
Kapp, Christina, Campbell, James, University of Central Florida
-
Abstract / Description
-
This study examines mass-market applications for some of the many theories of eLearning and blended learning, focusing most closely on a period from 2000-2010. It establishes a state of the union for K-12 immersive eLearning environments by using in-depth case studies of five major mass-market, educational, and community-education based products: Gaia Online, Poptropica, Quest Atlantis, Dimenxian/Dimension U, and Folkvine. Investigating these models calls into play not only the voices of traditional academic and usability research, but also the ad hoc voices of the players, commentators, developers, and bloggers. These are the people who speak to the community of these sites, and their lived experiences fall somewhere in the interstices between in-site play, beta development, and external commentary (both academic and informal). The works of experimental academic theorists play an acknowledged and fundamental role in this study, including those of Ulmer, Barab, Gee, and McLuhan. These visionary voices of academia are balanced with a consideration of both the political and financial constraints surrounding immersive educational game development. This secondary level of analysis focuses on how issues around equity of access, delivery platforms, and target disciplines can and should inform strategic goals. While this dissertation alone is unlikely to solve issues of access, emergent groups including the OLPC hold exciting promises for worldwide connectivity. My conclusion forms a synthesis of all these competing forces and proposes a pragmatic and conceptual rule-set for the development of a forward-looking and immersive educational MMORPG.
-
Date Issued
-
2010
-
Identifier
-
CFE0003549, ucf:48906
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003549
-
-
Title
-
A ROBUST WIRELESS MESH ACCESS ENVIRONMENT FOR MOBILE VIDEO USERS.
-
Creator
-
Xie, Fei, Hua, Kien, University of Central Florida
-
Abstract / Description
-
The rapid advances in networking technology have enabled large-scale deployments of online video streaming services in today's Internet. In particular, wireless Internet access technology has been one of the most transforming and empowering technologies in recent years. We have witnessed a dramatic increase in the number of mobile users who access online video services through wireless access networks, such as wireless mesh networks and 3G cellular networks. Unlike in a wired environment, using a dedicated stream for each video service request is very expensive for wireless networks. This simple strategy also has limited scalability when popular content is demanded by a large number of users. It is desirable to have a robust wireless access environment that can sustain a sudden spurt of interest for certain videos due to, say, a current event. Moreover, due to the mobility of the video users, smooth streaming performance during the handoff is a key requirement to the robustness of the wireless access networks for mobile video users. In this dissertation, the author focuses on the robustness of the wireless mesh access (WMA) environment for mobile video users. Novel video sharing techniques are proposed to reduce the burden of video streaming in different WMA environments. The author proposes a cross-layer framework for scalable Video-on-Demand (VOD) service in multi-hop WiMax mesh networks. The author also studies the optimization problems for video multicast in general wireless mesh networks. The WMA environment is modeled as a connected graph with a video source in one of the nodes and the video requests randomly generated from other nodes in the graph. The optimal video multicast problem in such an environment is formulated as two sub-problems. The proposed solutions of the sub-problems are justified using simulation and numerical study. In the case of online video streaming, the online video server does not cooperate with the access networks. In this case, the centralized data sharing techniques fail since they assume the cooperation between the video server and the network. To tackle this problem, a novel distributed video sharing technique called Dynamic Stream Merging (DSM) is proposed. DSM improves the robustness of the WMA environment without the cooperation from the online video server. It optimizes the per link sharing performance with small time complexity and message complexity. The performance of DSM has been studied using simulations in Network Simulator 2 (NS2) as well as real experiments in a wireless mesh testbed. The Mobile YouTube website (http://m.youtube.com) is used as the online video website in the experiment. Last but not least, a cross-layer scheme is proposed to avoid the degradation on the video quality during the handoff in the WMA environment. Novel video quality related triggers and the routing metrics at the mesh routers are utilized in the handoff decision making process. A redirection scheme is also proposed to eliminate packet loss caused by the handoff.
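The sharing idea behind stream merging can be illustrated at a very high level: if a new request arrives for a video that is already being streamed nearby and recently enough, the request can join the existing stream instead of opening a dedicated one. The sketch below is a generic merge decision under an assumed merge window; it is not the actual DSM protocol, which operates per link and without server cooperation.

```python
def handle_request(video_id, request_time, active_streams, merge_window=30.0):
    """Decide whether a new request can merge onto an ongoing stream.

    active_streams : list of dicts {"video_id": ..., "start_time": ...}
    merge_window   : assumed maximum playback offset (seconds) a client is
                     willing to patch separately while sharing the rest
    Returns the stream to join, or None if a new stream must be started.
    """
    candidates = [s for s in active_streams
                  if s["video_id"] == video_id
                  and request_time - s["start_time"] <= merge_window]
    if not candidates:
        return None                      # start a dedicated stream
    # Join the most recent compatible stream; fetching the missed prefix
    # (patching) is omitted from this sketch.
    return max(candidates, key=lambda s: s["start_time"])
```

The benefit is that popular content demanded by many users consumes far fewer wireless transmissions than one dedicated stream per request.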
-
Date Issued
-
2010
-
Identifier
-
CFE0003241, ucf:48541
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003241
-
-
Title
-
Video game self-efficacy and its effect on training performance.
-
Creator
-
Ortiz, Skilan, Bowers, Clint, Fritzsche, Barbara, Joseph, Dana, Cannon-Bowers, Janis, University of Central Florida
-
Abstract / Description
-
This study examined the effects of using serious games for training on task performance and declarative knowledge outcomes. The purpose was to determine if serious games are more effective training tools than traditional methods. Self-efficacy, expectations for training, and engagement were considered as moderators of the relationship between type of training and task performance as well as type of training and declarative knowledge. Results of the study offered support for the potential of serious games to be more effective than traditional methods of training when it comes to task performance.
-
Date Issued
-
2014
-
Identifier
-
CFE0005224, ucf:50639
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005224
-
-
Title
-
AUTOMATED VISUAL DATABASE CREATION FOR A GROUND VEHICLE SIMULATOR.
-
Creator
-
Claudio, Pedro, Bauer, Christian, University of Central Florida
-
Abstract / Description
-
This research focuses on extracting road models from stereo video sequences taken from a moving vehicle. The proposed method combines color histogram based segmentation, active contours (snakes) and morphological processing to extract road boundary coordinates for conversion into Matlab or Multigen OpenFlight compatible polygonal representations. Color segmentation uses an initial truth frame to develop a color probability density function (PDF) of the road versus the terrain. Subsequent frames are segmented using a Maximum A Posteriori Probability (MAP) criterion and the resulting templates are used to update the PDFs. Color segmentation worked well where there was minimal shadowing and occlusion by other cars. A snake algorithm was used to find the road edges which were converted to 3D coordinates using stereo disparity and vehicle position information. The resulting 3D road models were accurate to within 1 meter.
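The MAP segmentation step can be sketched compactly: given color histograms for road and terrain (estimated from the truth frame) and class priors, each pixel is assigned to the class with the larger posterior. The histogram binning and the prior value here are assumptions for illustration, not the thesis's exact parameters.

```python
import numpy as np

def map_segment(frame, road_hist, terrain_hist, p_road=0.5, bins=16):
    """Label each pixel road (True) or terrain (False) by MAP classification.

    frame                   : H x W x 3 uint8 RGB image
    road_hist, terrain_hist : normalized color histograms with `bins` bins per
                              channel (class-conditional PDFs from the truth frame)
    p_road                  : prior probability of the road class
    """
    idx = (frame.astype(int) * bins) // 256                 # per-channel bin indices
    likelihood_road = road_hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    likelihood_terrain = terrain_hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    return likelihood_road * p_road > likelihood_terrain * (1.0 - p_road)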
-
Date Issued
-
2006
-
Identifier
-
CFE0001326, ucf:46994
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001326
-
-
Title
-
EMBODIED ABSTRACTION IN CINEMA: VIRTUAL PROSTHESIS AND FORESTS OF LIGHT.
-
Creator
-
Perez, Jonathan, Harris, Christopher, University of Central Florida
-
Abstract / Description
-
Our impressions of this lifeworld are contingent upon our ability to see (in every conflicting meaning of the word). This paper reviews a body of scholars who often share disparate, "incompatible" ontological commitments in an effort to examine how their ordering of concepts may reveal a deeper fluidity and permeability between all states of inquiry, creation and investigation into Being and Time. It begins with perspective, examining our subjective presence in the context of the camera apparatus, and considers how the mirroring of mechanical instrumentation, namely the rotary shutter and optics of the camera, has limited the true function of the cinema to a narrow, representational form. It considers the spiritual implications of the apparatus, exploring, regardless of what is filmed, what the method of inscription from still photos into motion means in regards to consciousness. The paper then investigates what the role of abstraction is in the context of a spiritually minded camera apparatus and attempts to reconcile Deleuzian and phenomenological perspectives about film consciousness. All of this, after all, is in conceptual support of the four-channel video installation Phase Space. The paper does not seek to, or claim to, apply readymade philosophical concepts to cinema; rather, it explicitly attempts to examine and discuss cinema on its own virtues and investigate how it can express itself as an experimental form of philosophy.
-
Date Issued
-
2012
-
Identifier
-
CFH0004219, ucf:44966
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004219
-
-
Title
-
EFFECTS OF INPUT MODALITY AND EXPERTISE ON WORKLOAD AND VIDEO GAME PERFORMANCE.
-
Creator
-
Kent, Travis, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
A recent trend in consumer and military electronics has been to allow operators the option to control the system via novel control methods. The most prevalent and available form of these methods is that of vocal control. Vocal control allows for the control of a system by speaking commands rather than manually inputting them. This has implications not only for increased productivity, but also for optimizing safety and assisting the disabled population. Past research has examined the potential costs and benefits of this novel control scheme with varying results. The purpose of this study was to further examine the relationship between modality of input, operator workload, and expertise. The results obtained indicated that vocal control may not be ideal in all situations as a method of input, as participants experienced significantly higher amounts of workload than those in the manual condition. Additionally, expertise may be more specific than previously thought, as participants in the vocal condition performed nearly identically at the task regardless of gaming expertise. The implications of the findings for this study suggest that vocal control be further examined as an effective method of user input, especially with regards to expertise and training effects.
-
Date Issued
-
2011
-
Identifier
-
CFH0004122, ucf:44877
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004122
-
-
Title
-
SPATIO-TEMPORAL MAXIMUM AVERAGE CORRELATION HEIGHT TEMPLATES IN ACTION RECOGNITION AND VIDEO SUMMARIZATION.
-
Creator
-
Rodriguez, Mikel, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Action recognition represents one of the most difficult problems in computer vision given that it embodies the combination of several uncertain attributes, such as the subtle variability associated with individual human behavior and the challenges that come with viewpoint variations, scale changes and different temporal extents. Nevertheless, action recognition solutions are critical in a great number of domains, such as video surveillance, assisted living environments, video search, interfaces, and virtual reality. In this dissertation, we investigate template-based action recognition algorithms that can incorporate the information contained in a set of training examples, and we explore how these algorithms perform in action recognition and video summarization.

First, we introduce a template-based method for recognizing human actions called Action MACH. Our approach is based on a Maximum Average Correlation Height (MACH) filter. MACH is capable of capturing intra-class variability by synthesizing a single Action MACH filter for a given action class. We generalize the traditional MACH filter to video (3D spatiotemporal volume), and vector valued data. By analyzing the response of the filter in the frequency domain, we avoid the high computational cost commonly incurred in template-based approaches. Vector valued data is analyzed using the Clifford Fourier transform, a generalization of the Fourier transform intended for both scalar and vector-valued data.

Next, we address three seldom explored challenges in template-based action recognition. The first is the recognition and localization of human actions in aerial videos obtained from unmanned aerial vehicles (UAVs), a new medium which presents unique challenges due to the small number of pixels per human, pose, and moving camera. The second issue we address is the incorporation of multiple positive and negative examples of a target action class when generating an action template. We address this issue by employing the Fukunaga-Koontz Transform as a means of generating a single quadratic template which, unlike traditional temporal templates (which rely on positive examples alone), effectively captures the variability associated with an action class by including both positive and negative examples in the template training process. Third, we explore the problem of generating video summaries that include specific actions of interest as opposed to all moving objects. In doing so, we explore the role of action templates in video summarization in an effort to provide a means of generating a compact video representation based on a set of activities of interest. We introduce an approach in which a user specifies the activities that interest him and the video is automatically condensed to a short clip which captures the most relevant events based on the user's preference. We follow the output summary video format of non-chronological video synopsis approaches, in which different events which occur at different times may be displayed concurrently, even though they never occur simultaneously in the original video. However, instead of assuming that all moving objects are interesting, priority is given to specific activities of interest which pertain to a user's query. This provides an efficient means of browsing through large collections of video for events of interest.
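The efficiency claim, that correlating a 3-D spatiotemporal template against a video can be done in the frequency domain, can be illustrated with a scalar-valued sketch using the standard FFT correlation theorem (the Clifford Fourier transform used for vector-valued data is beyond this snippet). The function and variable names are illustrative, not the dissertation's implementation.

```python
import numpy as np

def correlate_filter(video, action_filter):
    """Correlate a 3-D spatiotemporal filter with a video via the FFT.

    video         : T x H x W array of grayscale frames
    action_filter : t x h x w template with t <= T, h <= H, w <= W
    Returns the response volume; its peak marks the best space-time match.
    """
    # Zero-pad the filter to the video size so the spectra can be multiplied.
    padded = np.zeros_like(video, dtype=float)
    t, h, w = action_filter.shape
    padded[:t, :h, :w] = action_filter
    # Correlation theorem: correlation = IFFT( FFT(video) * conj(FFT(filter)) )
    spectrum = np.fft.fftn(video) * np.conj(np.fft.fftn(padded))
    return np.fft.ifftn(spectrum).real
```

Taking `np.unravel_index(response.argmax(), response.shape)` on the returned volume then gives the space-time location of the strongest filter response, avoiding an explicit sliding-window correlation.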
-
Date Issued
-
2010
-
Identifier
-
CFE0003313, ucf:48507
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003313
-
-
Title
-
DANGEROUS OPINIONS: PERCEPTION OF VIOLENT VIDEO GAMES ON JURY DECISION MAKING.
-
Creator
-
Jacobi, Brock, Sims, Valerie, University of Central Florida
-
Abstract / Description
-
The purpose of the study was to examine whether a potential juror would give harsher sentences to defendants based only on the manipulation of the defendant's personal hobby. This was investigated by manipulating the hobby through a hypothetical manslaughter scenario in a vignette. Participants were asked to answer questions pertaining to the defendant's guilt and potential sentencing. Results indicate that participants' sex, participants' authoritarianism, and defendant's hobby were significant factors. Significant interactions were found pertaining to whether the defendant should receive counseling across sex by violence and sex by avocation. These results are evidence that the use of jurors in the legal system is flawed and needs to be improved upon. Future research should examine an age distribution closer to the national mean, and the online setting should be replaced with an in-person mock jury that will have a more realistic group dynamic and higher ecological validity.
-
Date Issued
-
2014
-
Identifier
-
CFH0004569, ucf:45177
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004569
-
-
Title
-
PROTOTYPE OF AN EDUCATIONAL GAME FOR KNOWLEDGE RETENTION IN YOUTH HEALTH EDUCATION.
-
Creator
-
Vogel, Jennifer, Montagne, Euripides, University of Central Florida
-
Abstract / Description
-
There is some debate about the most effective and least controversial means of sex education in schools. In several states, state law does not require education about Sexually Transmitted Diseases and Human Immunodeficiency Virus Infection/Acquired Immunodeficiency Syndrome (STDs and HIV/AIDS). There is also debate about the effect and pervasiveness of sexual situations in video games and their effect on the healthy sexual development of adolescents. This research therefore aims to try to solve these two problems and answer the following question: Is it possible to represent sex in a more realistic and educational way through a video game while teaching more medically accurate and necessary information? The completion of this study will be able to provide some insights on the feasibility and benefits of widespread implementation of serious video games for health education in the United States and also point to the necessity of future research into this topic.
-
Date Issued
-
2014
-
Identifier
-
CFH0004656, ucf:45257
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004656
-
-
Title
-
Holistic Representations for Activities and Crowd Behaviors.
-
Creator
-
Solmaz, Berkan, Shah, Mubarak, Da Vitoria Lobo, Niels, Jha, Sumit, Ilie, Marcel, Moore, Brian, University of Central Florida
-
Abstract / Description
-
In this dissertation, we address the problem of analyzing the activities of people in a variety of scenarios, a problem commonly encountered in vision applications. The overarching goal is to devise new representations for the activities, in settings where individuals or a number of people may take part in specific activities. Different types of activities can be performed by either an individual at the fine level or by several people constituting a crowd at the coarse level. We take into account the domain specific information for modeling these activities. The summary of the proposed solutions is presented in the following.

The holistic description of videos is appealing for visual detection and classification tasks for several reasons including capturing the spatial relations between the scene components, simplicity, and performance [1, 2, 3]. First, we present a holistic (global) frequency spectrum based descriptor for representing the atomic actions performed by individuals such as: bench pressing, diving, hand waving, boxing, playing guitar, mixing, jumping, horse riding, hula hooping etc. We model and learn these individual actions for classifying complex user uploaded videos. Our method bypasses the detection of interest points, the extraction of local video descriptors and the quantization of local descriptors into a code book; it represents each video sequence as a single feature vector. This holistic feature vector is computed by applying a bank of 3-D spatio-temporal filters on the frequency spectrum of a video sequence; hence it integrates the information about the motion and scene structure. We tested our approach on two of the most challenging datasets, UCF50 [4] and HMDB51 [5], and obtained promising results which demonstrate the robustness and the discriminative power of our holistic video descriptor for classifying videos of various realistic actions.

In the above approach, a holistic feature vector of a video clip is acquired by dividing the video into spatio-temporal blocks then concatenating the features of the individual blocks together. However, such a holistic representation blindly incorporates all the video regions regardless of their contribution in classification. Next, we present an approach which improves the performance of the holistic descriptors for activity recognition. In our novel method, we improve the holistic descriptors by discovering the discriminative video blocks. We measure the discriminativity of a block by examining its response to a pre-learned support vector machine model. In particular, a block is considered discriminative if it responds positively for positive training samples, and negatively for negative training samples. We pose the problem of finding the optimal blocks as a problem of selecting a sparse set of blocks, which maximizes the total classifier discriminativity. Through a detailed set of experiments on benchmark datasets [6, 7, 8, 9, 5, 10], we show that our method discovers the useful regions in the videos and eliminates the ones which are confusing for classification, which results in significant performance improvement over the state-of-the-art.

In contrast to the scenes where an individual performs a primitive action, there may be scenes with several people, where crowd behaviors may take place. For these types of scenes the traditional approaches for recognition will not work due to severe occlusion and computational requirements. The number of videos is limited and the scenes are complicated, hence learning these behaviors is not feasible. For this problem, we present a novel approach, based on the optical flow in a video sequence, for identifying five specific and common crowd behaviors in visual scenes. In the algorithm, the scene is overlaid by a grid of particles, initializing a dynamical system which is derived from the optical flow. Numerical integration of the optical flow provides particle trajectories that represent the motion in the scene. Linearization of the dynamical system allows a simple and practical analysis and classification of the behavior through the Jacobian matrix. Essentially, the eigenvalues of this matrix are used to determine the dynamic stability of points in the flow and each type of stability corresponds to one of the five crowd behaviors. The identified crowd behaviors are (1) bottlenecks: where many pedestrians/vehicles from various points in the scene are entering through one narrow passage, (2) fountainheads: where many pedestrians/vehicles are emerging from a narrow passage only to separate in many directions, (3) lanes: where many pedestrians/vehicles are moving at the same speeds in the same direction, (4) arches or rings: where the collective motion is curved or circular, and (5) blocking: where there is an opposing motion and desired movement of groups of pedestrians is somehow prohibited. The implementation requires identifying a region of interest in the scene, and checking the eigenvalues of the Jacobian matrix in that region to determine the type of flow, which corresponds to various well-defined crowd behaviors. The eigenvalues are only considered in these regions of interest, consistent with the linear approximation and the implied behaviors. Since changes in eigenvalues can mean changes in stability, corresponding to changes in behavior, we can repeat the algorithm over clips of long video sequences to locate changes in behavior. This method was tested on real videos representing crowd and traffic scenes.
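The stability analysis in the final part reduces, within a region of interest, to fitting a linear model flow = A p + b to the optical flow and inspecting the eigenvalues of the 2x2 Jacobian A. The sketch below follows the usual critical-point taxonomy; the mapping from eigenvalue signs to the five behaviors is an interpretation of the abstract, not the dissertation's exact decision rules.

```python
import numpy as np

def classify_region(points, flow_vectors):
    """Classify crowd behavior in a region from its optical flow.

    points       : N x 2 array of (x, y) particle positions in the region
    flow_vectors : N x 2 array of optical-flow vectors at those positions
    Fits flow = A @ p + b by least squares and inspects the eigenvalues of A.
    """
    X = np.hstack([points, np.ones((len(points), 1))])      # [x, y, 1] rows
    coeffs, *_ = np.linalg.lstsq(X, flow_vectors, rcond=None)
    A = coeffs[:2].T                                         # 2x2 Jacobian
    eig = np.linalg.eigvals(A)
    re, im = eig.real, eig.imag
    if np.any(np.abs(im) > 1e-6):
        return "arch/ring"        # complex eigenvalues: curved or circular motion
    if np.all(re < -1e-6):
        return "bottleneck"       # stable node: flow converges to a passage
    if np.all(re > 1e-6):
        return "fountainhead"     # unstable node: flow diverges from a passage
    if np.all(np.abs(re) < 1e-6):
        return "lane"             # near-zero eigenvalues: parallel, uniform flow
    return "blocking"             # mixed signs (saddle): opposing motion
```

Repeating this over successive clips, as the abstract notes, would flag changes in eigenvalue structure as changes in crowd behavior.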
-
Date Issued
-
2013
-
Identifier
-
CFE0004941, ucf:49638
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004941
-
-
Title
-
Scene Understanding for Real Time Processing of Queries over Big Data Streaming Video.
-
Creator
-
Aved, Alexander, Hua, Kien, Foroosh, Hassan, Zou, Changchun, Ni, Liqiang, University of Central Florida
-
Abstract / Description
-
With heightened security concerns across the globe and the increasing need to monitor, preserve and protect infrastructure and public spaces to ensure proper operation, quality assurance and safety, numerous video cameras have been deployed. Accordingly, they also need to be monitored effectively and efficiently. However, relying on human operators to constantly monitor all the video streams is not scalable or cost effective. Humans can become subjective, fatigued, even exhibit bias and it is difficult to maintain high levels of vigilance when capturing, searching and recognizing events that occur infrequently or in isolation.

These limitations are addressed in the Live Video Database Management System (LVDBMS), a framework for managing and processing live motion imagery data. It enables rapid development of video surveillance software much like traditional database applications are developed today. Such developed video stream processing applications and ad hoc queries are able to "reuse" advanced image processing techniques that have been developed. This results in lower software development and maintenance costs. Furthermore, the LVDBMS can be intensively tested to ensure consistent quality across all associated video database applications. Its intrinsic privacy framework facilitates a formalized approach to the specification and enforcement of verifiable privacy policies. This is an important step towards enabling a general privacy certification for video surveillance systems by leveraging a standardized privacy specification language.

With the potential to impact many important fields ranging from security and assembly line monitoring to wildlife studies and the environment, the broader impact of this work is clear. The privacy framework protects the general public from abusive use of surveillance technology; success in addressing the "trust" issue will enable many new surveillance-related applications. Although this research focuses on video surveillance, the proposed framework has the potential to support many video-based analytical applications.
-
Date Issued
-
2013
-
Identifier
-
CFE0004648, ucf:49900
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004648
-
-
Title
-
"I Play to Beat the Machine": Masculinity and the Video Game Industry in the United States.
-
Creator
-
McDivitt, Anne, Foster, Amy, Cassanello, Robert, Solonari, Vladimir, University of Central Florida
-
Abstract / Description
-
This thesis examines the video game industry within the United States from the first game that was created in 1958 until the shift to Japanese dominance of the industry in 1985, and how white, middle class masculinity was reflected through the sphere of video gaming. The first section examines the projections of white, middle class masculinity in U.S. culture and how that affected the types of video games that the developers created. The second section examines reflections of this masculine culture that surrounded video gaming in the 1970s and 1980s in the developers, gamers, and the media, while demonstrating how the masculine realm of video gaming was constructed. Lastly, a shift occurred after the 1980 release of Pac-Man, which led to a larger number of women gamers and developers, as well as an industry that embraced a broader audience. It concludes with the crash of the video game industry within the United States in 1983, which allowed Japanese video game companies to gain dominance in video gaming worldwide instead of the U.S. companies, such as Atari.
-
Date Issued
-
2013
-
Identifier
-
CFE0004889, ucf:49645
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004889