Current Search: vision
-
-
Title
-
DEPTH FROM DEFOCUSED MOTION.
-
Creator
-
Myles, Zarina, da Vitoria Lobo, Niels, University of Central Florida
-
Abstract / Description
-
Motion in depth and/or zooming causes defocus blur. This work presents a solution to the problem of using defocus blur and optical flow information to compute depth at points that defocus when they move. We first formulate a novel algorithm which recovers defocus blur and affine parameters simultaneously. Next, we formulate a novel relationship (the blur-depth relationship) between defocus blur, relative object depth, and three parameters based on camera motion and intrinsic camera parameters. We can handle the situation where a single image has points which have defocused, become sharper, or are focally unperturbed. Moreover, our formulation is valid regardless of whether the defocus is due to the image plane being in front of or behind the point of sharp focus. The blur-depth relationship requires a sequence of at least three images taken with the camera moving either towards or away from the object. It can be used to obtain an initial estimate of relative depth using one of several non-linear methods. We demonstrate a solution based on the Extended Kalman Filter in which the measurement equation is the blur-depth relationship. The estimate of relative depth is then used to compute an initial estimate of camera motion parameters. To refine depth values, the relative depth and camera motion values are then input into a second Extended Kalman Filter in which the measurement equations are the discrete motion equations. This set of cascaded Kalman filters can be employed iteratively over a longer sequence of images to further refine depth. We conduct several experiments on real scenery to demonstrate the range of object shapes that the algorithm can handle. We show that fairly good estimates of depth can be obtained with just three images.
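The Extended Kalman Filter step described in this abstract can be illustrated with a toy scalar sketch. The measurement model `blur = k / z`, the constant `k`, the noise levels, and the blur readings below are all illustrative assumptions, simplified stand-ins for the dissertation's actual blur-depth relationship:

```python
# Minimal scalar extended Kalman filter sketch: estimating relative depth z
# from defocus-blur measurements. The measurement model blur = k / z is a
# simplified stand-in for the blur-depth relationship; k, the noise levels
# q and r, and the blur values below are illustrative assumptions.

def ekf_depth(blur_measurements, k=2.0, z0=1.0, p0=1.0, q=1e-4, r=1e-2):
    z, p = z0, p0                      # state (depth) and its variance
    for b in blur_measurements:
        p += q                         # predict: depth assumed near-constant
        h = -k / (z * z)               # Jacobian of blur = k / z w.r.t. z
        s = h * p * h + r              # innovation variance
        gain = p * h / s               # Kalman gain
        z += gain * (b - k / z)        # update with the measurement residual
        p *= (1.0 - gain * h)          # update variance
    return z

# Three synthetic blur readings consistent with a true depth of 4.0 (k = 2.0);
# starting from z0 = 1.0, each update moves the estimate toward the truth.
est = ekf_depth([0.5, 0.5, 0.5])
```

In the dissertation the refined depth then seeds a second EKF whose measurement equations are the discrete motion equations; that cascade is omitted here.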
-
Date Issued
-
2004
-
Identifier
-
CFE0000135, ucf:46179
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000135
-
-
Title
-
AUTONOMOUS ROBOTIC GRASPING IN UNSTRUCTURED ENVIRONMENTS.
-
Creator
-
Jabalameli, Amirhossein, Behal, Aman, Haralambous, Michael, Pourmohammadi Fallah, Yaser, Boloni, Ladislau, Xu, Yunjun, University of Central Florida
-
Abstract / Description
-
A crucial problem in robotics is interacting with known or novel objects in unstructured environments. While the convergence of a multitude of research advances is required to address this problem, our goal is to describe a framework that employs the robot's visual perception to identify and execute an appropriate grasp to pick and place novel objects. Analytical approaches search for solutions through kinematic and dynamic formulations. Data-driven methods, on the other hand, retrieve grasps according to prior knowledge of the target object, human experience, or information obtained from acquired data. In this dissertation, we propose a framework based on the supporting principle that potential contact regions for a stable grasp can be found by searching for (i) sharp discontinuities and (ii) regions of locally maximal principal curvature in the depth map. In addition to suggestions from empirical evidence, we discuss this principle by applying the concepts of force closure and wrench convexes. The key point is that no prior knowledge of objects is used in the grasp planning process; nevertheless, the obtained results show that the approach deals successfully with objects of different shapes and sizes. We believe the proposed work is novel because describing the visible portion of objects by the aforementioned edges in the depth map facilitates grasp set-point extraction in the same way as image processing methods focused on small 2D image areas, rather than clustering and analyzing huge sets of 3D point-cloud coordinates. In fact, this approach dispenses with object reconstruction altogether. These features result in low computational cost and make it possible to run the proposed algorithm in real time. Finally, the performance of the approach is successfully validated by applying it to scenes with both single and multiple objects, in both simulation and real-world experimental setups.
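The first cue named in this abstract, sharp discontinuities in the depth map, can be sketched in one dimension. The depth row, the jump threshold, and the 1-D simplification are assumptions for clarity; the dissertation's second cue, locally maximal principal curvature, is omitted:

```python
# Illustrative sketch: locate sharp depth discontinuities along one row of a
# depth map as candidate grasp regions. Values and threshold are made up.

def depth_discontinuities(row, threshold=0.1):
    """Return indices where adjacent depth readings jump by more than threshold."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]

# A synthetic depth row (meters): a box at 0.5 m in front of a table at 0.9 m.
row = [0.9, 0.9, 0.9, 0.5, 0.5, 0.5, 0.9, 0.9]
edges = depth_discontinuities(row)   # jumps at the two box boundaries
```

Working on such small 2D neighborhoods of the depth image, rather than a full 3D point cloud, is what keeps the computational cost low.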
-
Date Issued
-
2019
-
Identifier
-
CFE0007892, ucf:52757
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007892
-
-
Title
-
Visionary Ophthalmics: Confluence of Computer Vision and Deep Learning for Ophthalmology.
-
Creator
-
Morley, Dustin, Foroosh, Hassan, Bagci, Ulas, Gong, Boqing, Mohapatra, Ram, University of Central Florida
-
Abstract / Description
-
Ophthalmology is a medical field ripe with opportunities for meaningful application of computer vision algorithms. The field utilizes data from multiple disparate imaging techniques, ranging from conventional cameras to tomography, comprising a diverse set of computer vision challenges. Computer vision has a rich history of techniques that can adequately meet many of these challenges. However, the field has undergone something of a revolution in recent times as deep learning techniques have sprung into the forefront following advances in GPU hardware. This development raises important questions about how best to leverage insights from both modern deep learning approaches and more classical computer vision approaches for a given problem. In this dissertation, we tackle challenging computer vision problems in ophthalmology using methods from all across this spectrum. Perhaps our most significant work is a highly successful iris registration algorithm for use in laser eye surgery. This algorithm relies on matching features extracted from the structure tensor and a Gabor wavelet, a classically driven approach that does not utilize modern machine learning. However, drawing on insight from the deep learning revolution, we demonstrate successful application of backpropagation to optimize the registration significantly faster than the alternative of relying on finite differences. Towards the other end of the spectrum, we also present a novel framework for improving RANSAC segmentation algorithms by utilizing a convolutional neural network (CNN) trained on a RANSAC-based loss function. Finally, we apply state-of-the-art deep learning methods to the problem of pathological fluid detection in optical coherence tomography images of the human retina, using a novel retina-specific data augmentation technique to greatly expand the data set. Altogether, our work demonstrates the benefits of a holistic view of computer vision, one that leverages deep learning and its associated insights without neglecting techniques and insights from the previous era.
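The analytic-gradient versus finite-difference contrast mentioned in this abstract can be shown on a stand-in problem. The 1-D cost f(t) = (t - 3)^2 and the step size are illustrative assumptions, not the actual iris-registration objective:

```python
# Sketch of why an exact (backpropagation-style) gradient beats finite
# differences: the analytic form needs no extra cost evaluations per step,
# while central differences need two per parameter. The cost function here
# is a made-up 1-D stand-in for the registration objective.

def f(t):
    return (t - 3.0) ** 2            # toy alignment cost, minimized at t = 3

def grad_analytic(t):
    return 2.0 * (t - 3.0)           # exact derivative (what backprop provides)

def grad_fd(fn, t, h=1e-5):
    return (fn(t + h) - fn(t - h)) / (2 * h)   # two extra evaluations of fn

# Gradient descent with the analytic gradient converges to the minimum.
t = 0.0
for _ in range(50):
    t -= 0.1 * grad_analytic(t)
```

Both gradients agree numerically; the saving is in function evaluations, which for a high-dimensional registration objective multiplies across every parameter.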
-
Date Issued
-
2018
-
Identifier
-
CFE0007058, ucf:52001
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007058
-
-
Title
-
Vision-Based Testbeds for Control System Applications.
-
Creator
-
Sivilli, Robert, Xu, Yunjun, Gou, Jihua, Cho, Hyoung, Pham, Khanh, University of Central Florida
-
Abstract / Description
-
In the field of control systems, testbeds are a pivotal step in the validation and improvement of new algorithms for different applications. They provide a safe, controlled environment that typically has a significantly lower cost of failure than the final application. Vision systems provide nonintrusive methods of measurement that can be easily implemented for various setups and applications. This work presents methods for modeling, removing distortion, calibrating, and rectifying single- and two-camera systems, as well as two very different applications of vision-based control system testbeds: deflection control of shape memory polymers and trajectory planning for mobile robots. First, a testbed for the modeling and control of shape memory polymers (SMP) is designed. Red-green-blue (RGB) thresholding is used to assist in the webcam-based 3D reconstruction of points of interest. A PID-based controller is designed and shown to work with SMP samples, while state-space models are identified from step-input responses. The models are used to develop a linear quadratic regulator that is shown to work in simulation. A simple-to-use graphical interface is also designed for fast and simple testing of a series of samples. Second, a robot testbed is designed to test new trajectory planning algorithms. A template-based predictive search algorithm is investigated to process the images obtained through a low-cost webcam vision system, which is used to monitor the testbed environment. A user-friendly graphical interface is also developed so that the functionalities of the webcam, robots, and optimizations are automated. The testbeds are used to demonstrate a wavefront-enhanced, B-spline augmented virtual motion camouflage algorithm for single or multiple robots navigating through an obstacle-dense and changing environment, while considering inter-vehicle conflicts, obstacle avoidance, nonlinear dynamics, and different constraints. In addition, it is expected that this testbed can be used to test other vehicle motion planning and control algorithms.
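The RGB thresholding step used for locating points of interest can be sketched as follows. The threshold values, the red-marker assumption, and the pixel data are all illustrative, not the dissertation's calibration or reconstruction pipeline:

```python
# Sketch of RGB thresholding for point-of-interest tracking: flag strongly
# red pixels, then take their centroid as the tracked image location.
# Thresholds and the synthetic "image" below are made-up assumptions.

def is_target_pixel(r, g, b, r_min=150, g_max=100, b_max=100):
    """Simple RGB threshold: accept pixels that are strongly red."""
    return r >= r_min and g <= g_max and b <= b_max

def centroid(points):
    """Mean (x, y) of the flagged pixel positions."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Synthetic pixels as (x, y, r, g, b): two red marker pixels and one green.
image = [(1, 2, 200, 50, 50), (2, 2, 210, 40, 60), (5, 5, 30, 200, 30)]
marker = centroid([(x, y) for x, y, r, g, b in image if is_target_pixel(r, g, b)])
```

In a two-camera setup, centroids like this from each calibrated view are what feed the 3D reconstruction of the deflection point.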
-
Date Issued
-
2012
-
Identifier
-
CFE0004601, ucf:49187
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004601
-
-
Title
-
A Study of Localization and Latency Reduction for Action Recognition.
-
Creator
-
Masood, Syed, Tappen, Marshall, Foroosh, Hassan, Stanley, Kenneth, Sukthankar, Rahul, University of Central Florida
-
Abstract / Description
-
The success of recognizing periodic actions in single-person, simple-background datasets such as Weizmann and KTH has created a need for more complex datasets to push the performance of action recognition systems. In this work, we create a new synthetic action dataset and use it to highlight weaknesses in current recognition systems. Experiments show that introducing background complexity to action video sequences causes a significant degradation in recognition performance. Moreover, this degradation cannot be fixed by fine-tuning system parameters or by selecting better feature points. Instead, we show that the problem lies in the spatio-temporal cuboid volume extracted at the interest point locations. Having identified the problem, we show how improved results can be achieved by simple modifications to the cuboids. The above method, however, requires near-perfect localization of the action within a video sequence. To achieve this objective, we present a two-stage, weakly supervised probabilistic model for simultaneous localization and recognition of actions in videos. Different from previous approaches, our method is novel in that it (1) eliminates the need for manual annotations in the training procedure and (2) does not require any human detection or tracking in the classification stage. The first stage of our framework is a probabilistic action localization model which extracts the most promising sub-windows in a video sequence where an action can take place. We use a non-linear classifier in the second stage of our framework for the final classification task. We show the effectiveness of our proposed model on two well-known real-world datasets: the UCF Sports and UCF11 datasets. Another application of the weakly supervised probabilistic model proposed above is in the gaming environment. An important aspect of designing interactive, action-based interfaces is reliably recognizing actions with minimal latency. High latency causes the system's feedback to lag behind and thus significantly degrades the interactivity of the user experience. With slight modification to the weakly supervised probabilistic model we proposed for action localization, we show how it can be used to reduce latency when recognizing actions in Human-Computer Interaction (HCI) environments. This latency-aware learning formulation trains a logistic-regression-based classifier that automatically determines distinctive canonical poses from the data and uses these to robustly recognize actions in the presence of ambiguous poses. We introduce a novel (publicly released) dataset for the purpose of our experiments. Comparisons of our method against both a Bag of Words and a Conditional Random Field (CRF) classifier show improved recognition performance for both pre-segmented and online classification tasks.
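The latency idea in this abstract, deciding as soon as the evidence is strong rather than waiting for the whole clip, can be sketched with thresholded per-frame scores. The scores and threshold are made up, and the thesis actually learns canonical poses with a logistic-regression classifier rather than thresholding raw scores:

```python
# Sketch of latency-aware recognition: emit a decision at the first frame
# whose best class score clears a confidence threshold, instead of waiting
# for the clip to end. Frame scores and threshold are illustrative.

def early_decision(frame_scores, threshold=0.9):
    """Return (label, frame_index) at the first confident frame; if no frame
    is confident, fall back to the last frame's best label."""
    label = None
    for i, scores in enumerate(frame_scores):
        label, conf = max(scores.items(), key=lambda kv: kv[1])
        if conf >= threshold:
            return label, i          # low-latency early decision
    return label, len(frame_scores) - 1

label, at = early_decision([
    {"wave": 0.4, "punch": 0.6},
    {"wave": 0.2, "punch": 0.95},    # confident here, so the system stops
    {"wave": 0.1, "punch": 0.99},
])
```

The earlier the confident frame, the lower the feedback latency in an interactive setting.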
-
Date Issued
-
2012
-
Identifier
-
CFE0004575, ucf:49210
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004575
-
-
Title
-
MARKERLESS TRACKING USING POLAR CORRELATION OF CAMERA OPTICAL FLOW.
-
Creator
-
Gupta, Prince, da Vitoria Lobo, Niels, University of Central Florida
-
Abstract / Description
-
We present a novel, real-time, markerless vision-based tracking system employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and it is able to track large-range motions even in outdoor settings. We also present how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms for recovering motion parameters at different levels of complete recovery. We show how optical flow in opposing cameras can be used to recover the motion parameters of the multi-camera rig. Experimental results show gesture recognition accuracies of 88.0%, 90.7%, and 86.7% for our three techniques, respectively, across a set of 15 gestures.
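The cancellation idea can be sketched with a toy decomposition. It assumes, as a simplification, that one motion component induces a common flow term in an opposing camera pair while the other induces equal-and-opposite terms; the actual geometry and the numbers below are illustrative, not the dissertation's derivation:

```python
# Toy sketch of opposing-camera flow cancellation: averaging the two flows
# cancels the equal-and-opposite component, half the difference cancels the
# common one. The flow values are synthetic 1-D stand-ins.

def split_motion(flow_cam_a, flow_cam_b):
    """Split paired flows into a common component and a differential one."""
    common = [(a + b) / 2 for a, b in zip(flow_cam_a, flow_cam_b)]
    differential = [(a - b) / 2 for a, b in zip(flow_cam_a, flow_cam_b)]
    return common, differential

# Suppose one motion contributes +2.0 to both cameras and another
# contributes +3.0 to camera A and -3.0 to camera B:
common, differential = split_motion([5.0], [-1.0])
```

Separating the two contributions per opposing pair is what lets the rig recover both translation direction and angular velocity from sparse flow alone.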
-
Date Issued
-
2010
-
Identifier
-
CFE0003163, ucf:48611
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003163
-
-
Title
-
Human Action Localization and Recognition in Unconstrained Videos.
-
Creator
-
Boyraz, Hakan, Tappen, Marshall, Foroosh, Hassan, Lin, Mingjie, Zhang, Shaojie, Sukthankar, Rahul, University of Central Florida
-
Abstract / Description
-
As imaging systems become ubiquitous, the ability to recognize human actions is becoming increasingly important. Just as in the object detection and recognition literature, action recognition can be roughly divided into classification tasks, where the goal is to classify a video according to the action depicted in it, and detection tasks, where the goal is to detect and localize a human performing a particular action. A growing literature is demonstrating the benefits of localizing discriminative sub-regions of images and videos when performing recognition tasks. In this thesis, we address the action detection and recognition problems. Action detection in video is a particularly difficult problem because actions must not only be recognized correctly, but must also be localized in the 3D spatio-temporal volume. We introduce a technique that transforms the 3D localization problem into a series of 2D detection tasks. This is accomplished by dividing the video into overlapping segments, then representing each segment with a 2D video projection. The advantage of the 2D projection is that it makes it convenient to apply the best techniques from object detection to the action detection problem. We also introduce a novel, straightforward method for searching the 2D projections to localize actions, termed Two-Point Subwindow Search (TPSS). Finally, we show how to connect the local detections in time using a chaining algorithm to identify the entire extent of the action. Our experiments show that video projection outperforms the latest results on action detection in a direct comparison. Second, we present a probabilistic model that learns to identify discriminative regions in videos from weakly supervised data, where each video clip is only assigned a label describing what action is present in the frame or clip. While our first system requires every action to be manually outlined in every frame of the video, this second system only requires that the video be given a single high-level tag. From this data, the system is able to identify discriminative regions that correspond well to the regions containing the actual actions. Our experiments on both the MSR Action Dataset II and the UCF Sports Dataset show that the localizations produced by this weakly supervised system are comparable in quality to localizations produced by systems that require each frame to be manually annotated. This system is able to detect actions in both 1) non-temporally segmented action videos and 2) recognition tasks where a single label is assigned to the clip. We also demonstrate the action recognition performance of our method on two complex datasets, HMDB and UCF101. Third, we extend our weakly supervised framework by replacing the recognition stage with a two-stage neural network and applying dropout to prevent overfitting of the parameters on the training data. The dropout technique was recently introduced to prevent overfitting of the parameters in deep neural networks and has been applied successfully to the object recognition problem. To our knowledge, this is the first system using dropout for the action recognition problem. We demonstrate that using dropout improves action recognition accuracies on the HMDB and UCF101 datasets.
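The segment-to-projection step can be sketched with a toy collapse over time. Representing the projection as a per-pixel maximum of per-frame score maps is an assumption chosen for clarity; the dissertation's actual projection may be computed differently:

```python
# Toy stand-in for the 2D video projection: collapse a segment of 2-D
# per-frame score maps into a single 2-D map by taking the per-pixel
# maximum over time. A 2-D detector can then be run on the result.

def video_projection(frames):
    """Per-pixel max over a list of equally sized 2-D frames."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[max(frame[r][c] for frame in frames) for c in range(cols)]
            for r in range(rows)]

# Two tiny 2x2 "frames" from one segment:
projection = video_projection([
    [[0, 1],
     [2, 0]],
    [[3, 0],
     [0, 0]],
])
```

Whatever the exact projection, the payoff named in the abstract is the same: 2D object-detection machinery, such as subwindow search, becomes directly applicable per segment, with a chaining step recovering the temporal extent.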
-
Date Issued
-
2013
-
Identifier
-
CFE0004977, ucf:49562
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004977
-
-
Title
-
"BUT THIS IS WHAT I SEE; THIS IS WHAT I SEE": RE-IMAGINING GENDERED SUBJECTIVITY THROUGH THE WOMAN ARTIST IN PHELPS, JOHNSTONE, AND WOOLF.
-
Creator
-
Wayne, Heather, Jones, Anna, University of Central Florida
-
Abstract / Description
-
Since the publication of Laura Mulvey's influential article "Visual Pleasure and Narrative Cinema," in which she identifies the pervasive presence of the male gaze in Hollywood cinema, scholars have sought to account for the female spectator in her paradigm of gendered vision. This thesis suggests that women writers have long debated the problem of the female spectator through literary depictions of the female artist. Women writers of the nineteenth and twentieth centuries, including Elizabeth Stuart Phelps, Edith Johnstone, and Virginia Woolf, recognized the power of the woman artist to undermine the trope of the male gazing subject and a passive female object. Examining Phelps's The Story of Avis (1877), Johnstone's A Sunless Heart (1894), and Woolf's To the Lighthouse (1927) illustrates how the woman artist's active vision disrupts Mulvey's "active/male and passive/female" binary of vision. Phelps's painter-heroine Avis destabilizes the power of the male gaze not only by exerting her own vision, but also by acting as an active object to manipulate the way she is seen. Johnstone uses the artist Gasparine to demonstrate the dangers of vision shaped by either aesthetic or political conventions, suggesting that even feminist idealism can promote the objectification of its heroines. Finally, Woolf redefines the terms of objectification through the painter Lily Briscoe, whose vision imbues material objects with subjectivity, thereby going beyond the boundaries between male and female to blur the distinction between subject and object. Through their novels, Phelps, Johnstone, and Woolf suggest that depictions of human experience need to be radically re-thought in order to adequately represent the complexity of subjectivity.
-
Date Issued
-
2010
-
Identifier
-
CFE0003291, ucf:48491
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003291
-
-
Title
-
DESIGNING LIGHT FILTERS TO DETECT SKIN USING A LOW-POWERED SENSOR.
-
Creator
-
Tariq, Muhammad, Wisniewski, Pamela, Gong, Boqing, Leavens, Gary, University of Central Florida
-
Abstract / Description
-
Detection of nudity in photos and videos, especially prior to uploading to the internet, is vital to solving many problems related to adolescent sexting, the distribution of child pornography, and cyber-bullying. The problem with using nudity detection algorithms as a means to combat these problems is that 1) it implies that a digitized nude photo of a minor already exists (i.e., child pornography), and 2) there are real ethical and legal concerns around the distribution and processing of child pornography. Once a camera captures an image, that image is no longer secure. Therefore, we need to develop new privacy-preserving solutions that prevent the digital capture of nude imagery of minors. My research takes a first step toward this long-term goal: in this thesis, I examine the feasibility of using a low-powered sensor to detect skin dominance (defined as an image comprised of 50% or more human skin tone) in a visual scene. By designing four custom light filters to enhance the digital information extracted from 300 scenes captured with the sensor (without digitizing high-fidelity visual features), I was able to detect a skin-dominant scene with 83.7% accuracy, 83% precision, and 85% recall. The long-term goal is to design a low-powered vision sensor that can be mounted on the digital camera lens of a teen's mobile device to detect and/or prevent the capture of nude imagery. Thus, I discuss the limitations of this work toward that larger goal, as well as future research directions.
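The skin-dominance definition above (50% or more of the scene being skin tone) can be sketched directly. The RGB rule below is a common rule-of-thumb heuristic, not the thesis's method, which instead uses custom light filters on a low-powered sensor precisely to avoid digitizing a full RGB image:

```python
# Sketch of skin-dominance classification on ordinary RGB pixels. The skin
# rule is a widely used heuristic assumed here for illustration only.

def is_skin_tone(r, g, b):
    """Rule-of-thumb RGB skin test: reddish, brighter than green/blue."""
    return (r > 95 and g > 40 and b > 20 and
            r > g and r > b and (r - min(g, b)) > 15)

def skin_dominant(pixels, ratio=0.5):
    """True when at least `ratio` of the pixels look like skin."""
    hits = sum(1 for p in pixels if is_skin_tone(*p))
    return hits >= ratio * len(pixels)

# Synthetic scene: 6 of 10 pixels are skin-like, so it is skin dominant.
scene = [(200, 150, 120)] * 6 + [(20, 20, 20)] * 4
dominant = skin_dominant(scene)
```

The privacy point of the thesis is that an equivalent decision can be made from filtered low-fidelity sensor readings, so no high-resolution image ever needs to exist.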
-
Date Issued
-
2017
-
Identifier
-
CFE0006806, ucf:51792
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006806
-
-
Title
-
A Study of the Relationship between Continuous Professional Learning Community Implementation and Student Achievement in a Large Urban School District.
-
Creator
-
Sutula, Erica, Taylor, Rosemarye, Baldwin, Lee, Doherty, Walter, Ellis, Amanda, University of Central Florida
-
Abstract / Description
-
The purpose of this causal-comparative study was to understand the differences in comparative data across a large urban school district and to examine the continued effects of the PLC model on teacher and leader perceptions of the model and on student achievement as measured by the 2012 and 2014 FCAT 2.0 Reading and Mathematics. The population for this study included all instructional and leadership personnel in schools within the target school district, with a final convenience sample across the two school years of N = 5,954. The research questions for this study focused on (a) the change in teachers' perceptions from the 2012 to the 2014 school year, (b) the impact, if any, of teacher and leader perceptions on student performance on the FCAT, and (c) the differences between the perceptions of teachers and leaders. This study added to the findings of Ellis (2010), expanding the understanding of the complexities of collaboration among teachers, administrators, and students. Conclusions from the quantitative analysis found a statistically significant difference in how teachers perceived the implementation of collaborative time between the 2012 and 2014 school years. Further analysis concluded that there was a statistically significant positive relationship between continual PLC implementation and student achievement for Grade 3 Reading and Mathematics. Other grade levels showed educationally significant findings for the impact of continual implementation on student achievement, but the results did not meet the criteria for statistical significance. There was not a statistically significant relationship between any other measure and any of the considered standardized test scores. Statistically significant differences were found between the 2012 and 2014 perceptions of teachers and leaders. Recommendations from the quantitative analysis include the importance of having collaborative time for teachers. Furthermore, leaders should focus on maximizing the effectiveness of collaborative time by curtailing the amount of required administrative tasks, thereby allowing teachers to focus on designing instructional interventions and analyzing student data through collaboration. This study adds to the current literature demonstrating the general perceptions and impacts of long-term implementation of the PLC model; when paired with Ellis' (2010) study, it is clear that teachers need continual work within one collaborative model, modeling of collaborative practices by leadership, and support from school leaders for collaborative time to begin positively impacting student achievement.
-
Date Issued
-
2017
-
Identifier
-
CFE0006802, ucf:51812
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006802
-
-
Title
-
Online, Supervised and Unsupervised Action Localization in Videos.
-
Creator
-
Soomro, Khurram, Shah, Mubarak, Heinrich, Mark, Hu, Haiyan, Bagci, Ulas, Yun, Hae-Bum, University of Central Florida
-
Abstract / Description
-
Action recognition classifies a given video among a set of action labels, whereas action localization determines the location of an action in addition to its class. The overall aim of this dissertation is action localization. Many of the existing action localization approaches exhaustively search (spatially and temporally) for an action in a video. However, as the search space increases with high-resolution and longer-duration videos, it becomes impractical to use such sliding window techniques. The first part of this dissertation presents an efficient approach for localizing actions by learning contextual relations between different video regions during training. In testing, we use the context information to estimate the probability of each supervoxel belonging to the foreground action and use a Conditional Random Field (CRF) to localize actions. In the above method, and in typical approaches to this problem, localization is performed in an offline manner where all the video frames are processed together. This prevents timely localization and prediction of actions/interactions, an important consideration for many tasks including surveillance and human-machine interaction. Therefore, in the second part of this dissertation we propose an online approach to the challenging problem of localization and prediction of actions/interactions in videos. In this approach, we use human poses and superpixels in each frame to train discriminative appearance models and perform online prediction of actions/interactions with a Structural SVM. The above two approaches rely on human supervision in the form of assigning action class labels to videos and annotating actor bounding boxes in each frame of training videos. Therefore, in the third part of this dissertation we address the problem of unsupervised action localization. Given unlabeled videos without annotations, this approach aims at: 1) discovering action classes using a discriminative clustering approach, and 2) localizing actions using a variant of the Knapsack problem.
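The Knapsack-based localization step mentioned above can be illustrated with the standard 0/1 knapsack dynamic program. The abstract only states that a variant of the Knapsack problem is used, so the segment scores, lengths, and budget below are invented purely for illustration:

```python
def select_segments(values, lengths, budget):
    """0/1 knapsack sketch: pick video segments maximizing total
    'actionness' score subject to a total-length budget."""
    n = len(values)
    # dp[i][b] = best score using the first i segments within budget b
    dp = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]          # skip segment i-1
            if lengths[i - 1] <= b:          # or take it, if it fits
                dp[i][b] = max(dp[i][b],
                               dp[i - 1][b - lengths[i - 1]] + values[i - 1])
    # recover the chosen segment indices by walking the table backwards
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            chosen.append(i - 1)
            b -= lengths[i - 1]
    return chosen[::-1], dp[n][budget]

# toy example: three candidate segments, length budget of 5 frames
chosen, total = select_segments([3.0, 4.0, 5.0], [2, 3, 4], budget=5)
```

Here the first two segments (combined score 7.0, combined length 5) beat the single high-scoring long segment.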
-
Date Issued
-
2017
-
Identifier
-
CFE0006917, ucf:51685
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006917
-
-
Title
-
Computer Vision Based Structural Identification Framework for Bridge Health Monitoring.
-
Creator
-
Khuc, Tung, Catbas, Necati, Oloufa, Amr, Mackie, Kevin, Zaurin, Ricardo, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
The objective of this dissertation is to develop a comprehensive Structural Identification (St-Id) framework with damage detection for bridge-type structures by using cameras and computer vision technologies. Traditional St-Id frameworks rely on conventional sensors. In this study, the collected input and output data employed in the St-Id system are acquired by a series of vision-based measurements. The following novelties are proposed, developed and demonstrated in this project: a) vehicle load (input) modeling using computer vision, b) bridge response (output) measurement using a fully non-contact approach based on video/image processing, c) image-based structural identification using input-output measurements and new damage indicators. The input (loading) data due to vehicles, such as vehicle weights and vehicle locations on the bridges, are estimated by employing computer vision algorithms (detection, classification, and localization of objects) based on video images of the vehicles. Meanwhile, the output data, structural displacements, are obtained by defining and tracking image key-points at the measurement locations. Subsequently, the input and output data sets are analyzed to construct a novel type of damage indicator, named the Unit Influence Surface (UIS). Finally, a new damage detection and localization framework is introduced that does not require a network of sensors, but a much smaller number of sensors.
The main research significance is the first-time development of algorithms that transform the measured video images into a form that is highly damage-sensitive/change-sensitive for bridge assessment within the context of Structural Identification with input and output characterization. The study exploits the unique attributes of computer vision systems, where the signal is continuous in space. This requires new adaptations and transformations that can handle computer vision data/signals for structural engineering applications. This research will significantly advance current sensor-based structural health monitoring with computer vision techniques, leading to practical applications for damage detection of complex structures with a novel approach. By using computer vision algorithms and cameras as special sensors for structural health monitoring, this study proposes an advanced approach to bridge monitoring through which certain types of data that could not be collected by conventional sensors, such as vehicle loads and locations, can be obtained practically and accurately.
-
Date Issued
-
2016
-
Identifier
-
CFE0006127, ucf:51174
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006127
-
-
Title
-
Holistic Representations for Activities and Crowd Behaviors.
-
Creator
-
Solmaz, Berkan, Shah, Mubarak, Da Vitoria Lobo, Niels, Jha, Sumit, Ilie, Marcel, Moore, Brian, University of Central Florida
-
Abstract / Description
-
In this dissertation, we address the problem of analyzing the activities of people in a variety of scenarios commonly encountered in vision applications. The overarching goal is to devise new representations for these activities, in settings where individuals or a number of people may take part in specific activities. Different types of activities can be performed either by an individual at the fine level or by several people constituting a crowd at the coarse level. We take into account domain-specific information for modeling these activities. A summary of the proposed solutions is presented in the following.
The holistic description of videos is appealing for visual detection and classification tasks for several reasons, including capturing the spatial relations between scene components, simplicity, and performance [1, 2, 3]. First, we present a holistic (global) frequency-spectrum-based descriptor for representing atomic actions performed by individuals, such as bench pressing, diving, hand waving, boxing, playing guitar, mixing, jumping, horse riding, and hula hooping. We model and learn these individual actions for classifying complex user-uploaded videos. Our method bypasses the detection of interest points, the extraction of local video descriptors and the quantization of local descriptors into a code book; it represents each video sequence as a single feature vector. This holistic feature vector is computed by applying a bank of 3-D spatio-temporal filters to the frequency spectrum of a video sequence; hence it integrates information about the motion and the scene structure.
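A rough sketch of how a single holistic feature vector can be derived from a clip's 3-D frequency spectrum is below. The actual spatio-temporal filter bank is not specified in this abstract, so the random smooth masks here merely stand in for it:

```python
import numpy as np

def holistic_descriptor(video, n_filters=8, seed=0):
    """One feature vector per clip from its 3-D frequency spectrum.
    The placeholder masks below stand in for the (unspecified)
    bank of 3-D spatio-temporal filters."""
    spectrum = np.abs(np.fft.fftn(video))     # 3-D frequency magnitude
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_filters):
        mask = rng.random(video.shape)        # placeholder filter mask
        feats.append(float((spectrum * mask).sum()))
    v = np.asarray(feats)
    return v / np.linalg.norm(v)              # unit-normalized descriptor

video = np.random.default_rng(1).random((16, 32, 32))   # T x H x W clip
f = holistic_descriptor(video)
```

The point is structural: no interest points, no local descriptors, no code book, just one fixed-length vector per clip.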
We tested our approach on two of the most challenging datasets, UCF50 [4] and HMDB51 [5], and obtained promising results which demonstrate the robustness and the discriminative power of our holistic video descriptor for classifying videos of various realistic actions.
In the above approach, a holistic feature vector of a video clip is acquired by dividing the video into spatio-temporal blocks and then concatenating the features of the individual blocks. However, such a holistic representation blindly incorporates all the video regions regardless of their contribution to classification. Next, we present an approach which improves the performance of holistic descriptors for activity recognition. In our novel method, we improve the holistic descriptors by discovering the discriminative video blocks. We measure the discriminativity of a block by examining its response to a pre-learned support vector machine model. In particular, a block is considered discriminative if it responds positively for positive training samples and negatively for negative training samples. We pose the problem of finding the optimal blocks as one of selecting a sparse set of blocks which maximizes the total classifier discriminativity. Through a detailed set of experiments on benchmark datasets [6, 7, 8, 9, 5, 10], we show that our method discovers the useful regions in the videos and eliminates the ones which are confusing for classification, resulting in significant performance improvement over the state of the art.
In contrast to scenes where an individual performs a primitive action, there may be scenes with several people, where crowd behaviors take place. For these types of scenes the traditional approaches to recognition will not work, due to severe occlusion and computational requirements. The number of such videos is limited and the scenes are complicated, hence learning these behaviors is not feasible.
For this problem, we present a novel approach, based on the optical flow in a video sequence, for identifying five specific and common crowd behaviors in visual scenes. In the algorithm, the scene is overlaid by a grid of particles, initializing a dynamical system which is derived from the optical flow. Numerical integration of the optical flow provides particle trajectories that represent the motion in the scene. Linearization of the dynamical system allows a simple and practical analysis and classification of the behavior through the Jacobian matrix. Essentially, the eigenvalues of this matrix are used to determine the dynamic stability of points in the flow, and each type of stability corresponds to one of the five crowd behaviors. The identified crowd behaviors are (1) bottlenecks, where many pedestrians/vehicles from various points in the scene are entering through one narrow passage; (2) fountainheads, where many pedestrians/vehicles are emerging from a narrow passage only to separate in many directions; (3) lanes, where many pedestrians/vehicles are moving at the same speed in the same direction; (4) arches or rings, where the collective motion is curved or circular; and (5) blocking, where there is an opposing motion and the desired movement of groups of pedestrians is somehow prohibited. The implementation requires identifying a region of interest in the scene and checking the eigenvalues of the Jacobian matrix in that region to determine the type of flow that corresponds to the various well-defined crowd behaviors. The eigenvalues are only considered in these regions of interest, consistent with the linear approximation and the implied behaviors. Since changes in eigenvalues can mean changes in stability, corresponding to changes in behavior, we can repeat the algorithm over clips of long video sequences to locate changes in behavior. This method was tested on real videos representing crowd and traffic scenes.
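The eigenvalue-to-behavior correspondence can be sketched for a 2x2 Jacobian of the linearized flow field as follows. The label names and the exact decision rules are illustrative, since the abstract only states that each stability type maps to one of the five behaviors:

```python
import cmath

def classify_flow(jacobian):
    """Classify local crowd flow from the 2x2 Jacobian of the
    linearized optical-flow field, via its eigenvalues."""
    (a, b), (c, d) = jacobian
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if abs(l1.imag) > 1e-9:                  # complex pair: rotation
        return "arch/ring"
    if l1.real < 0 and l2.real < 0:          # stable node: flow converges
        return "bottleneck"
    if l1.real > 0 and l2.real > 0:          # unstable node: flow diverges
        return "fountainhead"
    return "lane/blocking"                   # mixed or zero eigenvalues

label = classify_flow([[-1.0, 0.0], [0.0, -0.5]])  # both eigenvalues negative
```

A purely converging region (both eigenvalues negative) would be labeled a bottleneck; a purely rotational Jacobian such as [[0, -1], [1, 0]] yields a complex pair and an arch/ring label.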
-
Date Issued
-
2013
-
Identifier
-
CFE0004941, ucf:49638
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004941
-
-
Title
-
Mitigation of Motion Sickness Symptoms in 360° Indirect Vision Systems.
-
Creator
-
Quinn, Stephanie, Rinalducci, Edward, Hancock, Peter, Mouloua, Mustapha, French, Jonathan, Chen, Jessie, Kennedy, Robert, University of Central Florida
-
Abstract / Description
-
The present research attempted to use display design as a means to mitigate the occurrence and severity of symptoms of motion sickness and to increase performance by reducing "general effects" in an uncoupled motion environment. Specifically, several visual display manipulations of a 360° indirect vision system were implemented during a target detection task while participants were concurrently immersed in a motion simulator that mimicked off-road terrain which was completely separate from the target detection route. Results of a multiple regression analysis determined that the Dual Banners display incorporating an artificial horizon (i.e., AH Dual Banners) and perceived attentional control significantly contributed to the outcome of total severity of motion sickness, as measured by the Simulator Sickness Questionnaire (SSQ). Altogether, 33.6% (adjusted) of the variability in Total Severity was predicted by the variables used in the model. Objective measures were assessed prior to, during and after uncoupled motion. These tests involved performance while immersed in the environment (i.e., target detection and situation awareness), as well as postural stability and cognitive and visual assessment tests (i.e., Grammatical Reasoning and Manikin) both before and after immersion. Response time to Grammatical Reasoning actually decreased after uncoupled motion; however, this was the only significant difference among all the performance measures. Assessment of subjective workload (as measured by NASA-TLX) determined that participants in Dual Banners display conditions had a significantly lower level of perceived physical demand than those with Completely Separated display designs. Further, perceived temporal demand was lower for participants exposed to conditions incorporating an artificial horizon.
Subjective sickness (SSQ Total Severity, Nausea, Oculomotor and Disorientation) was evaluated using non-parametric tests, which confirmed that the AH Dual Banners display had significantly lower Total Severity scores than the Completely Separated display with no artificial horizon (i.e., NoAH Completely Separated). Oculomotor scores were also significantly different for these two conditions, with lower scores associated with AH Dual Banners. The NoAH Completely Separated condition also had marginally higher oculomotor scores when compared to the Completely Separated display incorporating the artificial horizon (AH Completely Separated). There were no significant differences in sickness symptoms or severity (measured by self-assessment, postural stability, and cognitive and visual tests) between display designs 30 and 60 minutes post-exposure. Further, the 30- and 60-minute post measures were not significantly different from baseline scores, suggesting that aftereffects were not present up to 60 minutes post-exposure. It was concluded that incorporating an artificial horizon onto the Dual Banners display will be beneficial in mitigating symptoms of motion sickness in manned ground vehicles using 360° indirect vision systems. Screening for perceived attentional control will also be advantageous in situations where selection is possible. However, caution must be taken in generalizing these results to missions with terrain or vehicle speeds different from those used in this study, as well as to those that include a longer immersion time.
-
Date Issued
-
2013
-
Identifier
-
CFE0005047, ucf:49972
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005047
-
-
Title
-
STRUCTURAL HEALTH MONITORING WITH EMPHASIS ON COMPUTER VISION, DAMAGE INDICES, AND STATISTICAL ANALYSIS.
-
Creator
-
ZAURIN, RICARDO, CATBAS, F. NECATI, University of Central Florida
-
Abstract / Description
-
Structural Health Monitoring (SHM) is the sensing and analysis of a structure to detect abnormal behavior, damage and deterioration during regular operations as well as under extreme loadings. SHM is designed to provide objective information for decision-making on safety and serviceability. This research focuses on the SHM of bridges by developing and integrating novel methods and techniques using sensor networks, computer vision, modeling for damage indices and statistical approaches. Effective use of traffic video synchronized with sensor measurements for decision-making is demonstrated. First, some of the computer vision methods and how they can be used for bridge monitoring are presented, along with the most common issues and some practical solutions. Second, a conceptual damage index (the Unit Influence Line) is formulated using synchronized computer images and sensor data for tracking the structural response under various load conditions. Third, a new index, Nd, is formulated and demonstrated to more effectively identify, localize and quantify damage. Commonly observed damage conditions on real bridges are simulated on a laboratory model for the demonstration of the computer vision method, the UIL and the new index. This new method and index, which are based on outlier detection from the UIL population, can very effectively handle large sets of monitoring data. The methods and techniques are demonstrated on the laboratory model for damage detection, and all damage scenarios are identified successfully. Finally, the application of the proposed methods to a real-life structure, which has a monitoring system, is presented. It is shown that these methods can be used efficiently for applications such as damage detection and load rating for decision-making. The results from this monitoring project on a movable bridge are presented along with conclusions and recommendations for future work.
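The Unit Influence Line idea, response normalized by load at each load position, combined with outlier detection over the UIL population, can be sketched as below. The z-score form of the outlier test and its threshold are assumptions for illustration; the abstract states only that outlier detection from the UIL population is used:

```python
import numpy as np

def unit_influence_line(responses, load):
    """UIL sketch: measured response at each load position,
    normalized by the (vision-estimated) load magnitude."""
    return np.asarray(responses, dtype=float) / load

def flag_damage(baseline_uils, new_uil, z_thresh=3.0):
    """Point-by-point outlier test of a new UIL against the
    population of UILs from the healthy structure."""
    mu = baseline_uils.mean(axis=0)
    sd = baseline_uils.std(axis=0) + 1e-12      # avoid division by zero
    return np.abs(new_uil - mu) / sd > z_thresh

# three healthy monitoring cycles, then one cycle with a local anomaly
baseline = np.array([[1.0, 2.0, 1.0],
                     [1.1, 2.1, 1.1],
                     [0.9, 1.9, 0.9]])
new = unit_influence_line([10.0, 30.0, 10.0], load=10.0)
flags = flag_damage(baseline, new)
```

Only the middle load position deviates from the healthy population, so only that position is flagged, which is the sense in which the index localizes as well as detects damage.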
-
Date Issued
-
2009
-
Identifier
-
CFE0002890, ucf:48039
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002890
-
-
Title
-
Learning Algorithms for Fat Quantification and Tumor Characterization.
-
Creator
-
Hussein, Sarfaraz, Bagci, Ulas, Shah, Mubarak, Heinrich, Mark, Pensky, Marianna, University of Central Florida
-
Abstract / Description
-
Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, in the first part of the dissertation we focus specifically on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body-region detection method trained with only a single example. Then a new fat quantification approach is proposed which is based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. In order to address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are proposed. We evaluate our proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas, with 1018 CT and 171 MRI scans respectively.
The proposed segmentation, quantification and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
-
Date Issued
-
2018
-
Identifier
-
CFE0007196, ucf:52288
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007196
-
-
Title
-
Concerning the Perceptive Gaze: The Impact of Vision Theories on Late Nineteenth-Century Victorian Literature.
-
Creator
-
Rushworth, Lindsay, Jones, Anna, Philpotts, Trey, Campbell, James, University of Central Florida
-
Abstract / Description
-
This thesis examines two specific interventions in vision theory: Herbert Spencer's theory of organic memory, which he developed by way of Lamarckian genetics and Darwinian evolution in A System of Synthetic Philosophy (1864), and the Aesthetic Movement (1870s-1890s), famously articulated by Walter Pater in The Renaissance: Studies in Art and Poetry (1873 and 1893). I explore the impact of these theories on late nineteenth-century fiction, focusing on two novels: Thomas Hardy's Two on a Tower (1882) and Edith Johnstone's A Sunless Heart (1894). These two authors' texts engage with scientific and aesthetic visual theories to demonstrate their anxieties concerning the perceptive gaze and to reveal the difficulties and limitations of visual perception and misperception for both the observer and the observed within the context of social class.
It is widely accepted by scholars of the so-called visual turn in the Victorian era, following landmark works by Kate Flint and Nancy Armstrong, that myriad anxieties were associated with new ways of seeing during this time. Building on this work, my thesis focuses specifically on how these two approaches to visual perception, organic memory and Aestheticism, were intertwined with anxieties about social status and mobility. The novels analyzed in this thesis demonstrate how subjective visual perception affects one's place within the social hierarchy, as we see reflected in the fluctuating social statuses of Hardy's star-crossed lovers, Swithin St Cleeve and Lady Constantine, and Johnstone's two female protagonists, Gasparine O'Neill and Lotus Grace.
-
Date Issued
-
2019
-
Identifier
-
CFE0007527, ucf:52624
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007527
-
-
Title
-
Load Estimation, Structural Identification and Human Comfort Assessment of Flexible Structures.
-
Creator
-
Celik, Ozan, Catbas, Necati, Yun, Hae-Bum, Makris, Nicos, Kauffman, Jeffrey L., University of Central Florida
-
Abstract / Description
-
Stadiums, pedestrian bridges, dance floors, and concert halls are distinct from other civil engineering structures due to several challenges in their design and dynamic behavior. These challenges originate from the inherently flexible nature of these structures coupled with human interactions in the form of loading. The investigations in the past literature on this topic clearly state that the design of flexible structures can be improved with better load modeling strategies acquired through reliable load quantification, a deeper understanding of structural response, the generation of simple and efficient human-structure interaction models, and new measurement and assessment criteria for acceptable vibration levels. In contribution to these possible improvements, this dissertation taps into three specific areas: the load quantification of lively individuals or crowds, structural identification under non-stationary and narrowband disturbances, and the measurement of excessive vibration levels for human comfort. For load quantification, a computer vision based approach capable of tracking both individual and crowd motion is used. For structural identification, a noise-assisted Multivariate Empirical Mode Decomposition (MEMD) algorithm is incorporated into the operational modal analysis. The measurement of excessive vibration levels and the assessment of human comfort are accomplished through computer vision based human and object tracking, which provides a more convenient means for measurement and computation. All the proposed methods are tested in the laboratory environment utilizing a grandstand simulator and in the field on a pedestrian bridge and a football stadium. Findings and interpretations from the experimental results are presented. The dissertation concludes by highlighting the critical findings and possible future work.
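One way vision-based tracking can feed a vibration-comfort assessment is to convert tracked pixel positions into displacement and then differentiate to acceleration. The pixel-to-millimeter scale and the use of plain finite differences are assumptions for illustration; the abstract does not describe the actual processing chain:

```python
import numpy as np

def comfort_metrics(pixel_y, mm_per_px, fps):
    """From vision-tracked vertical pixel positions to displacement (mm)
    and peak acceleration (mm/s^2), via finite differences."""
    disp = np.asarray(pixel_y, dtype=float) * mm_per_px   # mm
    dt = 1.0 / fps
    vel = np.gradient(disp, dt)                           # mm/s
    acc = np.gradient(vel, dt)                            # mm/s^2
    return disp, float(np.abs(acc).max())

# synthetic 2 Hz, 5 mm amplitude vibration tracked at 100 fps (1 px = 1 mm)
t = np.arange(0.0, 2.0, 0.01)
track = 5.0 * np.sin(2 * np.pi * 2.0 * t)
disp, peak_acc = comfort_metrics(track, mm_per_px=1.0, fps=100)
```

For a sinusoid of amplitude A and angular frequency w, the peak acceleration should be close to A*w^2, which gives a quick sanity check on the differencing; the peak would then be compared against a serviceability limit.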
-
Date Issued
-
2017
-
Identifier
-
CFE0006863, ucf:51752
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006863
-
-
Title
-
Understanding images and videos using context.
-
Creator
-
Vaca Castano, Gonzalo, Da Vitoria Lobo, Niels, Shah, Mubarak, Mikhael, Wasfy, Jones, W Linwood, Wiegand, Rudolf, University of Central Florida
-
Abstract / Description
-
In computer vision, context refers to any information that may influence how visual media are understood. Traditionally, researchers have studied the influence of several sources of context in relation to the object detection problem in images. In this dissertation, we present a multifaceted review of the problem of context. Context is analyzed as a source of improvement in the object detection problem, not only in images but also in videos. In the case of images, we also investigate the influence of semantic context, determined by objects, relationships, locations, and global composition, to achieve a general understanding of the image content as a whole. In our research, we also attempt to solve the related problem of finding the context associated with visual media. Given a set of visual elements (images), we want to extract the context that can be commonly associated with these images in order to remove ambiguity.
The first part of this dissertation concentrates on achieving image understanding using semantic context. In spite of recent success in tasks such as image classification, object detection, and image segmentation, and the progress on scene understanding, researchers still lack clarity about computer comprehension of the content of an image as a whole. Hence, we propose a Top-Down Visual Tree (TDVT) image representation that allows the content of the image to be encoded as a hierarchy of objects capturing their importance, co-occurrences, and types of relations. A novel Top-Down Tree LSTM network is presented to learn the image composition from the training images and their TDVT representations.
Given a test image, our algorithm detects objects and determines the hierarchical structure that they form, encoded as a TDVT representation of the image. A single image can have multiple interpretations that may lead to ambiguity about the intentionality of the image. What if, instead of having only a single image to be interpreted, we have multiple images that represent the same topic? The second part of this dissertation covers how to extract the context information shared by multiple images. We present a method to determine the topic that these images represent. We accomplish this task by transferring tags from an image retrieval database and by performing operations in the textual space of these tags. As an application, we also present a new image retrieval method that uses multiple images as input. Unlike earlier works that focus either on using just a single query image or on using multiple query images with views of the same instance, the new image search paradigm retrieves images based on the underlying concepts that the input images represent.
Finally, in the third part of this dissertation, we analyze the influence of context in videos. In this case, temporal context is utilized to improve scene identification and object detection. We focus on egocentric videos, where agents require some time to change from one location to another. Therefore, we propose a Conditional Random Field (CRF) formulation which penalizes short-term changes of the scene identity to improve scene identification accuracy. We also show how to improve the object detection outcome by re-scoring the results based on the scene identity of the tested frame. We present a Support Vector Regression (SVR) formulation for the case in which explicit knowledge of the scene identity is available during training. In the case that explicit scene labeling is not available, we propose an LSTM formulation that considers the general appearance of the frame to re-score the object detectors.
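The idea of penalizing short-term scene-identity changes can be approximated with a simple Viterbi-style dynamic program over per-frame scene scores plus a constant switching penalty. This is an illustrative stand-in for the CRF formulation, not the dissertation's exact model; the scene names and scores below are invented:

```python
def smooth_scene_ids(frame_scores, switch_penalty=1.0):
    """Best scene labeling of a frame sequence: per-frame scores,
    minus a penalty each time the scene identity changes.
    frame_scores: list of dicts {scene: score}."""
    scenes = list(frame_scores[0])
    best = {s: frame_scores[0][s] for s in scenes}
    back = []
    for scores in frame_scores[1:]:
        new_best, ptr = {}, {}
        for s in scenes:
            # stay in the previous scene for free, or pay to switch
            prev, val = max(
                ((p, best[p] - (switch_penalty if p != s else 0.0))
                 for p in scenes), key=lambda t: t[1])
            new_best[s] = scores[s] + val
            ptr[s] = prev
        best, back = new_best, back + [ptr]
    # trace back the highest-scoring labeling
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# a one-frame glitch in the middle is smoothed away by the penalty
scores = [{"hall": 2.0, "lab": 0.0},
          {"hall": 0.0, "lab": 1.0},
          {"hall": 2.0, "lab": 0.0}]
path = smooth_scene_ids(scores, switch_penalty=1.0)
```

With the penalty set to zero the per-frame maxima are returned unchanged, so the single-frame glitch survives; with a positive penalty the labeling stays in the dominant scene, which is the effect the CRF term is after.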
-
Date Issued
-
2017
-
Identifier
-
CFE0006922, ucf:51703
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006922
-
-
Title
-
A Historical Analysis of the Evolution of the Administrative and Organizational Structure of the University of Central Florida as it Relates to Growth.
-
Creator
-
Lindsley, Boyd, Murray, Barbara, Doherty, Walter, Murray, Kenneth, Dziuban, Charles, University of Central Florida
-
Abstract / Description
-
This was a qualitative historical study, recounted chronologically and organized around the terms of the four full-time presidents of the university. The review addressed the processes associated with the establishment and development of Florida Technological University, beginning in 1963, through its name change to the University of Central Florida in 1979, and concluding in 2013. The organization's mission, vision, and goals, how they evolved, and the impact they had on the university were of particular interest. The study focused on the administrative actions and organizational changes that took place within the university to assist faculty in teaching, research, and service, as well as the external conditions and events which impacted the university and shaped its development. The growth of the university, as well as the productivity of the faculty, was also of interest in the study.
-
Date Issued
-
2015
-
Identifier
-
CFE0005650, ucf:50187
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005650