Current Search: Kasparis, Takis
- Title
- Numerical, image, and signal processing algorithms applied to radar rainfall estimation.
- Creator
-
Lane, John Eugene, Kasparis, Takis, Engineering
- Abstract / Description
-
University of Central Florida College of Engineering Thesis; The main focus of this dissertation research has been to develop and analyze methods of rain gauge and radar correlation for the purpose of optimizing rainfall estimates.
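Radar rainfall estimation of this kind typically starts from a reflectivity-rain-rate (Z-R) power law; the classic Marshall-Palmer relation (Z = 200 R^1.6) is the standard baseline that gauge-radar correlation methods then adjust. The sketch below illustrates only that textbook baseline, not the dissertation's optimization method:

```python
# Hedged sketch: convert radar reflectivity (dBZ) to rain rate (mm/h)
# using the classic Marshall-Palmer Z-R relation Z = a * R**b.
# The coefficients a=200, b=1.6 are the textbook defaults, not values
# fitted by the dissertation.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b, with Z in linear units (mm^6/m^3) and R in mm/h."""
    z = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)
```

Gauge-radar optimization, in essence, replaces the fixed (a, b) with coefficients tuned so radar estimates match collocated rain gauge measurements.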
- Date Issued
- 2001
- Identifier
- CFR0000782, ucf:52926
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFR0000782
- Title
- ANALYSIS AND SIMULATION TOOLS FOR SOLAR ARRAY POWER SYSTEMS.
- Creator
-
Pongratananukul, Nattorn, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
This dissertation presents simulation tools developed specifically for the design of solar array power systems. Contributions are made in several aspects of the system design phases, including solar source modeling, system simulation, and controller verification. A tool to automate the study of solar array configurations using general-purpose circuit simulators has been developed based on the modeling of individual solar cells. The hierarchical structure of solar cell elements, including semiconductor properties, allows simulation of electrical properties as well as evaluation of the impact of environmental conditions. A second tool provides a co-simulation platform with the capability to verify the performance of an actual digital controller implemented in programmable hardware such as a DSP processor, while the entire solar array, including the DC-DC power converter, is modeled in software algorithms running on a computer. This "virtual plant" allows developing and debugging code for the digital controller, as well as improving the control algorithm. One important task in solar arrays is to track the maximum power point of the array in order to maximize the power that can be delivered. Digital controllers implemented with programmable processors are particularly attractive for this task because sophisticated tracking algorithms can be implemented and revised when needed to optimize their performance. The proposed co-simulation tools are thus very valuable in developing and optimizing the control algorithm before the system is built. Examples that demonstrate the effectiveness of the proposed methodologies are presented. The proposed simulation tools are also valuable in the design of multi-channel arrays. In the specific system that we have designed and tested, the control algorithm is implemented on a single digital signal processor, and in each channel the maximum power point is tracked individually.
In the prototype we built, off-the-shelf commercial DC-DC converters were utilized. Finally, the overall performance of the entire system was evaluated using solar array simulators capable of simulating various I-V characteristics, as well as an electronic load. Experimental results are presented.
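The maximum-power-point tracking task described above can be sketched with the standard perturb-and-observe rule (a common MPPT algorithm; the dissertation's own tracking algorithm is not specified here, and the quadratic power curve below is a toy stand-in for a real solar cell model):

```python
# Hedged sketch: perturb-and-observe MPPT on a toy PV power curve.
# Both the power curve and all parameter values are illustrative
# assumptions, not the dissertation's solar-cell model.

def pv_power(v):
    """Toy photovoltaic power curve with a single maximum at v = 17.0 V."""
    return max(0.0, 100.0 - (v - 17.0) ** 2)

def perturb_and_observe(v0, step=0.1, iterations=200):
    """Climb the P-V curve: keep perturbing in the direction that raised power."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v
```

In steady state the operating voltage oscillates within one step of the maximum power point, which is why a co-simulation platform is useful for tuning the step size before hardware deployment.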
- Date Issued
- 2005
- Identifier
- CFE0000331, ucf:46290
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000331
- Title
- HYBRID AND HIERARCHICAL IMAGE REGISTRATION TECHNIQUES.
- Creator
-
Xu, Dongjiang, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
A large number of image registration techniques have been developed for various types of sensors and applications, with the aim of improving accuracy, computational complexity, generality, and robustness. They can be broadly classified into two categories: intensity-based and feature-based methods. The primary drawback of intensity-based approaches is that they may fail unless the two images are misaligned by only a moderate difference in scale, rotation, and translation. In addition, intensity-based methods lack robustness in the presence of non-spatial distortions due to different imaging conditions between images. In this dissertation, image registration is formulated as a two-stage hybrid approach combining an initial matching and a final matching in a coarse-to-fine manner. In the proposed hybrid framework, the initial matching algorithm is applied at the coarsest scale of the images, where the approximate transformation parameters can first be estimated. Subsequently, a robust gradient-based estimation algorithm is incorporated into the proposed hybrid approach using a multi-resolution scheme. Several novel and effective initial matching algorithms are proposed for the first stage. The variations of the intensity characteristics between images may be large and non-uniform because of non-spatial distortions. Therefore, in order to effectively incorporate gradient-based robust estimation into the proposed framework, a fundamental question must be addressed: what is a good image representation to work with when using gradient-based robust estimation under non-spatial distortions? With the initial matching algorithms applied at the highest level of decomposition, the proposed hybrid approach exhibits a superior range of convergence. The gradient-based algorithms in the second stage yield a robust solution that precisely registers images with sub-pixel accuracy.
A hierarchical iterative search further enhances the convergence range and rate. Simulation results demonstrate that the proposed techniques significantly improve image registration performance.
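The coarse-to-fine strategy can be sketched as a pyramid loop: estimate the transformation at the coarsest level, then refine at each finer level starting from the scaled-up estimate. This is a structural sketch only; a brute-force integer-translation SSD search (with circular wraparound assumed) stands in for the dissertation's initial-matching and robust gradient-based estimators:

```python
import numpy as np

# Hedged sketch of coarse-to-fine (pyramid) translation estimation.
# A brute-force SSD search over circular shifts stands in for the
# dissertation's initial-matching and robust gradient-based stages;
# only integer translation is recovered here.

def downsample(img):
    """Halve resolution by 2x2 averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def ssd_search(ref, mov, center, radius):
    """Search shifts within `radius` of `center`, minimizing the SSD."""
    best, best_cost = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            cost = np.sum((ref - shifted) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def register_pyramid(ref, mov, levels=3, radius=2):
    """Estimate at the coarsest level, then refine the doubled estimate per level."""
    pyr = [(ref, mov)]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    est = (0, 0)
    for r, m in reversed(pyr):   # coarse -> fine
        est = ssd_search(r, m, (2 * est[0], 2 * est[1]), radius)
    return est
```

The key property this illustrates is the enlarged convergence range: a small search radius at each level covers a large displacement at full resolution.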
- Date Issued
- 2004
- Identifier
- CFE0000317, ucf:46294
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000317
- Title
- HEURISTIC 3D RECONSTRUCTION OF IRREGULAR SPACED LIDAR.
- Creator
-
Shorter, Nicholas, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
As more data sources have become abundantly available, an increased interest in 3D reconstruction has emerged in the image processing community. Applications for 3D reconstruction of urban and residential buildings include urban planning, network planning for mobile communication, tourism information systems, spatial analysis of air pollution and noise nuisance, microclimate investigations, and Geographical Information Systems (GISs). Previous, classical 3D reconstruction algorithms utilized aerial photography alone. With the advent of LIDAR systems, current algorithms explore using captured LIDAR data as an additional feasible source of information for 3D reconstruction. Preprocessing techniques are proposed for the development of an autonomous 3D reconstruction algorithm designed to derive three-dimensional models of urban and residential buildings from raw LIDAR data. First, a greedy-insertion triangulation algorithm, modified with a proposed noise-filtering technique, triangulates the raw LIDAR data. The normal vectors of those triangles are then passed to an unsupervised clustering algorithm, Fuzzy Simplified Adaptive Resonance Theory (Fuzzy SART), which returns a rough grouping of coplanar triangles. A proposed multiple-regression algorithm then refines the coplanar grouping by removing outliers and deriving an improved planar segmentation of the raw LIDAR data. Finally, further refinement is achieved by calculating the intersection of the best-fit roof planes and moving nearby points onto that intersection, resulting in straight roof ridges. The end result of these techniques is a well-defined model approximating the building depicted by the LIDAR data.
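The multiple-regression refinement step can be sketched as an ordinary least-squares plane fit followed by outlier rejection and a refit. The synthetic data, the 80% keep fraction, and the plane model z = a*x + b*y + c are illustrative assumptions, not the dissertation's exact procedure:

```python
import numpy as np

# Hedged sketch of the plane-refinement idea: fit z = a*x + b*y + c to a
# rough coplanar group by least squares, discard the worst-fitting points,
# and refit. The keep fraction is an illustrative assumption.

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c; returns (a, b, c)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def refine_plane(points, keep_fraction=0.8):
    """One refinement pass: refit after discarding the largest residuals."""
    a, b, c = fit_plane(points)
    residuals = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
    keep = residuals <= np.quantile(residuals, keep_fraction)
    return fit_plane(points[keep])
```

When the rough Fuzzy SART grouping contains a minority of off-plane points, one such pass recovers the underlying roof plane almost exactly.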
- Date Issued
- 2006
- Identifier
- CFE0001315, ucf:47017
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001315
- Title
- OCEANIC RAIN IDENTIFICATION USING MULTIFRACTAL ANALYSIS OF QUIKSCAT SIGMA-0.
- Creator
-
Torsekar, Vasud, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
The presence of rain over oceans interferes with the measurement of sea surface wind speed and direction from the SeaWinds scatterometer, and as a result wind measurements contain biases in rain regions. In past research at the Central Florida Remote Sensing Lab, it has been observed that rain has multifractal behavior. In this report we present an algorithm to detect the presence of rain so that rain regions can be flagged. The forward and aft views of the horizontal-polarization σ0 are used for the extraction of textural information with the help of multifractals. A single negated multifractal exponent is computed to discriminate between wind and rain. Pixels with an exponent value above a threshold are classified as rain pixels, and those that do not meet the threshold are further examined with the help of the correlation of the multifractal exponent within a predefined neighborhood of individual pixels. It was observed that rain has less correlation within a neighborhood than wind does; this property is utilized for reactivation of the pixels that fall below a certain threshold of correlation. An advantage of the algorithm is that it requires no training; that is, once a threshold is set, it needs no further adjustment. Validation results are presented through comparison with the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) 2A12 rain retrieval product for one whole day. The results show that the algorithm is efficient in suppressing non-rain (wind) pixels. Algorithm deficiencies in high-wind-speed regions are also discussed, and comparisons with other proposed approaches are presented.
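The two-stage decision rule can be sketched as follows: threshold the exponent map, then "reactivate" sub-threshold pixels whose neighborhood shows low correlation. The exponent map, both thresholds, and the lag-1 correlation used as the neighborhood statistic are all illustrative assumptions, not the QuikSCAT values or the thesis's exact correlation measure:

```python
import numpy as np

# Hedged sketch of the two-stage rain flag: threshold a multifractal-exponent
# map, then "reactivate" sub-threshold pixels whose neighborhood shows low
# lag-1 correlation (rain decorrelates faster than wind). Both thresholds
# and the correlation statistic are illustrative assumptions.

def lag1_correlation(window):
    """Correlation between a neighborhood and its one-pixel horizontal shift."""
    a, b = window[:, :-1].ravel(), window[:, 1:].ravel()
    if a.std() == 0 or b.std() == 0:
        return 1.0                      # flat patch: treat as fully correlated
    return float(np.corrcoef(a, b)[0, 1])

def flag_rain(exponent, exp_thresh=0.5, corr_thresh=0.3, half=2):
    """Stage 1: exponent threshold. Stage 2: low-correlation reactivation."""
    rain = exponent > exp_thresh
    h, w = exponent.shape
    for i in range(half, h - half):
        for j in range(half, w - half):
            if not rain[i, j]:
                win = exponent[i - half:i + half + 1, j - half:j + half + 1]
                if lag1_correlation(win) < corr_thresh:
                    rain[i, j] = True   # reactivate decorrelated pixel
    return rain
```

Smoothly varying (wind-like) regions keep high neighborhood correlation and stay unflagged, while rapidly fluctuating (rain-like) regions are reactivated even when their exponent is below the first threshold.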
- Date Issued
- 2005
- Identifier
- CFE0000671, ucf:46498
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000671
- Title
- FORECASTING THE ONSET OF CLOUD-GROUND LIGHTNING USING S-POL AND NLDN DATA.
- Creator
-
Ramakrishnan, Kartik, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
The maximum number of thunderstorms in the United States occurs in Central Florida. The cloud-ground lightning from these storms is responsible for extensive damage to life and property, and also for delays and cancellations of Space Shuttle launch attempts at the Kennedy Space Center (KSC) and of the 45th Space Wing's unmanned launches at the Cape Canaveral launch facilities. For these and other reasons, accurate forecasting of cloud-ground lightning is of crucial importance. The second phase of NASA's Tropical Rainfall Measuring Mission Texas and Florida Underflights project (TEFLUN-B) was conducted between August 1 and September 30, 1998. The S-band dual-polarization radar (S-Pol) belonging to the National Center for Atmospheric Research (NCAR) was part of the surface-based facilities during this project and was located at Melbourne, Florida. This provided an excellent opportunity to observe Florida thunderstorms with the help of a dual-polarization radar. This project aims at developing cloud-ground lightning forecasting signatures by analyzing S-Pol data for 10 thunderstorms that occurred over the Kennedy Space Center. Time-height trends of reflectivity, ice, and graupel-hail, as well as electric potential trends for these storms, are taken into consideration while developing the forecasting signatures. This thesis proposes that a 35 dBZ echo at the -5°C temperature level is the best indicator of imminent CG lightning, with a POD of 90%, an FAR of 10%, and a CSI of 81.8%. An electric field of approximately 1000 V/m also indicates the onset of cloud-ground lightning. An analysis of the microphysical structure of the thunderstorms reveals that the presence of graupel-hail at the -10°C temperature level is necessary for cloud-ground lightning to occur.
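The skill scores quoted above come from the standard 2x2 forecast-verification contingency table. The counts in the example below are chosen only to reproduce the quoted scores; they are not the thesis's actual storm tally:

```python
# Standard forecast-verification scores from the 2x2 contingency table:
# hits (forecast + observed), misses (observed only), false alarms
# (forecast only). Correct nulls do not enter these three scores.

def skill_scores(hits, misses, false_alarms):
    """Probability of Detection, False Alarm Ratio, Critical Success Index."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Illustrative counts reproducing the quoted values:
# POD = 0.90, FAR = 0.10, CSI = 9/11 = 0.818.
pod, far, csi = skill_scores(hits=9, misses=1, false_alarms=1)
```

Note that the quoted triplet is internally consistent: 9 hits, 1 miss, and 1 false alarm yield exactly POD 90%, FAR 10%, and CSI 81.8%.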
- Date Issued
- 2004
- Identifier
- CFE0000143, ucf:46168
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000143
- Title
- BACKGROUND STABILIZATION AND MOTION DETECTION IN LAUNCH PAD VIDEO MONITORING.
- Creator
-
Gopalan, Kaushik, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
Automatic detection of moving objects in video sequences is a widely researched topic with applications in surveillance operations. Methods based on background cancellation by frame differencing are extremely common; however, the process becomes much more complicated when the background is not completely stable due to camera motion. This thesis considers a space application in which surveillance cameras around a shuttle launch site are used to detect any debris from the shuttle. Ground shake from the impact of the launch causes the background to be shaky. We stabilize the background by translating each frame, with the optimum translation determined by minimizing the energy difference between consecutive frames. This process is optimized by using a sub-image instead of the whole frame; the sub-image is chosen by taking an edge-detection plot of the background and selecting the area with the greatest density of edges. The stabilized sequence is then processed by taking the difference between consecutive frames and marking areas of high intensity as the areas where motion is taking place. The residual noise from the background stabilization stage is filtered out by masking the areas where the background has edges, as these areas have the highest probability of false alarms due to background motion.
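The stabilization pipeline above can be sketched in two steps: pick the sub-image with the highest edge density, then search for the translation that minimizes the energy difference on that sub-image. The block size, search radius, and gradient-based edge measure below are illustrative assumptions, not the thesis's exact parameters:

```python
import numpy as np

# Hedged sketch of the stabilization pipeline: choose the block with the
# highest edge energy, then find the translation of the current frame that
# minimizes the energy difference on that block. Block size and search
# radius are illustrative assumptions.

def densest_edge_block(frame, block=16):
    """Return the top-left corner of the block with the most edge energy."""
    gy, gx = np.gradient(frame)
    edges = gx ** 2 + gy ** 2
    best, best_score = (0, 0), -1.0
    for i in range(0, frame.shape[0] - block + 1, block):
        for j in range(0, frame.shape[1] - block + 1, block):
            score = edges[i:i + block, j:j + block].sum()
            if score > best_score:
                best, best_score = (i, j), score
    return best

def stabilizing_shift(prev, curr, block=16, radius=4):
    """Translation of `curr` minimizing energy difference on the edge block."""
    i, j = densest_edge_block(prev, block)
    ref = prev[i:i + block, j:j + block]
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            patch = curr[i + dy:i + dy + block, j + dx:j + dx + block]
            if patch.shape != ref.shape:
                continue                      # shift runs off the frame
            cost = np.sum((ref - patch) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

Once the shift is known, the current frame is translated back by it and the aligned frames are differenced; restricting the search to one edge-dense block is what makes the per-frame cost manageable.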
- Date Issued
- 2005
- Identifier
- CFE0000801, ucf:46683
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000801
- Title
- DEBRIS TRACKING IN A SEMISTABLE BACKGROUND.
- Creator
-
Vanumamalai, KarthikKalathi, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
Object tracking plays a pivotal role in many computer vision applications, such as video surveillance, human gesture recognition, and object-based video compression standards such as MPEG-4. Automatic detection of a moving object and tracking of its motion have long been important topics in the computer vision and robotics fields. This thesis deals with the problem of detecting the presence of debris or other unexpected objects in footage obtained during spacecraft launches, which poses a challenge because of the non-stationary background. When the background is stationary, moving objects can be detected by frame differencing; therefore, the background must be stabilized before any moving object in the scene can be tracked. Two problems are considered, and in both, footage from a Space Shuttle launch is used with the objective of tracking any debris falling from the Shuttle. The proposed method registers two consecutive frames using FFT-based image registration, in which the transformation parameters (translation and rotation) are calculated automatically. This information is then passed to a Kalman filtering stage, which produces a mask image that is used to find high-intensity areas of potential interest.
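The FFT-based registration step can be sketched with phase correlation: the normalized cross-power spectrum of two frames has an inverse FFT that peaks at the inter-frame shift. This sketch recovers translation only; rotation handling (e.g., via a log-polar resampling) and the Kalman filtering stage are omitted:

```python
import numpy as np

# Hedged sketch of FFT-based (phase-correlation) translation estimation.
# Only integer, circular translation is recovered; rotation estimation
# and the subsequent Kalman filtering stage are not shown.

def phase_correlation_shift(f1, f2):
    """Translation (dy, dx) taking f1 to f2, assuming a circular shift."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis to negative shifts
    if dy > f1.shape[0] // 2:
        dy -= f1.shape[0]
    if dx > f1.shape[1] // 2:
        dx -= f1.shape[1]
    return dy, dx
```

Because the magnitude is normalized away, the correlation surface is a sharp delta at the true shift, which is what makes the method robust to global intensity changes between frames.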
- Date Issued
- 2005
- Identifier
- CFE0000886, ucf:46628
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000886
- Title
- VARIABLE RESOLUTION & DIMENSIONAL MAPPING FOR 3D MODEL OPTIMIZATION.
- Creator
-
Venezia, Joseph, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
Three-dimensional computer models, especially geospatial architectural data sets, can be visualized in the same way humans experience the world, providing a realistic, interactive experience. Scene familiarization, architectural analysis, scientific visualization, and many other applications would benefit from finely detailed, high-resolution 3D models. Automated methods to construct these 3D models have traditionally produced data sets that are often low-fidelity or inaccurate; otherwise, they are initially highly detailed but very labor- and time-intensive to construct. Such data sets are often not practical for common real-time usage and are not easily updated. This thesis proposes Variable Resolution & Dimensional Mapping (VRDM), a methodology developed to address some of the limitations of existing approaches to model construction from images. Key components of VRDM are texture palettes, which enable variable and ultra-high-resolution images to be easily composited, and texture features, which allow image features to be integrated as image or geometry and can modify the geometric model structure to add detail. These components support a primary VRDM objective of facilitating model refinement with additional data, which can continue until the desired fidelity is achieved, approaching the practical limits of infinite detail. Texture levels, the third component, enable real-time interaction with a very detailed model, along with the flexibility of having alternate pixel data for a given area of the model; this is achieved through extra dimensions. Together these techniques have been used to construct models that can contain gigabytes of imagery data.
- Date Issued
- 2009
- Identifier
- CFE0002837, ucf:48081
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002837
- Title
- UNSUPERVISED BUILDING DETECTION FROM IRREGULARLY SPACED LIDAR AND AERIAL IMAGERY.
- Creator
-
Shorter, Nicholas, Kasparis, Takis, University of Central Florida
- Abstract / Description
-
As more data sources containing 3-D information have become available, an increased interest in 3-D imaging has emerged, among which is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings that can subsequently be reconstructed in 3-D using various methodologies. Applications of both building detection and reconstruction include urban planning, network planning for mobile communication (cell phone tower placement), spatial analysis of air pollution and noise nuisances, microclimate investigations, geographical information systems, security services, and change detection in areas affected by natural disasters. Building detection and reconstruction are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms utilized aerial imagery alone. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore using captured LiDAR data as an additional feasible source of information. Additional sources of information can help automate techniques (alleviating the need for manual user intervention) as well as increase their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating that certain operations be carried out manually, and being limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented that strives to overcome the limitations seen in existing techniques. The developed framework is flexible in that it can perform building detection from LiDAR data alone (first or last return) or from nadir color aerial imagery alone.
If data from both LiDAR and aerial imagery are available, then the algorithm uses both for improved accuracy. Additionally, the proposed approach does not employ severely limiting assumptions, enabling the end user to apply it to a wider variety of building types. The proposed approach is extensively tested using real data sets and compared with other existing techniques. Experimental results are presented.
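A minimal stand-in for the LiDAR-only case is thresholding height above a ground estimate and keeping connected blobs above a minimum footprint. Real frameworks, including the one described above, do far more (vegetation rejection, imagery fusion, irregular point handling); the 2 m height cut and 4-cell footprint below are illustrative assumptions:

```python
import numpy as np

# Hedged, minimal stand-in for LiDAR-only building detection on a rasterized
# height grid: threshold height above a ground estimate, then keep
# 4-connected blobs above a minimum footprint. All thresholds are
# illustrative assumptions.

def detect_buildings(height_grid, ground_level=0.0, min_height=2.0, min_cells=4):
    """Label connected above-threshold cells; return a list of cell sets."""
    mask = height_grid - ground_level > min_height
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    h, w = mask.shape
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                stack, blob = [(si, sj)], set()
                seen[si, sj] = True
                while stack:                      # 4-connected flood fill
                    i, j = stack.pop()
                    blob.add((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                if len(blob) >= min_cells:
                    blobs.append(blob)
    return blobs
```

The footprint floor discards isolated tall returns (e.g., single trees), which is one of the simplest cues separating buildings from vegetation in elevation data.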
- Date Issued
- 2009
- Identifier
- CFE0002783, ucf:48125
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002783
- Title
- Practical Implementations of the Active Set Method for Support Vector Machine Training with Semi-definite Kernels.
- Creator
-
Sentelle, Christopher, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Kasparis, Takis, Stanley, Kenneth, Young, Cynthia, University of Central Florida
- Abstract / Description
-
The Support Vector Machine (SVM) is a popular binary classification model due to its superior generalization performance, relative ease of use, and applicability of kernel methods. SVM training entails solving an associated quadratic program (QP) that presents significant challenges in terms of speed and memory constraints for very large datasets; therefore, research on numerical optimization techniques tailored to SVM training is vast. Slow training times are especially of concern when one considers that re-training is often necessary at several values of the model's regularization parameter, C, as well as of associated kernel parameters. The active set method is suitable for solving the SVM problem and is in general ideal when the Hessian is dense and the solution is sparse, which is the case for the l1-loss SVM formulation. There has recently been renewed interest in the active set method as a technique for exploring the entire SVM regularization path, which has been shown to yield the SVM solution at all points along the regularization path (all values of C) in not much more time than it takes, on average, to perform training at a single value of C with traditional methods. Unfortunately, the majority of active set implementations used for SVM training require positive definite kernels, and those implementations that do allow semi-definite kernels tend to be complex and can exhibit instability or, worse, lack of convergence. This severely limits applicability, since it precludes the use of the linear kernel, can be an issue when duplicate data points exist, and does not allow the use of low-rank kernel approximations to improve tractability for large datasets. The difficulty, in the case of a semi-definite kernel, arises when a particular active set results in a singular KKT matrix (or the equality-constrained problem formed using the active set is semi-definite). Typically this is handled by explicitly detecting the rank of the KKT matrix.
Unfortunately, this adds significant complexity to the implementation, and if care is not taken, numerical instability or, worse, failure to converge can result. This research shows that the singular KKT system can be avoided altogether with simple modifications to the active set method. The result is a practical, easy-to-implement active set method that does not need to explicitly detect the rank of the KKT matrix nor modify factorization or solution methods based upon the rank. Methods are given for both conventional SVM training and computing the regularization path that are simple and numerically stable. First, an efficient revised simplex method is implemented for SVM training (SVM-RSQP) with semi-definite kernels; it is shown to out-perform competing active set implementations in terms of training time and to perform on par with state-of-the-art SVM training algorithms such as SMO and SVMLight. Next, a new regularization path-following algorithm for semi-definite kernels (Simple SVMPath) is shown to be orders of magnitude faster, more accurate, and significantly less complex than competing methods, and it does not require the use of external solvers. Theoretical analysis reveals new insights into the nature of path-following algorithms. Finally, a method is given for computing the approximate regularization path and approximate kernel path using the warm-start capability of the proposed revised simplex method (SVM-RSQP); it provides significant, orders-of-magnitude speed-ups relative to the traditional "grid search" in which re-training is performed at each parameter value. Surprisingly, it is also shown that even when the solution for the entire path is not desired, computing the approximate path can serve as a speed-up mechanism for obtaining the solution at a single value.
New insights are given concerning the limiting behaviors of the regularization and kernel paths, as well as the use of low-rank kernel approximations.
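The semi-definite situation the abstract describes is easy to exhibit: a linear-kernel Gram matrix on data with a duplicated point (or with more points than feature dimensions) is positive semi-definite but singular, which is exactly what breaks naive active-set KKT factorizations. The data below are an arbitrary illustration:

```python
import numpy as np

# Hedged illustration of the semi-definite case discussed above: a linear
# kernel on data containing a duplicated point yields a singular, positive
# semi-definite Gram matrix. The data points are arbitrary.

def linear_gram(X):
    """Linear-kernel Gram matrix K[i, j] = <x_i, x_j>."""
    return X @ X.T

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [1.0, 2.0]])   # third row duplicates the first point
K = linear_gram(X)

# All eigenvalues are >= 0 (positive semi-definite), but the matrix is
# rank-deficient (singular). An RBF kernel on distinct points would
# instead be strictly positive definite.
eigvals = np.linalg.eigvalsh(K)
```

This is why requiring positive definite kernels rules out the linear kernel, duplicated data, and low-rank kernel approximations, as the abstract notes.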
- Date Issued
- 2014
- Identifier
- CFE0005251, ucf:50600
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005251