Current Search: Atia, George
- Title
- Harnessing Spatial Intensity Fluctuations for Optical Imaging and Sensing.
- Creator
-
Akhlaghi Bouzan, Milad, Dogariu, Aristide, Saleh, Bahaa, Pang, Sean, Atia, George, University of Central Florida
- Abstract / Description
-
Properties of light such as amplitude and phase, temporal and spatial coherence, polarization, etc. are abundantly used for sensing and imaging. Regardless of the passive or active nature of the sensing method, optical intensity fluctuations are always present! While these fluctuations are usually regarded as noise, there are situations where one can harness the intensity fluctuations to enhance certain attributes of the sensing procedure. In this thesis, we developed different sensing methodologies that use statistical properties of optical fluctuations for gauging specific information. We examine this concept in the context of three different aspects of computational optical imaging and sensing. First, we study imposing specific statistical properties on the probing field to image or characterize certain properties of an object through a statistical analysis of the spatially integrated scattered intensity. This offers unique capabilities for imaging and sensing techniques operating in highly perturbed environments and low-light conditions. Next, we examine optical sensing in the presence of strong perturbations that preclude any controllable field modification. We demonstrate that inherent properties of diffused coherent fields and fluctuations of integrated intensity can be used to track objects hidden behind obscurants. Finally, we address situations where, due to coherent noise, image accuracy is severely degraded by intensity fluctuations. By taking advantage of the spatial coherence properties of optical fields, we show that this limitation can be effectively mitigated and that a significant improvement in the signal-to-noise ratio can be achieved even in a single-shot measurement. The findings included in this dissertation illustrate different circumstances where optical fluctuations can affect the efficacy of computational optical imaging and sensing. A broad range of applications, including biomedical imaging and remote sensing, could benefit from the approaches described in this dissertation to suppress, enhance, and exploit optical fluctuations.
- Date Issued
- 2017
- Identifier
- CFE0007274, ucf:52200
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007274
- Title
- Motor imagery classification using sparse representation of EEG signals.
- Creator
-
Saidi, Pouria, Atia, George, Vosoughi, Azadeh, Berman, Steven, University of Central Florida
- Abstract / Description
-
The human brain is unquestionably the most complex organ of the body as it controls and processes its movement and senses. A healthy brain is able to generate responses to the signals it receives, and transmit messages to the body. Some neural disorders can impair the communication between the brain and the body, preventing the transmission of these messages. Brain Computer Interfaces (BCIs) are devices that hold immense potential to assist patients with such disorders by analyzing brain signals, translating and classifying various brain responses, and relaying them to external devices and potentially back to the body. Classifying motor imagery brain signals, where the signals are obtained based on imagined movement of the limbs, is a major, yet very challenging, step in developing BCIs. Of primary importance is to use less data and computationally efficient algorithms to support real-time BCI. To this end, in this thesis we explore and develop algorithms that exploit the sparse characteristics of EEGs to classify these signals. Different feature vectors are extracted from EEG trials recorded by electrodes placed on the scalp. In this thesis, features from a small spatial region are approximated by a sparse linear combination of a few atoms from a multi-class dictionary constructed from the features of the EEG training signals for each class. This is used to classify the signals based on the pattern of their sparse representation using a minimum-residual decision rule. We first attempt to use all the available electrodes to verify the effectiveness of the proposed methods. To support real-time BCI, the electrodes are reduced to those near the sensorimotor cortex, which are believed to be crucial for motor preparation and imagination. In a second approach, we try to incorporate the effect of spatial correlation across the neighboring electrodes near the sensorimotor cortex. To this end, instead of considering one feature vector at a time, we use a collection of feature vectors simultaneously to find the joint sparse representation of these vectors. Although we were not able to see much improvement with respect to the first approach, we envision that such improvements could be achieved using more refined models that can be the subject of future work. The performance of the proposed approaches is evaluated using different features, including wavelet coefficients, energy of the signals in different frequency sub-bands, and also entropy of the signals. The results obtained from real data demonstrate that the combination of energy and entropy features enables efficient classification of motor imagery EEG trials related to hand and foot movements. This underscores the relevance of the energies and their distribution in different frequency sub-bands for classifying movement-specific EEG patterns, in agreement with the existence of different levels within the alpha band. The proposed approach is also shown to outperform the state-of-the-art algorithm that uses feature vectors obtained from energies of multiple spatial projections.
- Date Issued
- 2015
- Identifier
- CFE0005882, ucf:50884
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005882
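The minimum-residual decision rule described in the record above can be illustrated with a short sketch. This is not the thesis's implementation: the feature vectors are synthetic stand-ins for EEG features, and scikit-learn's OrthogonalMatchingPursuit is assumed as the sparse coder, since the abstract does not name a solver.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG feature vectors: two classes, 40 training
# trials each, 16-dimensional features (e.g., sub-band energies).
d, n_train = 16, 40
centers = rng.normal(size=(2, d))
train = {c: centers[c] + 0.3 * rng.normal(size=(n_train, d)) for c in (0, 1)}

# Multi-class dictionary: columns are normalized training feature vectors.
D = np.hstack([train[c].T for c in (0, 1)])
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1], n_train)

def classify(x, n_nonzero=5):
    """Sparse-code x over D, then apply the minimum-residual rule."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)
    coef = omp.coef_
    residuals = []
    for c in (0, 1):
        part = np.where(labels == c, coef, 0.0)   # keep class-c atoms only
        residuals.append(np.linalg.norm(x - D @ part))
    return int(np.argmin(residuals))              # class with smallest residual

test = centers[1] + 0.3 * rng.normal(size=d)
print("predicted class:", classify(test))        # expected: 1
```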
- Title
- COMPRESSIVE AND CODED CHANGE DETECTION: THEORY AND APPLICATION TO STRUCTURAL HEALTH MONITORING.
- Creator
-
Sarayanibafghi, Omid, Atia, George, Vosoughi, Azadeh, Rahnavard, Nazanin, University of Central Florida
- Abstract / Description
-
In traditional sparse recovery problems, the goal is to identify the support of compressible signals using a small number of measurements. In contrast, in this thesis the problem of identifying a sparse number of statistical changes in stochastic phenomena is considered when decision makers only have access to compressed measurements, i.e., each measurement is derived from a subset of features. Herein, we propose a new framework termed Compressed Change Detection. The main approach relies on integrating ideas from the theory of identifying codes with change-point detection in sequential analysis. If the stochastic properties of certain features change, then the changes can be detected by examining the covering set of an identifying code of measurements. In particular, given a large number N of features, the goal is to detect a small set of features that undergoes a statistical change using a small number of measurements. Sufficient conditions are derived for the probability of false alarm and isolation to approach zero in the asymptotic regime where N is large. As an application of compressed change detection, the problem of detecting a sparse number of damages in a structure for Structural Health Monitoring (SHM) is considered. Since only a small number of damage scenarios can occur simultaneously, change detection is applied to responses of pairs of sensors that form an identifying code over a learned damage-sensing graph. Generalizations of the proposed framework with multiple concurrent changes and for arbitrary graph topologies are presented.
- Date Issued
- 2016
- Identifier
- CFE0006387, ucf:51507
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006387
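A toy illustration of the covering-set idea in the record above: if each feature's set of alarmed measurements (its signature) is distinct, the alarmed pattern isolates the changed feature. The covering sets here are hypothetical, and the thesis pairs this with sequential change-point tests rather than the instantaneous alarms assumed below.

```python
# Toy illustration of change isolation with an identifying code: each
# measurement aggregates a subset of features, and the code is "identifying"
# when every feature produces a distinct set of alarmed measurements.
covers = {               # measurement -> features it aggregates (hypothetical)
    "m1": {1, 2},
    "m2": {2, 3},
    "m3": {1, 3, 4},
}

# Signature of a feature = set of measurements whose statistics shift
# when that feature changes.
signatures = {f: frozenset(m for m, s in covers.items() if f in s)
              for f in {1, 2, 3, 4}}
assert len(set(signatures.values())) == len(signatures)  # identifying code

def isolate(alarmed):
    """Map the alarmed-measurement pattern back to the changed feature."""
    return next(f for f, sig in signatures.items() if sig == frozenset(alarmed))

print(isolate({"m2", "m3"}))  # feature 3 changed
```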
- Title
- Impact of wireless channel uncertainty upon M-ary distributed detection systems.
- Creator
-
Hajibabaei Najafabadi, Zahra, Vosoughi, Azadeh, Rahnavard, Nazanin, Atia, George, University of Central Florida
- Abstract / Description
-
We consider a wireless sensor network (WSN), consisting of several sensors and a fusion center (FC), which is tasked with solving an M-ary hypothesis testing problem. Sensors make M-ary decisions and transmit their digitally modulated decisions over orthogonal channels, which are subject to Rayleigh fading and noise, to the FC. Adopting the Bayesian optimality criterion, we consider training and non-training based distributed detection systems and investigate the effect of imperfect channel state information (CSI) on the optimal maximum a posteriori probability (MAP) fusion rules and detection performance, when the sum of training and data symbol transmit powers is fixed. Our results show that for the Rayleigh fading channel, when sensors employ M-FSK or binary FSK (BFSK) modulation, the error probability is minimized when the training symbol transmit power is zero (regardless of the reception mode at the FC). However, for coherent reception with M-PSK and binary PSK (BPSK) modulation, the error probability is minimized when half of the transmit power is allocated to the training symbol. If the channel is Rician fading, regardless of the modulation, the error probability is minimized when the training transmit power is zero.
- Date Issued
- 2016
- Identifier
- CFE0006111, ucf:51209
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006111
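A simplified Monte Carlo sketch of the setup above, restricted to a binary hypothesis, BPSK over Rayleigh fading, perfect CSI, and coherent reception; the thesis's M-ary, imperfect-CSI analysis and training-power trade-off are beyond this toy, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
K, trials, snr, eps = 5, 20000, 2.0, 0.1   # sensors, runs, per-sensor SNR, local error rate

# Each sensor decides correctly w.p. 1-eps and sends a BPSK symbol over its
# own Rayleigh channel; the FC fuses with a MAP rule assuming perfect CSI.
H = rng.integers(0, 2, trials)                        # true hypothesis
correct = rng.random((trials, K)) > eps
dec = np.where(correct, H[:, None], 1 - H[:, None])   # local decisions
s = 2.0 * dec - 1.0                                   # BPSK mapping

h = (rng.normal(size=(trials, K)) + 1j * rng.normal(size=(trials, K))) / np.sqrt(2)
n = (rng.normal(size=(trials, K)) + 1j * rng.normal(size=(trials, K))) / np.sqrt(2)
y = h * np.sqrt(snr) * s + n

# With perfect CSI, matched filtering gives the per-sensor LLR of the sent
# bit; the local error rate eps discounts it before fusion.
llr_bit = np.clip(4.0 * np.sqrt(snr) * np.real(np.conj(h) * y), -30, 30)
p1 = 1.0 / (1.0 + np.exp(-llr_bit))                   # P(sent bit = 1 | y)
p_H1 = p1 * (1 - eps) + (1 - p1) * eps                # P(H = 1 | y_k)
llr_H = np.log(p_H1) - np.log1p(-p_H1)
H_hat = (llr_H.sum(axis=1) > 0).astype(int)           # equiprobable MAP decision

print("fusion error rate:", np.mean(H_hat != H))
```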
- Title
- Resource allocation and load-shedding policies based on Markov decision processes for renewable energy generation and storage.
- Creator
-
Jimenez, Edwards, Atia, George, Richie, Samuel, Pazour, Jennifer, University of Central Florida
- Abstract / Description
-
In modern power systems, renewable energy has become an increasingly popular form of energy generation as a result of the rules and regulations being implemented toward achieving clean energy worldwide. However, clean energy can have drawbacks in several forms. Wind energy, for example, can introduce intermittency. In this thesis, we discuss a method to deal with this intermittency. In particular, by shedding a specific amount of load we can avoid a total system breakdown of the power plant. The load-shedding method discussed in this thesis utilizes a Markov Decision Process with backward policy iteration. This is a probabilistic method that chooses the load-shedding path that minimizes the expected total cost to ensure no power failure. We compare our results with two control policies, a load-balancing policy and a less-load-shedding policy. It is shown that the proposed MDP policy outperforms the other control policies and achieves the minimum total expected cost.
- Date Issued
- 2015
- Identifier
- CFE0005635, ucf:50222
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005635
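A minimal finite-horizon Markov decision process solved by backward induction, in the spirit of the backward policy iteration mentioned above. The states, transition probabilities, and cost terms are hypothetical placeholders, not the thesis's model.

```python
import numpy as np

# States: stored-energy levels 0..4; actions: shed 0, 1, or 2 units of load.
S, A, T = 5, 3, 24                      # states, actions, horizon (hours)
rng = np.random.default_rng(2)

# P[a, s, s']: wind intermittency makes transitions stochastic (toy values).
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)

def cost(s, a):
    # Shedding load costs a; an empty store (s == 0) risks blackout.
    return a + (10.0 if s == 0 else 0.0)

V = np.zeros(S)                         # terminal cost
policy = np.zeros((T, S), dtype=int)
for t in reversed(range(T)):            # backward induction over the horizon
    Q = np.array([[cost(s, a) + P[a, s] @ V for a in range(A)]
                  for s in range(S)])
    policy[t] = Q.argmin(axis=1)        # best shedding action per state
    V = Q.min(axis=1)

print("expected total cost from full store:", round(V[-1], 2))
print("hour-0 policy:", policy[0])
```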
- Title
- Fast Compressed Automatic Target Recognition for a Compressive Infrared Imager.
- Creator
-
Millikan, Brian, Foroosh, Hassan, Rahnavard, Nazanin, Muise, Robert, Atia, George, Mahalanobis, Abhijit, Sun, Qiyu, University of Central Florida
- Abstract / Description
-
Many military systems utilize infrared sensors which allow an operator to see targets at night. Several of these are either mid-wave or long-wave high-resolution infrared sensors, which are expensive to manufacture. But compressive sensing, which has primarily been demonstrated in medical applications, can be used to minimize the number of measurements needed to represent a high-resolution image. Using these techniques, a relatively low-cost mid-wave infrared sensor can be realized which has a high effective resolution. In traditional military infrared sensing applications, like targeting systems, automatic target recognition algorithms are employed to locate and identify targets of interest to reduce the burden on the operator. The resolution of the sensor can increase the accuracy and operational range of a targeting system. When using a compressive sensing infrared sensor, traditional decompression techniques can be applied to form a spatial-domain infrared image, but most are iterative and not ideal for real-time environments. A more efficient method is to adapt the target recognition algorithms to operate directly on the compressed samples. In this work, we present a target recognition algorithm which utilizes a compressed target detection method to identify potential target areas and then a specialized target recognition technique that operates directly on the same compressed samples. We demonstrate our method on the U.S. Army Night Vision and Electronic Sensors Directorate ATR Algorithm Development Image Database, which has been made available by the Sensing Information Analysis Center.
- Date Issued
- 2018
- Identifier
- CFE0007408, ucf:52739
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007408
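A sketch of detection performed directly on compressed samples, as the record above advocates: correlating the measurements with a compressed template rather than reconstructing the image first. The random Gaussian sensing matrix and rectangular target are stand-ins; the thesis's detector for the compressive IR imager is more specialized.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 1024, 128                       # scene pixels, compressive measurements

# Random sensing matrix stands in for the compressive IR imager.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

target = np.zeros(n); target[400:432] = 1.0          # hypothetical template
scene = 0.1 * rng.normal(size=n)
scene[400:432] += 1.0                                # target present

y = Phi @ scene                                      # compressed samples

# Correlate in the compressed domain: random projections approximately
# preserve inner products, so no image reconstruction is needed.
score_present = y @ (Phi @ target)
print("detection score:", round(score_present, 2))   # near ||target||^2 = 32

y_clutter = Phi @ (0.1 * rng.normal(size=n))         # target absent
print("clutter score:", round(y_clutter @ (Phi @ target), 2))  # near 0
```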
- Title
- Visual-Textual Video Synopsis Generation.
- Creator
-
Sharghi Karganroodi, Aidean, Shah, Mubarak, Da Vitoria Lobo, Niels, Rahnavard, Nazanin, Atia, George, University of Central Florida
- Abstract / Description
-
In this dissertation we tackle the problem of automatic video summarization. Automatic summarization techniques enable faster browsing and indexing of large video databases. However, due to the inherent subjectivity of the task, no single video summarizer fits all users unless it adapts to individual users' needs. To address this issue, we introduce a fresh view on the task called "Query-focused" extractive video summarization. We develop a supervised model that takes as input a video and a user's preference in the form of a query, and creates a summary video by selecting key shots from the original video. We model the problem as subset selection via a determinantal point process (DPP), a stochastic point process that assigns a probability value to each subset of any given set. Next, we develop a second model that exploits the capabilities of memory networks in the framework and concomitantly reduces the level of supervision required to train the model. To automatically evaluate system summaries, we contend that a good metric for video summarization should focus on the semantic information that humans can perceive rather than the visual features or temporal overlaps. To this end, we collect dense per-video-shot concept annotations, compile a new dataset, and suggest an efficient evaluation method defined upon the concept annotations. To enable better summarization of videos, we improve the sequential DPP in two respects. In terms of learning, we propose a large-margin algorithm to address the exposure bias that is common in many sequence-to-sequence learning methods. In terms of modeling, we integrate a new probabilistic distribution into SeqDPP; the resulting model accepts user input about the expected length of the summary. We conclude this dissertation by developing a framework to generate a textual synopsis for a video, thus enabling users to quickly browse a large video database without watching the videos.
- Date Issued
- 2019
- Identifier
- CFE0007862, ucf:52756
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007862
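The subset-selection model named above assigns each candidate summary S a probability proportional to det(L_S) under an L-ensemble DPP; similar shots repel each other, which is what drives diversity. A four-shot toy kernel (values made up) makes this concrete:

```python
import numpy as np
from itertools import combinations

# An L-ensemble DPP over 4 video shots: P(S) is proportional to det(L_S).
# Shots 0 and 1 are near-duplicates (high off-diagonal similarity), so they
# are unlikely to be selected together.
L = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.1, 0.0],
              [0.1, 0.1, 1.0, 0.2],
              [0.0, 0.0, 0.2, 1.0]])

Z = np.linalg.det(L + np.eye(4))       # normalizer: sum of det(L_S) over all S
for S in combinations(range(4), 2):
    idx = np.ix_(S, S)
    print(S, "P =", round(np.linalg.det(L[idx]) / Z, 4))
# P({0, 1}) comes out smallest: the DPP avoids redundant pairs.
```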
- Title
- Robust, Scalable, and Provable Approaches to High Dimensional Unsupervised Learning.
- Creator
-
Rahmani, Mostafa, Atia, George, Vosoughi, Azadeh, Mikhael, Wasfy, Nashed, M, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
This doctoral thesis focuses on three popular unsupervised learning problems: subspace clustering, robust PCA, and column sampling. For the subspace clustering problem, a new transformative idea is presented. The proposed approach, termed Innovation Pursuit, is a new geometrical solution to the subspace clustering problem whereby subspaces are identified based on their relative novelties. A detailed mathematical analysis is provided establishing sufficient conditions for the proposed method to correctly cluster the data points. Numerical simulations with both real and synthetic data demonstrate that Innovation Pursuit notably outperforms the state-of-the-art subspace clustering algorithms. For the robust PCA problem, we focus on both the outlier detection and the matrix decomposition problems. For the outlier detection problem, we present a new algorithm, termed Coherence Pursuit, in addition to two scalable randomized frameworks for the implementation of outlier detection algorithms. The Coherence Pursuit method is the first provable and non-iterative robust PCA method which is provably robust to both unstructured and structured outliers. Coherence Pursuit is remarkably simple and notably outperforms the existing methods in dealing with structured outliers. In the proposed randomized designs, we leverage the low-dimensional structure of the low-rank component to apply the robust PCA algorithm to a random sketch of the data as opposed to the full-scale data. Importantly, it is analytically shown that the presented randomized designs can make the computation or sample complexity of the low-rank matrix recovery algorithm independent of the size of the data. Finally, we focus on the column sampling problem. A new sampling tool, dubbed Spatial Random Sampling, is presented which performs the random sampling in the spatial domain. The most compelling feature of Spatial Random Sampling is that it is the first unsupervised column sampling method which preserves the spatial distribution of the data.
- Date Issued
- 2018
- Identifier
- CFE0007083, ucf:52010
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007083
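A minimal sketch of the Coherence Pursuit scoring step described above, under the usual model of inliers spanning a low-dimensional subspace: columns are scored by the l1 norm of their correlations with the rest, and outliers receive the lowest scores. Normalization details in the actual algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
D, r, n_in, n_out = 50, 3, 100, 10

# Inliers live in an r-dimensional subspace; outliers are random vectors.
U = np.linalg.qr(rng.normal(size=(D, r)))[0]
X = np.hstack([U @ rng.normal(size=(r, n_in)), rng.normal(size=(D, n_out))])
X /= np.linalg.norm(X, axis=0)                     # normalize columns

# Coherence scoring: inliers are mutually coherent (they share a subspace),
# so the l1 norm of each column's correlations with the others is large.
G = X.T @ X
np.fill_diagonal(G, 0.0)
scores = np.abs(G).sum(axis=1)

est_outliers = np.argsort(scores)[:n_out]          # lowest-coherence columns
print(sorted(est_outliers.tolist()))               # expect indices 100..109
```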
- Title
- Reliability and Robustness Enhancement of Cooperative Vehicular Systems: A Bayesian Machine Learning Perspective.
- Creator
-
Nourkhiz Mahjoub, Hossein, Pourmohammadi Fallah, Yaser, Vosoughi, Azadeh, Yuksel, Murat, Atia, George, Eluru, Naveen, University of Central Florida
- Abstract / Description
-
Autonomous vehicles are expected to greatly transform the transportation domain in the near future. Some even envision that human drivers may be fully replaced by automated systems. It is plausible to assume that at least a significant part of the driving task will be done by automated systems in the not-too-distant future. Although we are observing a rapid advance towards this goal, which gradually pushes traditional human-based driving toward more advanced autonomy levels, the full autonomy concept still has a long way to go before being completely fulfilled and realized, due to numerous technical and societal challenges. During this long transition phase, blended driving scenarios, composed of agents with different levels of autonomy, seem inevitable. Therefore, it is critical to design appropriate driving systems with different levels of intelligence in order to benefit all participants. Vehicular safety systems and their more advanced successors, i.e., Cooperative Vehicular Systems (CVS), have originated from this perspective. These systems aim to enhance the overall quality and performance of the current driving situation by incorporating the most advanced available technologies, ranging from on-board sensors such as radars, LiDARs, and cameras to other promising solutions, e.g., Vehicle-to-Everything (V2X) communications. However, it is still challenging to attain the ideal anticipated benefits of cooperative vehicular systems, due to the inherent issues and challenges of their different components, such as sensor failures in severe weather conditions or the poor performance of V2X technologies under dense communication channel loads. In this research we aim to address some of these challenges from a Bayesian machine-learning perspective, by proposing several novel ideas and solutions which facilitate the realization of more robust, reliable, and agile cooperative vehicular systems. More precisely, our contribution here is two-fold. On the one hand, we have investigated the notion of Model-Based Communications (MBC) and demonstrated its effectiveness for V2X communication performance enhancement. This improvement is achieved due to the more intelligent communication strategy of MBC in comparison with the current state-of-the-art V2X technologies. Essentially, MBC proposes a conceptual change in the nature of the information disseminated and shared over the communication channel compared to current technologies. In the MBC framework, instead of sharing raw dynamic information among the network agents, each agent shares the parameters of a stochastic forecasting model which represents its current and future behavior, and updates these parameters as needed. This model-sharing strategy enables the receivers to precisely predict the future behaviors of the transmitter even when the update frequency is very low. On the other hand, we have also proposed receiver-side solutions to enhance CVS performance and reliability and mitigate the issues caused by imperfect communication and detection processes. The core concept for these solutions is incorporating other informative elements in the system to compensate for the information lost during the imperfect communication or detection phases. For proof of concept, we have designed an adaptive FCW framework which considers the driver's feedback to the CVS system. This adaptive framework mitigates the negative impact of imperfectly received or detected information on system performance, using the inherent information of this feedback. The effectiveness and superiority of this adaptive framework over the traditional design have been demonstrated in this research.
- Date Issued
- 2019
- Identifier
- CFE0007845, ucf:52807
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007845
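A toy simulation of the model-based communications idea above: the transmitter shares the parameters of a constant-velocity forecasting model and refreshes them only when its own prediction error exceeds a bound, so the receiver can extrapolate between sparse updates. The vehicle dynamics, threshold, and model class are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T, thresh = 0.1, 300, 0.5          # step (s), steps, error threshold (m)

# Transmitter: vehicle with slowly varying speed. The last transmitted
# model is (position, velocity, step index); the receiver extrapolates it.
pos, vel = 0.0, 15.0
model = (pos, vel, 0)
updates = 0
for k in range(T):
    vel += 0.05 * rng.normal()         # mild dynamics
    pos += vel * dt
    p0, v0, k0 = model
    predicted = p0 + v0 * (k - k0) * dt
    if abs(predicted - pos) > thresh:  # model no longer describes reality
        model = (pos, vel, k)          # transmit fresh model parameters
        updates += 1

print(f"{updates} model updates instead of {T} raw broadcasts")
```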
- Title
- Relating First-person and Third-person Vision.
- Creator
-
Ardeshir Behrostaghi, Shervin, Borji, Ali, Shah, Mubarak, Hu, Haiyan, Atia, George, University of Central Florida
- Abstract / Description
-
Thanks to the availability and increasing popularity of wearable devices such as GoPro cameras, smart phones, and glasses, we have access to a plethora of videos captured from the first-person (egocentric) perspective. Capturing the world from the perspective of oneself, egocentric videos bear characteristics distinct from the more traditional third-person (exocentric) videos. In many computer vision tasks (e.g. identification, action recognition, face recognition, pose estimation, etc.), the human actors are the main focus. Hence, detecting, localizing, and recognizing the human actor is often incorporated as a vital component. In an egocentric video, however, the person behind the camera is often the person of interest. This changes the nature of the task at hand, given that the camera holder is usually not visible in the content of his/her egocentric video. In other words, our knowledge about the visual appearance, pose, etc. of the egocentric camera holder is very limited, suggesting reliance on other cues in first-person videos. First-person and third-person videos have been studied separately in the computer vision community. However, the relationship between first-person and third-person vision has yet to be fully explored. Relating these two views systematically could potentially benefit many computer vision tasks and applications. This thesis studies this relationship in several aspects. We explore supervised and unsupervised approaches for relating these two views, seeking different objectives such as identification, temporal alignment, and action classification. We believe that this exploration could lead to a better understanding of the relationship between these two drastically different sources of information.
- Date Issued
- 2018
- Identifier
- CFE0007151, ucf:52322
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007151
- Title
- Learning Kernel-based Approximate Isometries.
- Creator
-
Sedghi, Mahlagha, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Atia, George, Liu, Fei, University of Central Florida
- Abstract / Description
-
The increasing availability of public datasets offers an unprecedented opportunity to conduct data-driven studies. Metric Multi-Dimensional Scaling aims to find a low-dimensional embedding of the data, preserving the pairwise dissimilarities amongst the data points in the original space. Along with visualizability, this dimensionality reduction plays a pivotal role in analyzing and disclosing the hidden structures in the data. This work introduces a Sparse Kernel-based Least Squares Multi-Dimensional Scaling approach for exploratory data analysis and, when desirable, data visualization. We assume our embedding map belongs to a Reproducing Kernel Hilbert Space of vector-valued functions, which allows for embeddings of previously unseen data. Also, given appropriate positive-definite kernel functions, it extends the applicability of our method to non-numerical data. Furthermore, the framework employs Multiple Kernel Learning for implicitly identifying an effective feature map and, hence, kernel function. Finally, via the use of sparsity-promoting regularizers, the technique is capable of embedding data on a, typically, lower-dimensional manifold by naturally inferring the embedding dimension from the data itself. In the process, key training samples are identified, whose participation in the embedding map's kernel expansion is most influential. As we show, such influence may be given interesting interpretations in the context of the data at hand. The resulting non-convex, multiple-kernel learning framework can be effectively trained via a block coordinate descent approach, which alternates between an accelerated proximal average method-based iterative majorization for learning the kernel expansion coefficients and a simple quadratic program, which deduces the multiple-kernel learning coefficients. Experimental results showcase potential uses of the proposed framework on artificial data as well as real-world datasets, and underline the merits of our embedding framework. Our method discovers genuine hidden structure in the data that, in the case of network data, matches the results of the well-known Multi-level Modularity Optimization community structure detection algorithm.
- Date Issued
- 2017
- Identifier
- CFE0007132, ucf:52315
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007132
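For orientation, plain metric MDS on precomputed dissimilarities (via scikit-learn) shows the baseline objective that the kernel-based, sparsity-regularized framework above generalizes; none of the RKHS or multiple-kernel machinery appears in this sketch.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 10))                     # high-dimensional data

# Metric MDS on pairwise Euclidean dissimilarities: find a 2-D embedding
# whose pairwise distances match D as closely as possible (stress).
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
Y = mds.fit_transform(D)
print("embedding shape:", Y.shape, "stress:", round(mds.stress_, 2))
```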
- Title
- Effect of Nonclassical Optical Turbulence on a Propagating Laser Beam.
- Creator
-
Beason, Melissa, Phillips, Ronald, Atia, George, Richardson, Martin, Andrews, Larry, Shivamoggi, Bhimsen, University of Central Florida
- Abstract / Description
-
Theory developed for the propagation of a laser beam through optical turbulence generally assumes that the turbulence is both homogeneous and isotropic and that the associated spectrum follows the classical Kolmogorov spectral power law of κ^(-11/3). If the atmosphere deviates from these assumptions, beam statistics such as mean intensity, correlation, and scintillation index can vary significantly from mathematical predictions. This work considers the effect of nonclassical turbulence on a propagated beam, namely, anisotropy of the turbulence and a power law that deviates from κ^(-11/3). A mathematical model is developed for the scintillation index of a Gaussian beam propagated through nonclassical turbulence, and theory is extended for the covariance function of intensity of a plane wave propagated through nonclassical turbulence. Multiple experiments over a concrete runway and a grass range verify the presence of turbulence which varies between isotropy and anisotropy. Data are taken throughout the day and the evolution of optical turbulence is considered. Also, irradiance fluctuation data taken in May 2018 over a concrete runway and in July 2018 over a grass range indicate an additional beam-shaping effect. A simple mathematical model was formulated which reproduced the measured behavior of contours of equal mean intensity and scintillation index.
- Date Issued
- 2018
- Identifier
- CFE0007310, ucf:52646
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007310
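The central statistic above, the scintillation index, is the normalized variance of irradiance, SI = ⟨I²⟩/⟨I⟩² − 1. A quick check on synthetic lognormal irradiance samples (a common weak-turbulence model, assumed here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Scintillation index from irradiance samples: SI = <I^2>/<I>^2 - 1.
# Lognormal fluctuations stand in for detector data in weak turbulence.
sigma_lnI = 0.4
I = rng.lognormal(mean=0.0, sigma=sigma_lnI, size=100_000)

si = np.mean(I**2) / np.mean(I)**2 - 1.0
print("measured SI:", round(si, 3))
# For lognormal irradiance, theory gives SI = exp(sigma^2) - 1.
print("lognormal theory:", round(np.exp(sigma_lnI**2) - 1, 3))
```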
- Title
- Game-Theoretic Frameworks and Strategies for Defense Against Network Jamming and Collocation Attacks.
- Creator
-
Hemida, Ahmed, Atia, George, Simaan, Marwan, Vosoughi, Azadeh, Sukthankar, Gita, Guirguis, Mina, University of Central Florida
- Abstract / Description
-
Modern networks are becoming increasingly more complex, heterogeneous, and densely connected. While more diverse services are enabled to an ever-increasing number of users through ubiquitous networking and pervasive computing, several important challenges have emerged. For example, densely connected networks are prone to higher levels of interference, which makes them more vulnerable to jamming attacks. Also, the utilization of software-based protocols to perform routing, load balancing, and power management functions in Software-Defined Networks gives rise to more vulnerabilities that could be exploited by malicious users and adversaries. Moreover, the increased reliance on cloud computing services due to a growing demand for communication and computation resources poses formidable security challenges due to the shared nature and virtualization of cloud computing. In this thesis, we study two types of attacks: jamming attacks on wireless networks and side-channel attacks on cloud computing servers. The former attacks disrupt the natural network operation by exploiting the static topology and dynamic channel assignment in wireless networks, while the latter attacks seek to gain access to unauthorized data by co-residing with target virtual machines (VMs) on the same physical node in a cloud server. In both attacks, the adversary faces a static attack surface and achieves her illegitimate goal by exploiting a stationary aspect of the network functionality. Hence, this dissertation proposes and develops counter-approaches to both attacks using moving target defense strategies. We study the strategic interactions between the adversary and the network administrator within a game-theoretic framework. First, in the context of jamming attacks, we present and analyze a game-theoretic formulation between the adversary and the network defender. In this problem, the attack surface is the network connectivity (the static topology), as the adversary jams a subset of nodes to increase the level of interference in the network. On the other side, the defender makes judicious adjustments of the transmission footprint of the various nodes, thereby continuously adapting the underlying network topology to reduce the impact of the attack. The defender's strategy is based on playing Nash equilibrium strategies securing a worst-case network utility. Moreover, scalable decomposition-based approaches are developed, yielding a scalable defense strategy whose performance closely approaches that of the non-decomposed game for large-scale and dense networks. We study a class of games considering discrete as well as continuous power levels. In the second problem, we consider multi-tenant clouds, where a number of VMs are typically collocated on the same physical machine to optimize performance and power consumption and maximize profit. This increases the risk of a malicious virtual machine performing side-channel attacks and leaking sensitive information from neighboring VMs. The attack surface, in this case, is the static residency of VMs on a set of physical nodes, hence we develop a timed-migration defense approach. Specifically, we analyze a timing game in which the cloud provider decides when to migrate a VM to a different physical machine to mitigate the risk of being compromised by a collocated malicious VM. The adversary decides the rate at which she launches new VMs to collocate with the victim VMs. Our formulation captures a data leakage model in which the cost incurred by the cloud provider depends on the duration of collocation with malicious VMs. It also captures costs incurred by the adversary in launching new VMs and by the defender in migrating VMs. We establish sufficient conditions for the existence of Nash equilibria for general cost functions, as well as for specific instantiations, and characterize the best response for both players. Furthermore, we extend our model to characterize its impact on the attacker's payoff when the cloud utilizes intrusion detection systems that detect side-channel attacks. Our theoretical findings are corroborated with extensive numerical results in various settings as well as a proof-of-concept implementation in a realistic cloud setting.
- Date Issued
- 2019
- Identifier
- CFE0007468, ucf:52677
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007468
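The worst-case Nash strategies mentioned above can be computed, for a finite zero-sum abstraction, with the standard linear-programming reduction. The payoff matrix below is an invented stand-in for defender utilities; the dissertation's games (continuous power levels, timing games) are richer than this sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Maximin mixed strategy for the defender in a zero-sum matrix game.
# Rows = defender topology adjustments, cols = jammer choices (toy values).
U = np.array([[3.0, 1.0, 2.0],
              [0.0, 4.0, 1.0],
              [2.0, 2.0, 3.0]])

shift = 1.0 - U.min()                  # make the game value positive
A = (U + shift).T                      # per-column constraints A @ q >= 1
m = U.shape[0]

# Standard LP: minimize sum(q) s.t. A q >= 1, q >= 0; then p = q / sum(q)
# and the game value is 1 / sum(q) (minus the shift).
res = linprog(c=np.ones(m), A_ub=-A, b_ub=-np.ones(U.shape[1]),
              bounds=[(0, None)] * m, method="highs")
p = res.x / res.x.sum()
value = 1.0 / res.x.sum() - shift
print("defender mixed strategy:", np.round(p, 3))
print("worst-case game value:", round(value, 3))
```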
- Title
- Sparse signal recovery under sensing and physical hardware constraints.
- Creator
-
Mardaninajafabadi, Davood, Atia, George, Mikhael, Wasfy, Vosoughi, Azadeh, Rahnavard, Nazanin, Abouraddy, Ayman, University of Central Florida
- Abstract / Description
-
This dissertation focuses on information recovery under two general types of sensing constraints and hardware limitations that arise in practical data acquisition systems. We study the effects of these practical limitations in the context of signal recovery problems from interferometric measurements, such as for optical mode analysis. The first constraint stems from the limited number of degrees of freedom of an information-gathering system, which gives rise to highly constrained sensing structures. In contrast to prior work on compressive signal recovery, which relies for the most part on introducing additional hardware components to emulate randomization, we establish performance guarantees for successful signal recovery from a reduced number of measurements even with the constrained interferometer structure, obviating the need for non-native components. Also, we propose control policies to guide the collection of informative measurements given prior knowledge about the constrained sensing structure. In addition, we devise a sequential implementation with a stopping rule, shown to reduce the sample complexity for a target performance in reconstruction. The second limitation considered is due to physical hardware constraints, such as the finite spatial resolution of the used components and their finite aperture size. Such limitations introduce non-linearities in the underlying measurement model. We first develop a more accurate measurement model with structured noise representing a known non-linear function of the input signal, obtained by leveraging side information about the sampling structure. Then, we devise iterative denoising algorithms shown to enhance the quality of sparse recovery in the presence of physical constraints by iteratively estimating and eliminating the non-linear term from the measurements. We also develop a class of clipping-cognizant reconstruction algorithms for modal reconstruction from interferometric measurements that compensate for clipping effects due to the finite aperture size of the used components, and show they yield significant gains over schemes oblivious to such effects.
- Date Issued
- 2019
- Identifier
- CFE0007675, ucf:52467
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007675
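A sketch of the iterative-denoising idea described above: treat the known nonlinear term as structured noise, estimate the signal, subtract the predicted nonlinearity from the measurements, and re-solve. The soft-saturation nonlinearity and the plain OMP solver are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, k = 100, 60, 4

A = rng.normal(size=(m, n)) / np.sqrt(m)
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3

def g(z):
    # Known measurement nonlinearity: a soft-saturation residual term.
    return 0.2 * np.clip(A @ z - 1.0, 0, None)

y = A @ x + g(x)                        # measurements with structured term

def omp(y, A, k):
    """Plain orthogonal matching pursuit for k-sparse recovery."""
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        xs, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ xs
    out = np.zeros(A.shape[1]); out[S] = xs
    return out

# Iterative denoising: estimate x, strip the predicted nonlinear term from
# the measurements, and re-solve; a few passes suffice in this toy setup.
x_hat = omp(y, A, k)
for _ in range(3):
    x_hat = omp(y - g(x_hat), A, k)

print("relative error:", round(np.linalg.norm(x_hat - x) / np.linalg.norm(x), 3))
```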
- Title
- Different Facial Recognition Techniques in Transform Domains.
- Creator
-
Al Obaidi, Taif, Mikhael, Wasfy, Atia, George, Jones, W Linwood, Myers, Brent, Moslehy, Faissal, University of Central Florida
- Abstract / Description
-
The human face is frequently used as the biometric signal presented to a machine for identification purposes. Several challenges are encountered while designing face identification systems. The challenges are either caused by the process of capturing the face image itself, or occur while processing the face poses. Since the face image contains more than the face alone, this adds to the data dimensionality and thus degrades the performance of the recognition system. Face Recognition (FR) has been a major signal processing topic of interest in the last few decades. The most common applications of FR include forensics, access authorization to facilities, or simply the unlocking of a smart phone. The three factors governing the performance of a FR system are the storage requirements, the computational complexity, and the recognition accuracy. The typical FR system consists of the following main modules in each of the Training and Testing phases: Preprocessing, Feature Extraction, and Classification. The ORL, YALE, FERET, FEI, Cropped AR, and Georgia Tech datasets are used to evaluate the performance of the proposed systems. The proposed systems are categorized into Single-Transform and Two-Transform systems. In the first category, the features are extracted from a single domain, that of the Two-Dimensional Discrete Cosine Transform (2D DCT). In the latter category, the Two-Dimensional Discrete Wavelet Transform (2D DWT) coefficients are combined with those of the 2D DCT to form one feature vector. The feature vectors are either used directly or further processed to obtain the persons' final models. Principal Component Analysis (PCA), Sparse Representation, and Vector Quantization (VQ) are employed as a second step in the Feature Extraction module. Additionally, a technique is proposed in which the feature vector is composed of appropriately selected 2D DCT and 2D DWT coefficients based on a residual minimization algorithm.
- Date Issued
- 2018
- Identifier
- CFE0007146, ucf:52295
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007146
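The Single-Transform pipeline above begins by extracting 2D DCT features; a minimal sketch of that step (low-frequency block selection is assumed, since the abstract does not fix the selection rule):

```python
import numpy as np
from scipy.fftpack import dct

rng = np.random.default_rng(10)
face = rng.random((64, 64))            # stand-in for a grayscale face image

def dct2(img):
    """Separable 2D DCT: transform rows, then columns."""
    return dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")

# Keep a small top-left (low-frequency) block as the feature vector;
# zig-zag selection or DWT+DCT fusion would slot in the same way.
coeffs = dct2(face)
features = coeffs[:8, :8].ravel()      # 64 low-frequency coefficients
print("feature vector length:", features.size)
```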
- Title
- Predictive modeling for assessing the reliability of bypass diodes in Photovoltaic modules.
- Creator
-
Shiradkar, Narendra, Sundaram, Kalpathy, Schoenfeld, Winston, Atia, George, Abdolvand, Reza, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
Solar Photovoltaics (PV) is one of the most promising renewable energy technologies for mitigating the effects of climate change. The reliability of PV modules directly impacts the Levelized Cost of Energy (LCOE), which is a metric for the cost competitiveness of any energy technology. Further reduction in the LCOE of PV through assured long-term reliability is necessary in order to facilitate widespread use of solar energy without the need for subsidies. This dissertation is focused on frameworks for assessing the reliability of bypass diodes in PV modules. Bypass diodes are critical components in PV modules that provide protection against shading. Failure of a bypass diode in short circuit reduces the PV module power by one third, while diode failure in open circuit leaves the module susceptible to extreme hotspot heating and potentially a fire hazard. PV modules, along with their bypass diodes, are expected to last at least 25 years in the field. The various failure mechanisms in bypass diodes, such as thermal runaway, high-temperature forward-bias operation, and thermal cycling, are discussed. Operation of the bypass diode under shading is modeled, and a method for calculating the module I-V curve under any shading scenario is presented. Frameworks for estimating the diode temperature in field-deployed modules based on Typical Meteorological Year (TMY) data are developed. A model for predicting the susceptibility of bypass diodes to thermal runaway is presented. Diode wear-out due to High Temperature Forward Bias (HTFB) operation and Thermal Cycling (TC) is studied under custom-designed accelerated tests. Overall, this dissertation is an effort toward estimating the lifetime of bypass diodes in field-deployed modules and, therefore, reducing the uncertainty in the long-term reliability of PV modules.
- Date Issued
- 2015
- Identifier
- CFE0006001, ucf:51023
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006001
- Title
- Stability and Control in Complex Networks of Dynamical Systems.
- Creator
-
Manaffam, Saeed, Vosoughi, Azadeh, Behal, Aman, Atia, George, Rahnavard, Nazanin, Javidi, Tara, Das, Tuhin, University of Central Florida
- Abstract / Description
-
Stability analysis of networked dynamical systems has been of interest in many disciplines such as biology, physics, and chemistry, with applications such as laser cooling and plasma stability. These large networks are often modeled to have a completely random (Erdős-Rényi) or semi-random (Small-World) topology. The former model is often used due to mathematical tractability, while the latter has been shown to be a better model for most real-life networks. The recent emergence of cyber-physical systems, and in particular the smart grid, has given rise to a number of engineering questions regarding the control and optimization of such networks. Some of these questions are: How can the stability of a random network be characterized in probabilistic terms? Can the effects of network topology and system dynamics be separated? What does it take to control a large random network? Can decentralized (pinning) control be effective? If not, how large does the control network need to be? How can decentralized or distributed controllers be designed? How would the size of the control network scale with the size of the networked system? Motivated by these questions, we began by studying the probability of stability of synchronization in random networks of oscillators. We developed a stability condition separating the effects of topology and node dynamics and evaluated bounds on the probability of stability for both Erdős-Rényi (ER) and Small-World (SW) network topology models. We then turned our attention to the more realistic scenario where the dynamics of the nodes and couplings are mismatched. Utilizing the concept of ε-synchronization, we have studied the probability of synchronization and showed that the synchronization error, ε, can be arbitrarily reduced using linear controllers. We have also considered the decentralized approach of pinning control to ensure stability in such complex networks. In the pinning method, decentralized controllers are used to control a fraction of the nodes in the network. This is different from traditional decentralized approaches where all the nodes have their own controllers. While the problem of selecting the minimum number of pinning nodes is known to be NP-hard and grows exponentially with the number of nodes in the network, we have devised a suboptimal algorithm to select the pinning nodes which converges linearly with network size. We have also analyzed the effectiveness of the pinning approach for the synchronization of oscillators in networks with fast switching, where the network links disconnect and reconnect quickly relative to the node dynamics. To address the scaling problem in the design of distributed control networks, we have employed a random control network to stabilize a random plant network. Our results show that for an ER plant network, the control network needs to grow linearly with the size of the plant network.
- Date Issued
- 2015
- Identifier
- CFE0005834, ucf:50902
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005834
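The separation of topology and node dynamics noted above is commonly expressed through the Laplacian eigenratio λ_N/λ_2 in master-stability-function analyses: smaller ratios are easier to synchronize, so the ratio isolates the topology's contribution. A quick comparison of ER and Small-World topologies (assumed criterion, illustrative parameters):

```python
import numpy as np
import networkx as nx

n, k, p = 100, 6, 0.1                  # nodes, mean degree, SW rewiring prob.

def eigenratio(G):
    """lambda_N / lambda_2 of the graph Laplacian (synchronizability proxy)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lams = np.sort(np.linalg.eigvalsh(L))
    return lams[-1] / lams[1]

er = nx.erdos_renyi_graph(n, k / n, seed=0)
sw = nx.watts_strogatz_graph(n, k, p, seed=0)
for name, G in [("Erdos-Renyi", er), ("Small-World", sw)]:
    if nx.is_connected(G):
        print(name, "eigenratio:", round(eigenratio(G), 2))
    else:
        print(name, "(disconnected sample)")
```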
- Title
- Distributed Extremum Seeking and Cooperative Control for Mobile Cooperative Communication Systems.
- Creator
-
Alabri, Said, Qu, Zhihua, Wei, Lei, Vosoughi, Azadeh, Atia, George, University of Central Florida
- Abstract / Description
-
In this thesis, a distributed extremum seeking and cooperative control algorithm is designed for mobile agents to disperse themselves optimally, maintaining communication quality while maximizing their coverage. The networked mobile agents locally form a virtual multiple-input multiple-output (MIMO) communication system, and they communicate cooperatively among themselves using the decode-and-forward cooperative communication technique. The outage probability is used as the measure of communication quality, and it can be estimated in real time. A general performance index balancing outage probability and spatial dispersion is chosen for the overall system. The extremum seeking control approach is used to estimate and optimize the value of the performance index, and cooperative formation control is applied to move the mobile agents to the optimal solution using only locally available information. Through the integration of cooperative communication and cooperative control, the network connectivity and coverage of the mobile agents are much improved compared to either non-cooperative communication approaches or other existing control results. Analysis is carried out to demonstrate the performance and robustness of the proposed methodology, and simulation illustrates its effectiveness.
- Date Issued
- 2013
- Identifier
- CFE0005082, ucf:50744
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005082
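A textbook perturbation-based extremum seeking loop on a static quadratic map, showing the probe-demodulate-integrate cycle that the distributed design above builds on; the gains, probe frequency, and toy performance index are arbitrary.

```python
import numpy as np

# Extremum seeking on an unknown performance index J (toy quadratic with
# optimum at u = 2): probe with a*sin(wt), demodulate the measured output
# by the same sinusoid, and integrate to climb the gradient.
J = lambda u: -(u - 2.0) ** 2

dt, T = 0.01, 4000
a, w, gain = 0.2, 5.0, 0.8
u_hat = 0.0
for k in range(T):
    t = k * dt
    probe = a * np.sin(w * t)
    y = J(u_hat + probe)                     # only output values are observed
    u_hat += gain * y * np.sin(w * t) * dt   # demodulate + integrate

print("converged estimate:", round(u_hat, 2))  # approaches 2.0
```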
- Title
- Data-driven Predictive Analytics For Distributed Smart Grid Control: Optimization of Energy Storage, Voltage and Demand Response.
- Creator
-
Valizadehhaghi, Hamed, Qu, Zhihua, Behal, Aman, Atia, George, Turgut, Damla, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
The smart grid is expected to support an interconnected network of self-contained microgrids. Nonetheless, the distributed integration of renewable generation and demand response adds complexity to the control and optimization of the smart grid. Forecasts are essential due to the existence of stochastic variations and uncertainty. Forecasting data are spatio-temporal, which means that the data correspond to regular intervals, say every hour, and the analysis has to take account of spatial dependence among the distributed generators or locations. Hence, smart grid operations must take account of, and in fact benefit from, the temporal dependence as well as the spatial dependence. This is particularly important considering the buffering effect of energy storage devices such as batteries, heating/cooling systems, and electric vehicles. The data infrastructure of the smart grid is the key to addressing these challenges; however, how to utilize stochastic modeling and forecasting tools for optimal and reliable planning, operation, and control of the smart grid remains an open issue. Utilities are seeking to become more proactive in decision-making, adjusting their strategies based on realistic predictive views into the future, thus allowing them to side-step problems and capitalize on the smart grid technologies, such as energy storage, that are now being deployed at scale. Predictive analytics, capable of managing intermittent loads, renewables, rapidly changing weather patterns, and other grid conditions, represent the ultimate goal for smart grid capabilities. Within this framework, this dissertation develops high-performance predictive analytics and ways of employing them to improve distributed and cooperative optimization software, which proves to be a significant value-add as new network management technologies become reliable and fundamental. The proposed optimization and control approaches for active and reactive power control are robust to variations and offer a certain level of optimality by combining real-time control with hours-ahead network operation schemes. The main objective is managing the spatial and temporal availability of the energy resources over different look-ahead time horizons. Stochastic distributed optimization is realized by integrating a distributed sub-gradient method with conditional ensemble predictions of the energy storage capacity and distributed generation. Hence, the obtained solutions can reflect the system requirements for the upcoming times along with the instantaneous cooperation between distributed resources. As an important issue for the smart grid, conditional ensembles are studied for capturing wind, photovoltaic, and vehicle-to-grid availability variations. The following objectives are pursued:
- Spatio-temporal adaptive modeling of data including electricity demand, electric vehicles, and renewable energy (wind and solar power)
- Predictive data analytics and forecasting
- Distributed control
- Integration of energy storage systems
Full distributional characterization and spatio-temporal modeling of data ensembles are utilized in order to retain the conditional and temporal interdependence between projection data and available capacity. Then, by imposing measures of the most likely ensembles, the distributed control method is carried out for cooperative optimization of the renewable generation and energy storage within the smart grid.
- Date Issued
- 2016
- Identifier
- CFE0006408, ucf:51481
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006408
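The distributed sub-gradient method named above can be sketched on a toy consensus problem: each agent mixes its iterate with its neighbors' (doubly stochastic weights) and then steps along its private gradient. The costs and ring topology are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
N, iters, alpha = 5, 2000, 0.01

# Each microgrid i holds a private convex cost f_i(x) = (x - c_i)^2 and
# exchanges only its iterate with ring neighbors; all agents converge
# toward the minimizer of sum_i f_i, which is mean(c).
c = rng.uniform(0, 10, N)                 # private targets (hypothetical)
W = np.zeros((N, N))                      # doubly stochastic ring weights
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

x = np.zeros(N)
for _ in range(iters):
    grad = 2 * (x - c)                    # local (sub)gradients
    x = W @ x - alpha * grad              # mix with neighbors, then descend

print("agent estimates:", np.round(x, 2))
print("centralized optimum:", round(c.mean(), 2))
```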
- Title
- Compressive Sensing and Recovery of Structured Sparse Signals.
- Creator
-
Shahrasbi, Behzad, Rahnavard, Nazanin, Vosoughi, Azadeh, Wei, Lei, Atia, George, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
In recent years, numerous disciplines including telecommunications, medical imaging, computational biology, and neuroscience have benefited from increasing applications of high-dimensional datasets. This calls for efficient ways of data capturing and data processing. Compressive sensing (CS), which was introduced as an efficient sampling (data capturing) method, is addressing this need. It is well known that signals which belong to an ambient high-dimensional space often have much smaller dimensionality in an appropriate domain. CS taps into this principle and dramatically reduces the number of samples required to be captured to avoid any distortion in the information content of the data. This reduction in the required number of samples enables many new applications that were previously infeasible using classical sampling techniques. Most CS-based approaches take advantage of the inherent low-dimensionality in many datasets. They try to determine a sparse representation of the data in an appropriately chosen basis, using only a few significant elements. These approaches make no extra assumptions regarding possible relationships among the significant elements of that basis. In this dissertation, different ways of incorporating knowledge about such relationships are integrated into the data sampling and processing schemes. We first consider the recovery of temporally correlated sparse signals and show that, using the time-correlation model, the recovery performance can be significantly improved. Next, we modify the sampling process of sparse signals to incorporate the signal structure in a more efficient way. In the image processing application, we show that exploiting the structure information in both signal sampling and signal recovery improves the efficiency of the algorithm. In addition, we show that region-of-interest information can be included in the CS sampling and recovery steps to provide a much better quality for the region-of-interest area compared to the rest of the image or video. In spectrum sensing applications, CS can dramatically improve the sensing efficiency by facilitating coordination among spectrum sensors. A cluster-based spectrum sensing scheme with coordination among spectrum sensors is proposed for geographically disperse cognitive radio networks. Further, CS is exploited in this problem for simultaneous sensing and localization. Having access to this information dramatically facilitates the implementation of advanced communication technologies as required by 5G communication networks.
- Date Issued
- 2015
- Identifier
- CFE0006392, ucf:51509
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006392
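A sketch of recovering temporally correlated sparse signals, per the first contribution above: two snapshots share a support, and warm-starting the second solve from the first estimate exploits that correlation. Plain ISTA is assumed as the solver; the dissertation's models are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(9)
n, m, k = 200, 80, 6

A = rng.normal(size=(m, n)) / np.sqrt(m)
supp = rng.choice(n, k, replace=False)

# Two temporally correlated snapshots: same support, slowly varying values.
x1 = np.zeros(n); x1[supp] = rng.normal(size=k) * 2
x2 = x1.copy();   x2[supp] += 0.1 * rng.normal(size=k)

def ista(y, A, lam=0.05, iters=500, x0=None):
    """Iterative soft-thresholding for min 0.5||y - Ax||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(iters):
        z = x + A.T @ (y - A @ x) / L             # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

xh1 = ista(A @ x1, A)
# Warm-start the second solve from the first estimate: the shared support
# means far fewer iterations are needed for comparable accuracy.
xh2 = ista(A @ x2, A, iters=50, x0=xh1)
for name, xh, xt in [("t1", xh1, x1), ("t2", xh2, x2)]:
    print(name, "rel. error:", round(np.linalg.norm(xh - xt) / np.linalg.norm(xt), 3))
```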