Current Search: Rahnavard, Nazanin
 Title
 Resource Optimization in Visible Light Communication using Internet of Things.
 Creator

Dey, Akash, Yuksel, Murat, Pourmohammadi Fallah, Yaser, Rahnavard, Nazanin, University of Central Florida
 Abstract / Description

In the modern day, there is a serious spectrum crunch in the legacy radio frequency (RF) band, for which visible light communication (VLC) can be a promising option. VLC is a short-range wireless communication variant which uses the visible light spectrum. In this thesis, we use a VLC-based architecture for providing scalable communications to Internet-of-Things (IoT) devices, where a multi-element hemispherical bulb is used that can transmit data streams from multiple light emitting diode (LED) boards. The essence of this architecture is that it uses a Line-of-Sight (LoS) alignment protocol that handles the handoff issue created by the movement of receivers inside a room. We start by proposing an optimization problem aiming to minimize the total energy consumed by the LEDs, taking into consideration the LEDs' power budget, users' perceived quality-of-service, LED-user associations, and illumination uniformity constraints. Then, because of the non-convexity of the problem, we propose to solve it in two stages: (1) we design an efficient algorithm for LED-user association for fixed LED powers, and (2) using the LED-user association, we find an approximate solution based on the Taylor series to optimize the LEDs' power. We devise two heuristic solutions based on this approach. The first, called the Low Complexity Two Stages Solution (TSS), first optimizes the association between the LEDs and the mobile users and then optimizes the power of each LED. In the second heuristic, named the Maximum Uniformity Approach, we first improve the illumination uniformity and then adjust the power values for each LED so that they do not exceed a certain value. Finally, we illustrate the performance of our method via simulations.
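The second stage above linearizes the non-convex objective: replacing a nonlinear term by its first-order Taylor expansion around the current power values yields a tractable approximation. A toy numerical check of that idea (the function below is illustrative, not the thesis's actual objective):

```python
import numpy as np

# Hypothetical non-convex term of a power objective (illustrative only).
def f(p):
    return np.sum(np.sqrt(p) * np.cos(p))

def f_grad(p):
    return np.cos(p) / (2 * np.sqrt(p)) - np.sqrt(p) * np.sin(p)

p0 = np.array([1.0, 2.0, 3.0])       # current LED power values (assumed units)
dp = np.array([0.01, -0.02, 0.015])  # small power adjustment

# First-order Taylor approximation around p0: f(p0 + dp) ~ f(p0) + grad(p0) . dp
approx = f(p0) + f_grad(p0) @ dp
exact = f(p0 + dp)
print(abs(exact - approx))  # small for small adjustments
```

The linearized surrogate is accurate near the expansion point, which is why such approximations are re-applied iteratively as the power values move.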
 Date Issued
 2019
 Identifier
 CFE0007451, ucf:52693
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007451
 Title
 End to End Brain Fiber Orientation Estimation Using Deep Learning.
 Creator

Puttashamachar, Nandakishore, Bagci, Ulas, Shah, Mubarak, Rahnavard, Nazanin, Sundaram, Kalpathy, University of Central Florida
 Abstract / Description

In this work, we explore various Brain Neuron tracking techniques, one of the most significant applications of Diffusion Tensor Imaging. Tractography is a non-invasive method to analyze the underlying tissue microstructure. Understanding the structure and organization of the tissues facilitates diagnosis by identifying aberrations which can occur within tissues due to loss of cell functionality, and provides acute information on the occurrences of brain ischemia or stroke, the mutation of certain neurological diseases such as Alzheimer's, multiple sclerosis, and so on. Under all these circumstances, accurate localization of the aberrations in an efficient manner can help save a life. Motivated by the limitations of current Tractography techniques, such as computational complexity, reconstruction errors during tensor estimation, and standardization, we aim to address these limitations through our research findings. We introduce an End to End Deep Learning framework which can accurately estimate the most probable orientation at each voxel along a neuronal pathway. We use Probabilistic Tractography as our baseline model to obtain the training data, which also serves as a Tractography Gold Standard for our evaluations. Through experiments we show that our deep network achieves a significant improvement over current Tractography implementations by substantially reducing runtime complexity. Our architecture also allows for variable-sized input DWI signals, eliminating the memory issues seen with traditional techniques. The advantage of this architecture is that it is well suited to processing on a cloud setup, utilizing existing multi-GPU frameworks to perform whole-brain Tractography in minutes rather than hours. The proposed method is a good alternative to the current state-of-the-art orientation estimation technique, as we demonstrate across multiple benchmarks.
 Date Issued
 2017
 Identifier
 CFE0007292, ucf:52156
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007292
 Title
 Real-time SIL Emulation Architecture for Cooperative Automated Vehicles.
 Creator

Gupta, Nitish, Pourmohammadi Fallah, Yaser, Rahnavard, Nazanin, Vosoughi, Azadeh, University of Central Florida
 Abstract / Description

This thesis presents a robust, flexible, and real-time architecture for Software-in-the-Loop (SIL) testing of connected vehicle safety applications. Emerging connected and automated vehicles (CAV) use sensing, communication, and computing technologies in the design of a host of new safety applications. Testing and verification of these applications is a major concern for the automotive industry. CAV safety applications work by sharing their state and movement information over wireless communication links. Vehicular communication has fueled the development of various Cooperative Vehicle Safety (CVS) applications. Development of safety applications for CAV requires testing in many different scenarios. However, the recreation of test scenarios for evaluating safety applications is a very challenging task, mainly due to the randomness in communication, the difficulty of recreating vehicle movements precisely, and safety concerns for certain scenarios. We propose to develop a standalone Remote Vehicle Emulator (RVE) that can reproduce V2V messages of remote vehicles from simulations or from previous tests, while also emulating the over-the-air behavior of multiple communicating nodes. This is expected to significantly accelerate the development cycle. RVE is a unique and easily configurable emulation-cum-simulation setup that allows Software-in-the-Loop (SIL) testing of connected vehicle applications in a realistic and safe manner. It will help in tailoring numerous test scenarios, expediting algorithm development and validation, and increasing the probability of finding failure modes. This, in turn, will help improve the quality of safety applications while saving testing time and reducing cost. The RVE architecture consists of two modules: the Mobility Generator and the Communication Emulator. Both of these modules consist of a sequence of events that are handled based on the type of testing to be carried out.
The communication emulator simulates the behavior of the MAC layer while also considering the channel model to determine the probability of successful transmission. It then produces over-the-air messages that resemble the output of multiple transmitting nodes, including messages corrupted by collisions. The algorithm inside the emulator has been optimized to minimize communication latency, making this a realistic, real-time safety testing tool. Finally, we provide a multi-metric experimental evaluation wherein we verified the simulation results against an identically configured ns-3 simulator. With the aim of improving the quality of testing of CVS applications, this unique architecture can serve as a fundamental design for the future of CVS application testing.
 Date Issued
 2018
 Identifier
 CFE0007185, ucf:52280
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007185
 Title
 Trust-Based Rating Prediction and Malicious Profile Detection in Online Social Recommender Systems.
 Creator

Davoudi, Anahita, Chatterjee, Mainak, Hu, Haiyan, Zou, Changchun, Rahnavard, Nazanin, University of Central Florida
 Abstract / Description

Online social networks and recommender systems have become an effective channel for influencing millions of users by facilitating the exchange and spread of information. This dissertation addresses multiple challenges faced by online social recommender systems, such as: i) finding the extent of information spread; ii) predicting the rating of a product; and iii) detecting malicious profiles. Most of the research in this area does not capture social interactions and relies on empirical or statistical approaches without considering temporal aspects. We capture the temporal spread of information using a probabilistic model and use nonlinear differential equations to model the diffusion process. To predict the rating of a product, we propose a social trust model and use the matrix factorization method to estimate a user's taste by incorporating the user-item rating matrix. The effect of the tastes of a user's friends is captured using a trust model based on similarities between users and their centralities. Similarity is modeled using Vector Space Similarity and Pearson Correlation Coefficient algorithms, whereas degree, eigenvector, Katz, and PageRank are used to model centrality. As the rating of a product has tremendous influence on its saleability, social recommender systems are vulnerable to profile injection attacks that steer users' opinions towards favorable or unfavorable recommendations for a product. We propose a classification approach for detecting attackers based on attributes that indicate the likelihood of a user profile being that of an attacker. To evaluate the performance, we inject push and nuke attacks, and use precision and recall to identify the attackers. All proposed models have been validated using datasets from Facebook, Epinions, and Digg. Results show that the proposed models are able to better predict information spread and product ratings, and identify malicious user profiles with high accuracy and low false positives.
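One of the two similarity measures named in the abstract is the Pearson Correlation Coefficient between users' ratings. A minimal sketch over co-rated items (the zero-means-unrated convention and the toy ratings are assumptions, not from the dissertation's data):

```python
import numpy as np

def pearson_similarity(ru, rv):
    """Pearson correlation between two users' ratings on co-rated items.
    Ratings of 0 are treated as 'not rated' (an assumed convention)."""
    mask = (ru > 0) & (rv > 0)          # items rated by both users
    if mask.sum() < 2:
        return 0.0
    u, v = ru[mask] - ru[mask].mean(), rv[mask] - rv[mask].mean()
    denom = np.sqrt((u ** 2).sum() * (v ** 2).sum())
    return float(u @ v / denom) if denom > 0 else 0.0

alice = np.array([5, 3, 0, 4, 4])  # 0 = unrated
bob   = np.array([4, 2, 3, 3, 3])
print(pearson_similarity(alice, bob))  # -> 1.0: identically shaped preferences
```

Because Pearson correlation centers each user's ratings, it captures that Alice and Bob agree on which items are better even though Alice rates everything one point higher.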
 Date Issued
 2018
 Identifier
 CFE0007168, ucf:52245
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007168
 Title
 Analysis of Driver Behavior Modeling in Connected Vehicle Safety Systems Through High Fidelity Simulation.
 Creator

Jamialahmadi, Ahmad, Pourmohammadi Fallah, Yaser, Rahnavard, Nazanin, Chatterjee, Mainak, University of Central Florida
 Abstract / Description

A critical aspect of connected vehicle safety analysis is understanding the impact of human behavior on the overall performance of the safety system. Given the variation in human driving behavior and the expectation of high levels of performance, it is crucial for these systems to be flexible to various driving characteristics. However, design, testing, and evaluation of these active safety systems remain a challenging task, exacerbated by the lack of behavioral data and practical test platforms. Additionally, the need to operate these systems in critical and dangerous situations makes the burden of their evaluation very costly and time-consuming. As an alternative, researchers use simulation platforms to study and evaluate their algorithms. In this work, we introduce a high-fidelity simulation platform designed for a hybrid transportation system involving both human-driven and automated vehicles. We decompose the human driving task and offer a modular approach to simulating a large-scale traffic scenario, making extensive study of automated and active safety systems feasible. Furthermore, we propose a human-interpretable driver model represented as a closed-loop feedback controller. For this model, we analyze a large driving dataset to extract expressive parameters that best describe different driving characteristics. Finally, we recreate a similarly dense traffic scenario within our simulator and conduct a thorough analysis of different human-specific and system-specific factors, studying their effect on the performance and safety of the traffic network.
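A driver model expressed as a closed-loop feedback controller can be sketched very simply: a car-following law whose gains play the role of driver-characteristic parameters. This toy controller and its gains are assumptions for illustration, not the dissertation's fitted model:

```python
# Minimal car-following model as a closed-loop feedback controller:
# the follower accelerates based on gap error and relative speed.
def follower_accel(gap, rel_speed, desired_gap, k_gap=0.3, k_speed=0.5):
    # k_gap and k_speed stand in for driver-characteristic parameters
    return k_gap * (gap - desired_gap) + k_speed * rel_speed

dt, v_lead = 0.1, 20.0                       # time step (s), lead speed (m/s)
gap, v_f, desired_gap = 50.0, 20.0, 30.0     # initial gap, follower speed
for _ in range(600):                         # simulate 60 seconds
    a = follower_accel(gap, v_lead - v_f, desired_gap)
    v_f += a * dt
    gap += (v_lead - v_f) * dt

print(round(gap, 2), round(v_f, 2))  # settles near the desired gap and lead speed
```

Fitting gains like `k_gap` and `k_speed` to observed trajectories is what makes such a controller-style model human-interpretable: each parameter has a direct behavioral reading (gap-keeping aggressiveness, speed matching).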
 Date Issued
 2018
 Identifier
 CFE0007573, ucf:52578
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007573
 Title
 Describing Images by Semantic Modeling using Attributes and Tags.
 Creator

Mahmoudkalayeh, Mahdi, Shah, Mubarak, Sukthankar, Gita, Rahnavard, Nazanin, Zhang, Teng, University of Central Florida
 Abstract / Description

This dissertation addresses the problem of describing images using visual attributes and textual tags, a fundamental task that narrows the semantic gap between the visual reasoning of humans and machines. Automatic image annotation assigns relevant textual tags to images. In this dissertation, we propose a query-specific formulation based on Weighted Multi-view Non-negative Matrix Factorization to perform automatic image annotation. Our proposed technique seamlessly adapts to changes in training data, naturally solves the problem of feature fusion, and handles the challenge of rare tags. Unlike tags, attributes are category-agnostic; hence their combination models an exponential number of semantic labels. Motivated by the fact that most attributes describe local properties, we propose exploiting localization cues, through semantic parsing of the human face and body, to improve person-related attribute prediction. We also demonstrate that image-level attribute labels can be effectively used as weak supervision for the task of semantic segmentation. Next, we analyze Selfie images by utilizing tags and attributes. We collect the first large-scale Selfie dataset and annotate it with different attributes covering characteristics such as gender, age, race, facial gestures, and hairstyle. We then study the popularity and sentiments of the selfies given an estimated appearance of various semantic concepts. In brief, we automatically infer what makes a good selfie. Despite its extensive usage, the deep learning literature falls short in understanding the characteristics and behavior of Batch Normalization. We conclude this dissertation by providing a fresh view, in light of information geometry and Fisher kernels, of why batch normalization works.
We propose Mixture Normalization, which disentangles modes of variation in the underlying distribution of the layer outputs, and confirm that it effectively accelerates training of different batch-normalized architectures, including Inception-V3, Densely Connected Networks, and Deep Convolutional Generative Adversarial Networks, while achieving better generalization error.
 Date Issued
 2019
 Identifier
 CFE0007493, ucf:52640
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007493
 Title
 COMPRESSIVE AND CODED CHANGE DETECTION: THEORY AND APPLICATION TO STRUCTURAL HEALTH MONITORING.
 Creator

Sarayanibafghi, Omid, Atia, George, Vosoughi, Azadeh, Rahnavard, Nazanin, University of Central Florida
 Abstract / Description

In traditional sparse recovery problems, the goal is to identify the support of compressible signals using a small number of measurements. In contrast, in this thesis the problem of identifying a sparse number of statistical changes in stochastic phenomena is considered when decision makers only have access to compressed measurements, i.e., each measurement is derived from a subset of features. Herein, we propose a new framework termed Compressed Change Detection. The main approach relies on integrating ideas from the theory of identifying codes with change point detection in sequential analysis. If the stochastic properties of certain features change, then the changes can be detected by examining the covering set of an identifying code of measurements. In particular, given a large number N of features, the goal is to detect a small set of features that undergoes a statistical change using a small number of measurements. Sufficient conditions are derived for the probability of false alarm and isolation to approach zero in the asymptotic regime where N is large. As an application of compressed change detection, the problem of detecting a sparse number of damages in a structure for Structural Health Monitoring (SHM) is considered. Since only a small number of damage scenarios can occur simultaneously, change detection is applied to responses of pairs of sensors that form an identifying code over a learned damage-sensing graph. Generalizations of the proposed framework to multiple concurrent changes and to arbitrary graph topologies are presented.
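The change-point ingredient of this framework comes from sequential analysis. As an illustration of that ingredient alone (the identifying-code layer is omitted, and the Gaussian mean-shift setting is an assumption), a standard CUSUM detector on a single measurement stream:

```python
import numpy as np

def cusum(samples, mu0, mu1, sigma, threshold):
    """One-sided CUSUM test for a mean shift mu0 -> mu1 with known variance.
    Returns the first index where the statistic crosses the threshold,
    or None if no change is declared."""
    s = 0.0
    for t, x in enumerate(samples):
        # log-likelihood ratio increment for Gaussian observations
        llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
        s = max(0.0, s + llr)
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(0)
pre = rng.normal(0.0, 1.0, 200)   # in-control segment
post = rng.normal(1.0, 1.0, 200)  # mean shifts at t = 200
alarm = cusum(np.concatenate([pre, post]), mu0=0.0, mu1=1.0, sigma=1.0, threshold=10.0)
print(alarm)  # alarm fires shortly after the true change point
```

In the compressed setting described above, each such stream is a measurement covering a subset of features, and the pattern of which streams raise alarms (the covering set of the identifying code) isolates which features changed.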
 Date Issued
 2016
 Identifier
 CFE0006387, ucf:51507
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006387
 Title
 Reliable Spectrum Hole Detection in Spectrum-Heterogeneous Mobile Cognitive Radio Networks via Sequential Bayesian Nonparametric Clustering.
 Creator

Zaeemzadeh, Alireza, Rahnavard, Nazanin, Vosoughi, Azadeh, Qi, GuoJun, University of Central Florida
 Abstract / Description

In this work, the problem of detecting radio spectrum opportunities in spectrum-heterogeneous cognitive radio networks is addressed. Spectrum opportunities are the frequency channels that are underutilized by the primary licensed users. Thus, by enabling the unlicensed users to detect and utilize them, we can improve the efficiency, reliability, and flexibility of radio spectrum usage. The main objective of this work is to discover the spectrum opportunities in the time, space, and frequency domains by proposing a low-cost and practical framework. Spectrum-heterogeneous networks are networks in which different sensors experience different spectrum opportunities. Thus, the sensing data from different sensors cannot simply be combined to reach consensus and detect the spectrum opportunities. Moreover, unreliable data, caused by noise or malicious attacks, will deteriorate the performance of the decision-making process. The problem becomes even more challenging when the locations of the sensors are unknown. In this work, a probabilistic model is proposed to cluster the sensors based on their readings, without requiring any knowledge of the sensors' locations. The complexity of the model, which is the number of clusters, is automatically inferred from the sensing data. The processing node, also referred to as the base station or the fusion center, infers the probability distributions of cluster memberships, channel availabilities, and devices' reliability in an online manner. After receiving each chunk of sensing data, the probability distributions are updated without repeating the computations on previous sensing data. All the update rules are derived mathematically by employing Bayesian data analysis techniques and variational inference. Furthermore, the inferred probability distributions are employed to assign unique spectrum opportunities to each of the sensors.
To avoid interference among the sensors, physically adjacent devices should not utilize the same channels. However, since the locations of the devices are not known, cluster membership information is used as a measure of adjacency. This is based on the assumption that the measurements of the devices are spatially correlated, so adjacent devices, which experience similar spectrum opportunities, belong to the same cluster. Then, the problem is mapped into an energy minimization problem and solved via graph cuts. The goal of the proposed graph-theory-based method is to assign each device an available channel while avoiding interference among neighboring devices. Numerical simulations illustrate the effectiveness of the proposed methods compared to existing frameworks.
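The constraint being enforced by the graph-cut step can be seen in a much simpler stand-in: a greedy assignment that never gives two adjacent (same-cluster) devices the same channel. The adjacency structure and channel names below are made up for illustration; the actual method minimizes an energy via graph cuts rather than assigning greedily:

```python
def assign_channels(adjacency, available):
    """Greedy channel assignment: give each device a channel that none of
    its already-assigned neighbors uses. Illustrative stand-in for the
    graph-cut energy minimization described in the abstract."""
    assignment = {}
    for node in sorted(adjacency):
        taken = {assignment[n] for n in adjacency[node] if n in assignment}
        for ch in available:
            if ch not in taken:
                assignment[node] = ch
                break
    return assignment

# Devices in the same cluster are treated as adjacent (interfering).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4}, 4: {3}}
channels = assign_channels(adj, available=["ch1", "ch2", "ch3"])
print(channels)  # no two neighbors share a channel
```

A greedy pass can fail or waste channels on harder graphs, which is one motivation for the global energy-minimization formulation solved with graph cuts.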
 Date Issued
 2017
 Identifier
 CFE0006963, ucf:51639
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006963
 Title
 Impact of wireless channel uncertainty upon M-ary distributed detection systems.
 Creator

Hajibabaei Najafabadi, Zahra, Vosoughi, Azadeh, Rahnavard, Nazanin, Atia, George, University of Central Florida
 Abstract / Description

We consider a wireless sensor network (WSN), consisting of several sensors and a fusion center (FC), which is tasked with solving an $M$-ary hypothesis testing problem. Sensors make $M$-ary decisions and transmit their digitally modulated decisions over orthogonal channels, which are subject to Rayleigh fading and noise, to the FC. Adopting the Bayesian optimality criterion, we consider training-based and non-training-based distributed detection systems and investigate the effect of imperfect channel state information (CSI) on the optimal maximum a posteriori probability (MAP) fusion rules and on detection performance, when the sum of the training and data symbol transmit powers is fixed. Our results show that for a Rayleigh fading channel, when sensors employ $M$-FSK or binary FSK (BFSK) modulation, the error probability is minimized when the training symbol transmit power is zero (regardless of the reception mode at the FC). However, for coherent reception with $M$-PSK or binary PSK (BPSK) modulation, the error probability is minimized when half of the transmit power is allocated to the training symbol. If the channel is Rician fading, then regardless of the modulation, the error probability is minimized when the training transmit power is zero.
 Date Issued
 2016
 Identifier
 CFE0006111, ucf:51209
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006111
 Title
 Fast Compressed Automatic Target Recognition for a Compressive Infrared Imager.
 Creator

Millikan, Brian, Foroosh, Hassan, Rahnavard, Nazanin, Muise, Robert, Atia, George, Mahalanobis, Abhijit, Sun, Qiyu, University of Central Florida
 Abstract / Description

Many military systems utilize infrared sensors which allow an operator to see targets at night. Several of these are either mid-wave or long-wave high-resolution infrared sensors, which are expensive to manufacture. But compressive sensing, which has primarily been demonstrated in medical applications, can be used to minimize the number of measurements needed to represent a high-resolution image. Using these techniques, a relatively low-cost mid-wave infrared sensor can be realized which has a high effective resolution. In traditional military infrared sensing applications, like targeting systems, automatic target recognition algorithms are employed to locate and identify targets of interest to reduce the burden on the operator. The resolution of the sensor can increase the accuracy and operational range of a targeting system. When using a compressive sensing infrared sensor, traditional decompression techniques can be applied to form a spatial-domain infrared image, but most are iterative and not ideal for real-time environments. A more efficient method is to adapt the target recognition algorithms to operate directly on the compressed samples. In this work, we present a target recognition algorithm which utilizes a compressed target detection method to identify potential target areas, and then a specialized target recognition technique that operates directly on the same compressed samples. We demonstrate our method on the U.S. Army Night Vision and Electronic Sensors Directorate ATR Algorithm Development Image Database, which has been made available by the Sensing Information Analysis Center.
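The property that lets recognition run directly on compressed samples is that random projections approximately preserve correlations. A toy illustration (not the thesis's actual detector): a matched filter applied in the compressed domain still separates a target signature from clutter, with far fewer samples than the original dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 2000, 400                                  # scene size vs. compressed samples
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))   # random sensing matrix

target = rng.normal(size=N)   # assumed target signature
clutter = rng.normal(size=N)  # unrelated background

y_t = Phi @ target            # compressed measurement of the target
y_c = Phi @ clutter           # compressed measurement of clutter

# Matched filter in the compressed domain: correlate against the
# compressed template instead of reconstructing the image first.
score_target = y_t @ (Phi @ target)
score_clutter = y_c @ (Phi @ target)
print(score_target > score_clutter)  # target scores higher, as in the full domain
```

Because the inner products survive the projection (in expectation), detection scores computed on `M = 400` samples behave like scores on the full `N = 2000`-dimensional scene, skipping the iterative decompression the abstract flags as too slow for real-time use.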
 Date Issued
 2018
 Identifier
 CFE0007408, ucf:52739
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007408
 Title
 Visual-Textual Video Synopsis Generation.
 Creator

Sharghi Karganroodi, Aidean, Shah, Mubarak, Da Vitoria Lobo, Niels, Rahnavard, Nazanin, Atia, George, University of Central Florida
 Abstract / Description

In this dissertation we tackle the problem of automatic video summarization. Automatic summarization techniques enable faster browsing and indexing of large video databases. However, due to the inherent subjectivity of the task, no single video summarizer fits all users unless it adapts to individual users' needs. To address this issue, we introduce a fresh view on the task called "Query-focused" extractive video summarization. We develop a supervised model that takes as input a video and a user's preference in the form of a query, and creates a summary video by selecting key shots from the original video. We model the problem as subset selection via a determinantal point process (DPP), a stochastic point process that assigns a probability value to each subset of any given set. Next, we develop a second model that exploits the capabilities of memory networks in the framework and concomitantly reduces the level of supervision required to train the model. To automatically evaluate system summaries, we contend that a good metric for video summarization should focus on the semantic information that humans can perceive rather than on visual features or temporal overlaps. To this end, we collect dense per-video-shot concept annotations, compile a new dataset, and suggest an efficient evaluation method defined upon the concept annotations. To enable better summarization of videos, we improve the sequential DPP in two ways. In terms of learning, we propose a large-margin algorithm to address the exposure bias that is common in many sequence-to-sequence learning methods. In terms of modeling, we integrate a new probabilistic distribution into SeqDPP; the resulting model accepts user input about the expected length of the summary. We conclude this dissertation by developing a framework to generate a textual synopsis for a video, thus enabling users to quickly browse a large video database without watching the videos.
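The DPP formulation can be made concrete with a toy example: the probability of selecting a subset S of shots is proportional to det(L_S), the determinant of the kernel submatrix indexed by S, so similar shots (large off-diagonal kernel entries) rarely appear together. The kernel values below are made up for illustration:

```python
import numpy as np

# Toy DPP over 4 video shots; P(S) is proportional to det(L_S).
L = np.array([
    [1.0, 0.9, 0.0, 0.0],   # shots 0 and 1 are near-duplicates
    [0.9, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.2],
    [0.0, 0.0, 0.2, 1.0],
])

def dpp_prob(S, L):
    # Normalizer over all subsets equals det(L + I), a standard DPP identity.
    Z = np.linalg.det(L + np.eye(len(L)))
    return float(np.linalg.det(L[np.ix_(S, S)]) / Z)

# The near-duplicate pair {0, 1} is far less likely than the diverse
# pair {0, 2}; this is what pushes DPP summaries toward diversity.
print(dpp_prob([0, 1], L), dpp_prob([0, 2], L))
```

Here det of the {0,1} submatrix is 1 - 0.9^2 = 0.19 versus 1.0 for the uncorrelated pair {0,2}, so the diverse pair is over five times as likely under this kernel.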
 Date Issued
 2019
 Identifier
 CFE0007862, ucf:52756
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007862
 Title
 Sparse signal recovery under sensing and physical hardware constraints.
 Creator

Mardaninajafabadi, Davood, Atia, George, Mikhael, Wasfy, Vosoughi, Azadeh, Rahnavard, Nazanin, Abouraddy, Ayman, University of Central Florida
 Abstract / Description

This dissertation focuses on information recovery under two general types of sensing constraints and hardware limitations that arise in practical data acquisition systems. We study the effects of these practical limitations in the context of signal recovery problems from interferometric measurements such as for optical mode analysis.The first constraint stems from the limited number of degrees of freedom of an information gathering system, which gives rise to highly constrained sensing...
This dissertation focuses on information recovery under two general types of sensing constraints and hardware limitations that arise in practical data acquisition systems. We study the effects of these practical limitations in the context of signal recovery problems from interferometric measurements, such as for optical mode analysis.

The first constraint stems from the limited number of degrees of freedom of an information-gathering system, which gives rise to highly constrained sensing structures. In contrast to prior work on compressive signal recovery, which relies for the most part on introducing additional hardware components to emulate randomization, we establish performance guarantees for successful signal recovery from a reduced number of measurements even with the constrained interferometer structure, obviating the need for non-native components. We also propose control policies to guide the collection of informative measurements given prior knowledge about the constrained sensing structure. In addition, we devise a sequential implementation with a stopping rule, shown to reduce the sample complexity for a target reconstruction performance.

The second limitation considered is due to physical hardware constraints, such as the finite spatial resolution of the components used and their finite aperture size. Such limitations introduce nonlinearities in the underlying measurement model. We first develop a more accurate measurement model with structured noise representing a known nonlinear function of the input signal, obtained by leveraging side information about the sampling structure. Then, we devise iterative denoising algorithms shown to enhance the quality of sparse recovery in the presence of physical constraints by iteratively estimating and eliminating the nonlinear term from the measurements. We also develop a class of clipping-cognizant reconstruction algorithms for modal reconstruction from interferometric measurements that compensate for clipping effects due to the finite aperture size of the components used, and show they yield significant gains over schemes oblivious to such effects.
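The estimate-and-eliminate loop described above can be illustrated with a deliberately tiny numerical sketch. Everything below (the scalar model y_i = a_i*x + c*x^2, the constants, and the simple averaging estimator) is a hypothetical stand-in for the dissertation's interferometric measurement model; it only shows the alternating structure of the iteration.

```python
# Toy sketch of iterative denoising with a known structured nonlinearity.
# Hypothetical model: each measurement is y_i = a_i * x + c * x**2, where the
# quadratic term plays the role of the known nonlinear "structured noise".
# We alternate between estimating x linearly and subtracting the current
# estimate of the nonlinear term from the measurements.

def iterative_denoise(y, a, c, n_iters=50):
    x = sum(yi / ai for yi, ai in zip(y, a)) / len(y)  # ignore nonlinearity at first
    for _ in range(n_iters):
        # Subtract the estimated nonlinear term, then re-estimate x linearly.
        x = sum((yi - c * x * x) / ai for yi, ai in zip(y, a)) / len(y)
    return x

a = [1.0, 0.8, 1.2]
x_true, c = 2.0, 0.05
y = [ai * x_true + c * x_true ** 2 for ai in a]
print(iterative_denoise(y, a, c))  # converges to x_true = 2.0
```

Because the nonlinearity is weak relative to the linear term, the loop is a contraction and the fixed point is the true value; this is the same estimate-and-eliminate structure, in miniature.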
 Date Issued
 2019
 Identifier
 CFE0007675, ucf:52467
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0007675
 Title
 Stability and Control in Complex Networks of Dynamical Systems.
 Creator

Manaffam, Saeed, Vosoughi, Azadeh, Behal, Aman, Atia, George, Rahnavard, Nazanin, Javidi, Tara, Das, Tuhin, University of Central Florida
 Abstract / Description

Stability analysis of networked dynamical systems has been of interest in many disciplines, such as biology, physics, and chemistry, with applications such as laser cooling and plasma stability. These large networks are often modeled as having completely random (Erdős-Rényi) or semi-random (Small-World) topologies. The former model is often used for its mathematical tractability, while the latter has been shown to be a better model for most real-life networks.

The recent emergence of cyber-physical systems, and in particular the smart grid, has given rise to a number of engineering questions regarding the control and optimization of such networks. Some of these questions are: How can the stability of a random network be characterized in probabilistic terms? Can the effects of network topology and system dynamics be separated? What does it take to control a large random network? Can decentralized (pinning) control be effective? If not, how large does the control network need to be? How can decentralized or distributed controllers be designed? How would the size of the control network scale with the size of the networked system?

Motivated by these questions, we began by studying the probability of stability of synchronization in random networks of oscillators. We developed a stability condition separating the effects of topology and node dynamics and evaluated bounds on the probability of stability for both Erdős-Rényi (ER) and Small-World (SW) network topology models. We then turned our attention to the more realistic scenario where the dynamics of the nodes and couplings are mismatched. Utilizing the concept of ε-synchronization, we studied the probability of synchronization and showed that the synchronization error, ε, can be arbitrarily reduced using linear controllers.

We also considered the decentralized approach of pinning control to ensure stability in such complex networks. In the pinning method, decentralized controllers are used to control only a fraction of the nodes in the network, in contrast to traditional decentralized approaches where every node has its own controller. While the problem of selecting the minimum number of pinning nodes is known to be NP-hard and grows exponentially with the number of nodes in the network, we devised a suboptimal algorithm to select the pinning nodes that scales linearly with the network size. We also analyzed the effectiveness of the pinning approach for the synchronization of oscillators in networks with fast switching, where the network links disconnect and reconnect quickly relative to the node dynamics.

To address the scaling problem in the design of distributed control networks, we employed a random control network to stabilize a random plant network. Our results show that for an ER plant network, the control network needs to grow linearly with the size of the plant network.
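As a rough illustration of the pinning idea, here is a minimal simulation with an assumed ring network of linear, diffusively coupled agents and two pinned nodes; the topology, gains, and dynamics are invented for illustration and are not the dissertation's models.

```python
# Illustrative sketch (not the dissertation's algorithm): pinning control of a
# small network of linear agents. Only a fraction of nodes ("pinned" nodes)
# receive a direct controller driving them toward the reference state r;
# the rest are steered indirectly through the network coupling.

def simulate_pinning(n=6, pinned=(0, 3), r=1.0, k=2.0, dt=0.05, steps=2000):
    # Ring topology: each node is coupled to its two neighbors.
    neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]
    x = [float(i) for i in range(n)]  # arbitrary initial states
    for _ in range(steps):
        x_new = []
        for i in range(n):
            # Diffusive coupling with neighbors (graph Laplacian term).
            coupling = sum(x[j] - x[i] for j in neighbors[i])
            # Pinning term: applied only at the controlled nodes.
            pin = -k * (x[i] - r) if i in pinned else 0.0
            x_new.append(x[i] + dt * (coupling + pin))
        x = x_new
    return x

states = simulate_pinning()
print(max(abs(s - r) for s, r in zip(states, [1.0] * len(states))))  # all nodes near r
```

With only two of the six nodes directly controlled, the coupling propagates the reference through the ring and the whole network synchronizes to r, which is the essence of pinning control.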
 Date Issued
 2015
 Identifier
 CFE0005834, ucf:50902
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0005834
 Title
 Compressive Sensing and Recovery of Structured Sparse Signals.
 Creator

Shahrasbi, Behzad, Rahnavard, Nazanin, Vosoughi, Azadeh, Wei, Lei, Atia, George, Pensky, Marianna, University of Central Florida
 Abstract / Description

In recent years, numerous disciplines, including telecommunications, medical imaging, computational biology, and neuroscience, have benefited from the increasing availability of high-dimensional datasets. This calls for efficient ways of capturing and processing data. Compressive sensing (CS), introduced as an efficient sampling (data capturing) method, addresses this need. It is well known that signals belonging to an ambient high-dimensional space often have much smaller dimensionality in an appropriate domain. CS taps into this principle and dramatically reduces the number of samples that must be captured to avoid any distortion of the information content of the data. This reduction in the required number of samples enables many new applications that were previously infeasible using classical sampling techniques.

Most CS-based approaches take advantage of the inherent low dimensionality of many datasets. They try to determine a sparse representation of the data in an appropriately chosen basis, using only a few significant elements, and make no extra assumptions about possible relationships among the significant elements of that basis. In this dissertation, different ways of incorporating knowledge about such relationships are integrated into the data sampling and processing schemes.

We first consider the recovery of temporally correlated sparse signals and show that, using the time-correlation model, the recovery performance can be significantly improved. Next, we modify the sampling process of sparse signals to incorporate the signal structure more efficiently. In an image processing application, we show that exploiting the structure information in both signal sampling and signal recovery improves the efficiency of the algorithm. In addition, we show that region-of-interest information can be included in the CS sampling and recovery steps to provide much better quality for the region-of-interest area compared to the rest of the image or video. In spectrum sensing applications, CS can dramatically improve the sensing efficiency by facilitating coordination among spectrum sensors. A cluster-based spectrum sensing scheme with coordination among spectrum sensors is proposed for geographically dispersed cognitive radio networks. Further, CS is exploited in this problem for simultaneous sensing and localization. Having access to this information dramatically facilitates the implementation of advanced communication technologies, as required by 5G communication networks.
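The greedy recovery principle underlying many CS algorithms can be sketched on a toy example. The two-dimensional dictionary and 1-sparse signal below are made up for illustration, and plain matching pursuit is used as a stand-in for the structured recovery algorithms developed in the dissertation.

```python
import math

# Toy sketch of greedy sparse recovery (matching pursuit): iteratively pick
# the dictionary atom most correlated with the residual and peel off its
# contribution. The dictionary and signal below are hypothetical.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(atoms, y, n_iters=3, tol=1e-9):
    coeffs = [0.0] * len(atoms)
    residual = list(y)
    for _ in range(n_iters):
        # Pick the unit-norm atom most correlated with the residual.
        j = max(range(len(atoms)), key=lambda k: abs(dot(atoms[k], residual)))
        c = dot(atoms[j], residual)
        coeffs[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
        if math.sqrt(dot(residual, residual)) < tol:
            break
    return coeffs, residual

s = 1 / math.sqrt(2)
atoms = [(1.0, 0.0), (0.0, 1.0), (s, s)]  # overcomplete dictionary in R^2
y = (2 * s, 2 * s)                        # observation of the 1-sparse signal 2 * atom_2
coeffs, residual = matching_pursuit(atoms, y)
print(coeffs)  # only the active atom (index 2) gets a weight, close to 2
```

Structured variants (as studied in the dissertation) go further by also exploiting known relationships among the active atoms, e.g., temporal correlation or region-of-interest priors.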
 Date Issued
 2015
 Identifier
 CFE0006392, ucf:51509
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006392
 Title
 On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
 Creator

Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
 Abstract / Description

We study the Distributed Estimation (DES) problem, where several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of the observation noise at the sensors, quantization errors at the sensors, the available power and bandwidth of the network, the quality of the communication channels between the sensors and the FC, and the choice of fusion rule at the FC. Taking into account all of these contributing factors and implementing a DES system that minimizes the MSE and satisfies all constraints is a challenging task. To probe into different aspects of this challenging task, we identify, formulate, and address the following three problems:

1. Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise. The communication channels between the sensors and the FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels. The unknown to be estimated is a Gaussian vector. Sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: What is the best fusion rule at the FC? What are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? To answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE.

2. Consider an inhomogeneous WSN with an FC tasked with estimating a scalar Gaussian unknown. The sensors are equipped with uniform multi-bit quantizers, and the communication channels are modeled as Binary Symmetric Channels (BSC). In contrast to the former problem, the sensors experience independent multiplicative noise (in addition to additive noise). The natural questions in this scenario are: How does multiplicative noise affect the DES system performance? How does it affect the resource allocation for the sensors, with respect to the case where there is no multiplicative noise? We propose a linear fusion rule for the FC and derive the associated MSE in closed form. We propose several rate allocation schemes, with different levels of complexity, that minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on the DES system performance and its dynamics. We also derive the Bayesian Cramér-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: What is the minimum required bandwidth of the network to satisfy a predetermined target MSE?

3. Assuming the framework of Bayesian DES of a Gaussian unknown with additive and multiplicative Gaussian noises, we answer the following question: Can multiplicative noise improve the DES performance in any scenario? The answer is yes, and we call this phenomenon the 'enhancement mode' of multiplicative noise. By deriving different lower bounds on the MSE, such as the BCRLB, Weiss-Weinstein Bound (WWB), Hybrid CRLB (HCRLB), Nayak Bound (NB), and Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and unknown. We also compare the performance of well-known estimators with the derived bounds to ensure the practicability of the mentioned enhancement modes.
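A minimal numerical sketch of the basic observe-quantize-fuse pipeline might look as follows. The sample-mean fusion rule, quantizer range, and all constants are illustrative assumptions, not the optimized fusion rules or allocation schemes derived in the dissertation.

```python
import random

# Hypothetical sketch of the distributed-estimation setup: sensors observe a
# scalar unknown in additive Gaussian noise, quantize their observations with
# a uniform multi-bit quantizer, and a fusion center (FC) averages the
# quantized values. The simple average is only a stand-in fusion rule.

def uniform_quantize(x, lo=-4.0, hi=4.0, bits=3):
    levels = 2 ** bits
    step = (hi - lo) / levels
    x = min(max(x, lo), hi - 1e-12)   # clip to the quantizer range
    idx = int((x - lo) / step)
    return lo + (idx + 0.5) * step    # mid-point reconstruction

def fused_estimate(theta, n_sensors=500, noise_std=1.0, bits=3, seed=1):
    rng = random.Random(seed)
    q = [uniform_quantize(theta + rng.gauss(0, noise_std), bits=bits)
         for _ in range(n_sensors)]
    return sum(q) / len(q)            # FC: sample mean of quantized data

est = fused_estimate(theta=0.7)
print(abs(est - 0.7))                 # small estimation error
```

Even with coarse 3-bit quantization, averaging across many sensors drives the error down; the dissertation's resource allocation problems ask how to distribute bits and power across heterogeneous sensors to minimize exactly this kind of MSE.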
 Date Issued
 2017
 Identifier
 CFE0006913, ucf:51698
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006913
 Title
 Applied Advanced Error Control Coding for General Purpose Representation and Association Machine Systems.
 Creator

Dai, Bowen, Wei, Lei, Lin, Mingjie, Rahnavard, Nazanin, Turgut, Damla, Sun, Qiyu, University of Central Florida
 Abstract / Description

The General-Purpose Representation and Association Machine (GPRAM) is proposed to focus on computations in terms of variation and flexibility, rather than precision and speed. The GPRAM system has a vague representation and no predefined tasks. With several important lessons learned from error control coding, neuroscience, and the human visual system, we investigate several types of error control codes, including Hamming codes and Low-Density Parity-Check (LDPC) codes, and extend them in different directions.

In error control codes, solely the XOR logic gate is used to connect different nodes. Inspired by bio-systems and Turbo codes, we suggest and study nonlinear codes with expanded operations, such as codes including AND and OR gates, which raises the problem of prior-probability mismatching. Prior discussions of the critical challenges in designing codes and iterative decoding for non-equiprobable symbols may pave the way for a more comprehensive understanding of bio-signal processing. The limitation of the XOR operation in iterative decoding with non-equiprobable symbols is described and can potentially be resolved by applying a quasi-XOR operation and an intermediate transformation layer. Codes constructed for non-equiprobable symbols with the former approach cannot perform satisfyingly with regard to error-correction capability. Probabilistic messages for the sum-product algorithm using XOR, AND, and OR operations with non-equiprobable symbols are further computed. The primary motivation for constructing these codes is to establish the GPRAM system rather than to conduct error control coding per se. The GPRAM system is fundamentally developed by applying various operations with a substantially overcomplete basis. This system is capable of continuously achieving better and simpler approximations for complex tasks.

Approaches for decoding LDPC codes with non-equiprobable binary symbols are discussed due to the aforementioned prior-probability mismatching problem. The traditional Tanner graph should be modified because the messages passed from check nodes to information bits and to parity-check bits are distinct: the messages along the two directions are identical in the conventional Tanner graph, while the forward and backward messages differ in our case. A method of optimizing the signal constellation is described, which is able to maximize the channel mutual information.

A simple Image Processing Unit (IPU) structure is proposed for the GPRAM system, to which images are input. The IPU consists of a randomly constructed LDPC code, an iterative decoder, a switch, and scaling and decision devices. The quality of the input images has been severely deteriorated for the purpose of mimicking the visual information variability (VIV) experienced in human visual systems. The IPU is capable of (a) reliably recognizing digits from images whose quality is extremely inadequate; (b) achieving hyperacuity performance similar to that of the human visual system; and (c) significantly improving the recognition rate by applying a randomly constructed LDPC code, which is not specifically optimized for the tasks.
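The elementary identities behind the probabilistic messages for XOR, AND, and OR gates with non-equiprobable inputs can be written down directly; this is only the textbook total-probability computation, not the dissertation's full decoding scheme.

```python
# For two independent bits with P(bit = 1) = p and q, the output
# distributions of XOR, AND, and OR gates follow from total probability.
# These are the elementary identities behind message passing with
# non-equiprobable symbols (the gate set here is illustrative).

def xor_prob(p, q):
    # P(a XOR b = 1): exactly one input is 1.
    return p * (1 - q) + (1 - p) * q

def and_prob(p, q):
    # P(a AND b = 1): both inputs are 1.
    return p * q

def or_prob(p, q):
    # P(a OR b = 1): complement of "both inputs are 0".
    return 1 - (1 - p) * (1 - q)

# With equiprobable inputs, XOR stays equiprobable while AND and OR do not;
# this asymmetry is the prior-probability mismatch referred to above.
print(xor_prob(0.5, 0.5), and_prob(0.5, 0.5), or_prob(0.5, 0.5))  # 0.5 0.25 0.75
```

Note that XOR is the only one of the three that preserves the uniform prior, which is one way to see why conventional parity-check decoding assumes equiprobable symbols and why AND/OR gates break that assumption.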
 Date Issued
 2016
 Identifier
 CFE0006449, ucf:51413
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006449
 Title
 Prototype Development in General Purpose Representation and Association Machine Using Communication Theory.
 Creator

Li, Huihui, Wei, Lei, Rahnavard, Nazanin, Vosoughi, Azadeh, Da Vitoria Lobo, Niels, Wang, Wei, University of Central Florida
 Abstract / Description

The study of biological systems has been an intense research area in neuroscience and cognitive science for decades. The biological human brain is an intelligent system that integrates various types of sensor information and processes them intelligently. Neurons, as activated brain cells, help the brain make instant, rough decisions. Since the 1950s, researchers have attempted to understand the strategies the biological system employs and to translate them into machine-based algorithms. Modern computers have been developed to meet our need to handle computational tasks that our brains are not capable of performing with precision and speed. However, most existing man-made intelligent systems are designed for specific purposes: modern computers solve sophisticated problems based on fixed representation and association formats, instead of employing versatile approaches to explore unsolved problems.

Because of the above limitations of conventional machines, the General Purpose Representation and Association Machine (GPRAM) system is proposed, focusing on a versatile approach with hierarchical representation and association structures to make quick and rough assessments on multiple tasks. Drawing on lessons learned from neuroscience, error control coding, and digital communications, a prototype of the GPRAM system employing (7,4) Hamming codes and short Low-Density Parity-Check (LDPC) codes is implemented. Types of learning processes are presented, which demonstrate the capability of GPRAM for handling multiple tasks.

Furthermore, a study of recognizing low-resolution simple patterns and face images using an Image Processing Unit (IPU) structure for the GPRAM system is presented. The IPU structure consists of a randomly constructed LDPC code, an iterative decoder, a switch, and scaling and decision devices. All input images have been severely degraded to mimic the visual information variability (VIV) experienced in the human visual system. The numerical results show that: 1) the IPU can reliably recognize simple pattern images in different shapes and sizes; 2) the IPU demonstrates excellent multi-class recognition performance on highly degraded face images, with results comparable to popular machine learning recognition methods applied to images without any quality degradation; 3) several methods for improving IPU recognition performance are discussed, e.g., designing various detection and power scaling methods and constructing specific LDPC codes with large minimum girth.

Finally, novel methods to optimize M-ary PSK, M-ary DPSK, and dual-ring QAM signaling with non-equal symbol probabilities over AWGN channels are presented. In digital communication systems, M-PSK, M-DPSK, and dual-ring QAM signaling with equiprobable symbols have been well analyzed and are widely used in practice. Inspired by bio-systems, we suggest investigating signaling with non-equiprobable symbol probabilities, since a bio-system is highly unlikely to follow the ideal setting and uniform construction of a single type of system. The results show that the optimized system has lower error probabilities than conventional systems, and the improvements are dramatic. Even though communication systems are used as the testing environment, our final goal is clearly to extend current communication theory to accommodate, or better understand, bio-neural information processing systems.
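For concreteness, the (7,4) Hamming building block mentioned above can be sketched as follows; the systematic generator/parity-check pair used here is one standard choice and not necessarily the prototype's exact convention.

```python
# Sketch of the (7,4) Hamming building block: encode 4 data bits into 7,
# then correct any single flipped bit via the syndrome. The bit and parity
# ordering below is one common systematic convention.

G_ROWS = [  # generator: codeword = 4 data bits followed by 3 parity bits
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H_ROWS = [  # parity-check matrix: H * c^T = 0 for every valid codeword
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    return [sum(d * g for d, g in zip(data, col)) % 2
            for col in zip(*G_ROWS)]

def correct(word):
    syndrome = [sum(h * w for h, w in zip(row, word)) % 2 for row in H_ROWS]
    if any(syndrome):
        # The syndrome equals the column of H at the error position.
        cols = list(zip(*H_ROWS))
        pos = cols.index(tuple(syndrome))
        word = list(word)
        word[pos] ^= 1
    return word

code = encode([1, 0, 1, 1])
received = list(code)
received[2] ^= 1                  # flip one bit
assert correct(received) == code  # the single error is corrected
```

Since the seven columns of H are exactly the seven nonzero binary triples, every single-bit error produces a distinct nonzero syndrome, which is why the lookup in `correct` always succeeds.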
 Date Issued
 2017
 Identifier
 CFE0006758, ucf:51846
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006758
 Title
 Signal processing with Fourier analysis, novel algorithms and applications.
 Creator

Syed, Alam, Foroosh, Hassan, Sun, Qiyu, Bagci, Ulas, Rahnavard, Nazanin, Atia, George, Katsevich, Alexander, University of Central Florida
 Abstract / Description

Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions, also known as sinusoidal modeling. Fourier's original idea had a profound impact on mathematical analysis, physics, and engineering because it diagonalizes time-invariant convolution operators. In the past, signal processing was a topic that stayed almost exclusively within electrical engineering, where only experts could cancel noise, or compress and reconstruct signals. Nowadays it is almost ubiquitous, as everyone now deals with modern digital signals.

Medical imaging, wireless communications, and the power systems of the future will experience more demanding data processing conditions and a wider range of application requirements than the systems of today. Such systems will require more powerful, efficient, and flexible signal processing algorithms that are well designed to handle such needs. No matter how advanced our hardware technology becomes, we will still need intelligent and efficient algorithms to address the growing demands in signal processing. In this thesis, we investigate novel techniques to solve a suite of four fundamental problems in signal processing that have a wide range of applications. The relevant equations, the literature on signal processing applications, the analysis, and the final numerical algorithms/methods to solve them using Fourier analysis are discussed for different applications in electrical engineering and computer science.

The first four chapters cover the following topics of central importance in the field of signal processing:
- Fast Phasor Estimation using Adaptive Signal Processing (Chapter 2)
- Frequency Estimation from Nonuniform Samples (Chapter 3)
- 2D Polar and 3D Spherical Polar Nonuniform Discrete Fourier Transform (Chapter 4)
- Robust 3D Registration using the Spherical Polar Discrete Fourier Transform and Spherical Harmonics (Chapter 5)

Even though these four methods may seem completely disparate, the underlying motivation, namely more efficient processing by exploiting the Fourier-domain signal structure, remains the same. The main contribution of this thesis is the innovation in the analysis, synthesis, and discretization of certain well-known problems, such as phasor estimation, frequency estimation, computation of a particular nonuniform Fourier transform, and signal registration in the transformed domain. We propose and evaluate application-relevant algorithms, such as a frequency estimation algorithm using nonuniform sampling and the polar and spherical polar Fourier transforms. The techniques proposed are also useful in the fields of computer vision and medical imaging. From a practical perspective, the proposed algorithms are shown to improve the existing solutions in the respective fields where they are applied and evaluated. The formulation and final proposition are shown to have a variety of benefits. Future work with potential in medical imaging, directional wavelets, volume rendering, video/3D object classification, and high-dimensional registration is also discussed in the final chapter. Finally, in the spirit of reproducible research, we release the implementations of these algorithms to the public on GitHub.
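As a hedged sketch of the Chapter 3 topic, frequency estimation from nonuniform samples, the most direct (brute-force) approach evaluates the nonuniform discrete Fourier transform on a frequency grid and picks the peak; the signal, sampling times, and grid here are invented for illustration, and the thesis develops far more efficient methods.

```python
import cmath
import math
import random

# Brute-force frequency estimation from nonuniformly spaced samples:
# evaluate the nonuniform DFT on a frequency grid and return the peak.

def estimate_frequency(times, samples, f_grid):
    def power(f):
        return abs(sum(x * cmath.exp(-2j * math.pi * f * t)
                       for t, x in zip(times, samples)))
    return max(f_grid, key=power)

rng = random.Random(7)
f_true = 3.0
times = sorted(rng.uniform(0.0, 4.0) for _ in range(128))   # nonuniform times
samples = [math.cos(2 * math.pi * f_true * t) for t in times]
f_grid = [0.5 + 0.01 * k for k in range(750)]               # 0.5 .. 7.99 Hz

print(estimate_frequency(times, samples, f_grid))           # peaks near f_true
```

A notable property of nonuniform sampling, visible even in this toy, is that the random time grid has no hard Nyquist aliasing pattern, so a single dominant tone can be located well beyond the average sampling rate constraints of uniform grids.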
 Date Issued
 2017
 Identifier
 CFE0006803, ucf:51775
 Format
 Document (PDF)
 PURL
 http://purl.flvc.org/ucf/fd/CFE0006803