Current Search: Jha, Sumit
- Title
- Automated Synthesis of Memristor Crossbar Networks.
- Creator
-
Chakraborty, Dwaipayan, Jha, Sumit Kumar, Leavens, Gary, Ewetz, Rickard, Valliyil Thankachan, Sharma, Xu, Mengyu, University of Central Florida
- Abstract / Description
-
The advancement of semiconductor device technology over the past decades has enabled the design of increasingly complex electrical and computational machines. Electronic design automation (EDA) has played a significant role in the design and implementation of transistor-based machines. However, as transistors move closer toward their physical limits, the speed-up provided by Moore's law will grind to a halt. Once again, we find ourselves on the verge of a paradigm shift in the computational sciences as newer devices pave the way for novel approaches to computing. One such device is the memristor -- a resistor with non-volatile memory. Memristors can be used as junctional switches in crossbar circuits, which comprise intersecting sets of vertical and horizontal nanowires. The major contribution of this dissertation lies in automating the design of such crossbar circuits -- doing a new kind of EDA for a new kind of computational machinery. In general, this dissertation attempts to answer the following questions: (a) How can we synthesize crossbars for computing large Boolean formulas, up to 128-bit? (b) How can we synthesize more compact crossbars for small Boolean formulas, up to 8-bit? (c) For a given loop-free C program doing integer arithmetic, is it possible to synthesize an equivalent crossbar circuit? We have presented novel solutions to each of the above problems. Our proposed solutions resolve a number of significant bottlenecks in existing research through innovative logic representations and artificial intelligence techniques. For large Boolean formulas (up to 128-bit), we have utilized Reduced Ordered Binary Decision Diagrams (ROBDDs) to automatically synthesize linearly growing crossbar circuits that compute them. This approach to flow-based computing has yielded state-of-the-art results, and it scales to n-bit Boolean formulas. We have made significant original contributions by leveraging artificial intelligence for the automatic synthesis of compact crossbar circuits. This method has been extended to crossbar networks with 1D1M (1-diode-1-memristor) switches as well. The resultant circuits satisfy the tight constraints of the Feynman Grand Prize challenge and are able to perform 8-bit binary addition. Finally, end-to-end computation with flow-based crossbars has been implemented, methodically translating loop-free C programs into crossbar circuits via automated synthesis. The original contributions described in this dissertation reflect substantial progress in electronic design automation for the synthesis of memristor crossbar networks.
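The flow-based computing idea is concrete enough to sketch: a formula's decision diagram is mapped onto a crossbar whose junction switches close when their literals are true, and the formula evaluates to true exactly when current can flow between two designated nanowires. The toy crossbar below is hand-built for f(a, b) = a AND b, not produced by the dissertation's ROBDD synthesis, and only illustrates the evaluation step:

```python
# Minimal sketch of flow-based crossbar evaluation (illustrative, not the
# dissertation's synthesis algorithm): junctions close when their literal
# is satisfied, and f is true iff current can flow from "in" to "out".
from collections import deque

# Hand-built crossbar for f(a, b) = a AND b. Keys are nanowire pairs;
# values are (variable, required value), or (None, None) for always-on.
crossbar = {
    ("in", "w1"): ("a", 1),    # junction closed when a == 1
    ("w1", "w2"): ("b", 1),    # junction closed when b == 1
    ("w1", "out"): ("b", 1),
    ("w2", "out"): (None, None),  # permanently closed junction
}

def evaluates_true(assignment):
    """BFS from 'in' to 'out' over junctions whose literals are satisfied."""
    adj = {}
    for (u, v), (var, want) in crossbar.items():
        if var is None or assignment[var] == want:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)  # nanowires conduct both ways
    seen, frontier = {"in"}, deque(["in"])
    while frontier:
        node = frontier.popleft()
        if node == "out":
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

for a in (0, 1):
    for b in (0, 1):
        print(a, b, evaluates_true({"a": a, "b": b}))  # true only for 1, 1
```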
- Date Issued
- 2019
- Identifier
- CFE0007609, ucf:52528
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007609
- Title
- Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
- Creator
-
Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
- Abstract / Description
-
Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW utilizes genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW has been limited to relatively compact circuits compared with traditional approaches, because the complexity of the genetic algorithm grows with circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of larger benchmark circuits than related works. Additionally, Design Diversity (DD) techniques which aid such evolutionary refurbishment are proposed, and the efficacy of various DD techniques is quantified and evaluated.

Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs to ensure operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations owing to decreased feature sizes for electronic devices. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. At design time, vulnerable portions of the circuit identified using conventional Electronic Design Automation tools are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions, optimizing performance and energy consumption. Primarily, SREL mitigates the timing degradation caused by transistor-aging effects in sub-micron devices by using power-gating to reduce the stress induced on active elements. As a result, fewer guardbands need to be included to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime.

The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, this comes at the cost of increased reliability concerns, such as greater performance variations and soft errors. This has given rise to increased resiliency requirements for HPC applications in terms of ensuring functionality within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems. A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault propagation models are derived for each HPC application that can be used to estimate the number of corrupted memory locations at runtime. Additionally, the tradeoffs between performance and vulnerability, and the causal relations between compiler optimization and application vulnerability, are investigated.
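To make the EHW loop above concrete, here is a minimal genetic-algorithm sketch that evolves a small NAND-gate netlist toward a target truth table. The genome encoding, population sizes, and fitness function are illustrative assumptions, not NDER itself:

```python
# Toy evolvable-hardware loop: a genome is a list of 2-input NAND gates
# wired to earlier signals; a GA searches for a netlist computing XOR.
# All representation choices here are assumptions for illustration.
import random

random.seed(3)
N_GATES = 6                                              # genome length
TARGET = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}  # 1-bit XOR

def evaluate(genome, a, b):
    signals = [a, b]
    for i1, i2 in genome:                   # gate inputs index earlier signals
        signals.append(1 - (signals[i1] & signals[i2]))  # NAND
    return signals[-1]                      # last gate drives the output

def fitness(genome):
    return sum(evaluate(genome, a, b) == out for (a, b), out in TARGET.items())

def random_gate(g):
    return (random.randrange(2 + g), random.randrange(2 + g))

def mutate(genome):
    child = list(genome)
    g = random.randrange(N_GATES)
    child[g] = random_gate(g)
    return child

population = [[random_gate(g) for g in range(N_GATES)] for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):   # perfect netlist found
        break
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]
print("generations:", generation, "best fitness:", fitness(population[0]))
```

The loop simply reports the best genome found; like any GA it may need more generations on an unlucky seed, which is the scalability issue the abstract describes.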
- Date Issued
- 2015
- Identifier
- CFE0006206, ucf:52889
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006206
- Title
- Computational Methods for Analyzing RNA Folding Landscapes and its Applications.
- Creator
-
Li, Yuan, Zhang, Shaojie, Hua, Kien, Jha, Sumit, Hu, Haiyan, Li, Xiaoman, University of Central Florida
- Abstract / Description
-
Non-protein-coding RNAs (ncRNAs) play critical regulatory roles in cellular life. Many ncRNAs fold into specific structures in order to perform their biological functions. Some of these RNAs, such as riboswitches, can even fold into alternative structural conformations in order to participate in different biological processes. In addition, these RNAs can transit dynamically between different functional structures along folding pathways on their energy landscapes. These alternative functional structures are usually energetically favored and are stable in their local energy landscapes. Moreover, conformational transitions between any pair of alternate structures usually involve high energy barriers, such that RNAs can become kinetically trapped by these stable and locally optimal structures.

We have proposed a suite of computational approaches for analyzing and discovering regulatory RNAs through studying the folding pathways, alternative structures, and energy landscapes associated with conformational transitions of regulatory RNAs. First, we developed an approach, RNAEAPath, which can predict low-barrier folding pathways between two conformational structures of a single RNA molecule. Using RNAEAPath, we can analyze folding pathways between two functional RNA structures, and therefore study the mechanism behind RNA functional transitions from a thermodynamic perspective. Second, we introduced an approach, RNASLOpt, for finding all the stable and locally optimal structures on the energy landscape of a single RNA molecule. The generated stable and locally optimal structures represent the RNA energy landscape in a compact manner. In addition, we applied RNASLOpt to several known riboswitches and predicted their alternate functional structures accurately. Third, we integrated a comparative approach with RNASLOpt and developed RNAConSLOpt, which can find all the consensus stable and locally optimal structures that are conserved among a set of homologous regulatory RNAs. RNAConSLOpt can be used to predict alternate functional structures for regulatory RNA families. Finally, we have proposed a pipeline that uses RNAConSLOpt to computationally discover novel riboswitches in bacterial genomes. An application of the proposed pipeline to a set of bacteria in the Bacillus genus results in the re-discovery of many known riboswitches and the detection of several novel putative riboswitch elements.
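The notion of a low-barrier folding pathway can be illustrated on a toy state graph: among all paths between two conformations, pick one minimizing the highest energy encountered along the way. The sketch below uses hypothetical energies and a Dijkstra-style search, a much simpler stand-in for RNAEAPath:

```python
# Illustrative "low-barrier pathway" search on a toy conformation graph
# (not real RNA structures): a path's cost is the maximum free energy
# it visits, i.e. its barrier.
import heapq

# Hypothetical conformations: node -> free energy (kcal/mol); edges
# connect structures that differ by one base pair.
energy = {"A": -8.0, "B": -3.5, "C": -1.0, "D": -4.0, "E": -7.5}
edges = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "E"],
         "D": ["B", "E"], "E": ["C", "D"]}

def min_barrier_path(src, dst):
    """Dijkstra-like search minimizing the path's maximum energy."""
    best = {src: energy[src]}
    heap = [(energy[src], src, [src])]
    while heap:
        barrier, node, path = heapq.heappop(heap)
        if node == dst:
            return barrier, path
        for nxt in edges[node]:
            b = max(barrier, energy[nxt])
            if b < best.get(nxt, float("inf")):
                best[nxt] = b
                heapq.heappush(heap, (b, nxt, path + [nxt]))
    return None

print(min_barrier_path("A", "E"))  # (-3.5, ['A', 'B', 'D', 'E']):
# the detour via B and D avoids the high-energy intermediate C.
```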
- Date Issued
- 2012
- Identifier
- CFE0004400, ucf:49365
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004400
- Title
- Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images.
- Creator
-
Torosdagli, Neslisah, Bagci, Ulas, Hughes, Charles, Jha, Sumit Kumar, Lisle, Curtis, University of Central Florida
- Abstract / Description
-
The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings for a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body, and it is essential not only for accurate diagnosis but also for early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without performing any interpretation that may lead to clinical intervention. In contrast to medical visualization, quantification refers to extracting the information in a medical scan to enable clinicians to make fast and accurate decisions.

Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often performed independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere, and by performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods. Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking for aiding diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data for solving large-scale problems.
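As a taste of the volume-rendering building blocks involved, the sketch below computes a maximum intensity projection of a synthetic volume. It is far simpler than the out-of-core web renderer described above, and the spherical "lesion" is fabricated purely for illustration:

```python
# Maximum intensity projection (MIP): one elementary volume-rendering
# operation, applied to a synthetic 64^3 volume standing in for a scan.
import numpy as np

# Hypothetical volume with a bright spherical "lesion" plus noise.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.exp(-((x - 40) ** 2 + (y - 24) ** 2 + (z - 32) ** 2) / 60.0)
volume += 0.05 * np.random.rand(64, 64, 64)   # acquisition noise

mip = volume.max(axis=0)            # project along the viewing (z) axis
print(mip.shape, float(mip.max()))  # (64, 64) and a peak value near 1.0
```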
- Date Issued
- 2019
- Identifier
- CFE0007541, ucf:52593
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007541
- Title
- Performance Evaluation of Connectivity and Capacity of Dynamic Spectrum Access Networks.
- Creator
-
Al-tameemi, Osama, Chatterjee, Mainak, Bassiouni, Mostafa, Jha, Sumit, Wei, Lei, Choudhury, Sudipto, University of Central Florida
- Abstract / Description
-
Recent measurements of radio spectrum usage have revealed an abundance of under-utilized bands of spectrum that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access (DSA), where secondary networks utilize unused spectrum holes in the licensed bands without causing interference to the licensed user. However, wide-scale deployment of these networks has been hindered by lack of knowledge of the expected performance in realistic environments and lack of cost-effective solutions for implementing spectrum database systems. In this dissertation, we address some of the fundamental challenges of improving the performance of DSA networks in terms of connectivity and capacity. Apart from showing performance gains via simulation experiments, we designed, implemented, and deployed testbeds that achieve economies of scale.

We start by introducing network connectivity models and show that the well-established disk model does not hold true for interference-limited networks. Thus, we characterize connectivity based on the signal to interference and noise ratio (SINR) and show that not all deployed secondary nodes necessarily contribute towards the network's connectivity. We identify such nodes and show that even though a node might be communication-visible, it can still be connectivity-invisible. The invisibility of such nodes is modeled using the concept of Poisson thinning. The connectivity-visible nodes are combined with the coverage shrinkage to develop the concept of effective density, which is used to characterize connectivity. Further, we propose three techniques for connectivity maximization. We also show how traditional flooding techniques are not applicable under the SINR model and analyze the underlying causes. Moreover, we propose a modified version of probabilistic flooding that uses lower message overhead while accounting for node outreach and interference. Next, we analyze the connectivity of multi-channel distributed networks and show how the invisibility that arises among the secondary nodes results in thinning, which we characterize as channel abundance. We also capture the thinning that occurs due to the nodes' interference. We study the effects of interference and channel abundance, using Poisson thinning, on the formation of a communication link between two nodes and on the overall connectivity of the secondary network.

As for capacity, we derive bounds on the maximum achievable capacity of a randomly deployed secondary network with a finite number of nodes in the presence of primary users, since finding the exact capacity involves solving an optimization problem that scales poorly in both time and search-space dimensionality. We speed up the optimization by reducing the optimizer's search space. Next, we characterize the QoS that secondary users can expect. We do so by using vector quantization to partition the QoS space into a finite number of regions, each of which is represented by one QoS index. We argue that any operating condition of the system can be mapped to one of the pre-computed QoS indices using a simple look-up in O(log N) time, thus avoiding any cumbersome computation for QoS evaluation. We implement the QoS space on an 8-bit microcontroller and show how the mathematically intensive operations can be computed in a shorter time.

To demonstrate that there can be low-cost solutions that scale, we present and implement an architecture that enables dynamic spectrum access for any type of network, ranging from IoT to cellular. The three main components of this architecture are the RSSI sensing network, the DSA server, and the service engine. We use modular design in these components, which allows transparency between them, scalability, and ease of maintenance and upgrade in a plug-n-play manner, without requiring any changes to the other components. Moreover, we provide a blueprint for using off-the-shelf, commercially available, software-configurable RF chips to build low-cost spectrum sensors. Using testbed experiments, we demonstrate the efficiency of the proposed architecture by comparing its performance to that of a legacy system. We show the benefits in terms of resilience to jamming, channel relinquishment on primary arrival, and best channel determination and allocation. We also show the performance gains in terms of frame error rate and spectral efficiency.
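The SINR-based connectivity notion is easy to demonstrate numerically: a link exists only when the desired signal clears a threshold over noise plus the aggregate interference of all other transmitters. The sketch below uses illustrative parameters, not the dissertation's model:

```python
# Back-of-the-envelope SINR connectivity check over a random deployment;
# every non-participating node contributes interference, so far fewer
# links survive than the disk model would suggest.
import itertools, math, random

random.seed(1)
N, AREA, ALPHA = 30, 1000.0, 3.5        # nodes, square side (m), path-loss exp
P_TX, NOISE, SINR_TH = 1.0, 1e-9, 2.0   # tx power, noise floor, threshold

nodes = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N)]

def rx_power(tx, rx):
    d = max(math.dist(tx, rx), 1.0)     # clamp to avoid singularity
    return P_TX * d ** -ALPHA

links = 0
for i, j in itertools.combinations(range(N), 2):
    signal = rx_power(nodes[i], nodes[j])
    interference = sum(rx_power(nodes[k], nodes[j])
                       for k in range(N) if k not in (i, j))
    if signal / (NOISE + interference) >= SINR_TH:
        links += 1
print("SINR-feasible links:", links, "of", N * (N - 1) // 2)
```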
- Date Issued
- 2016
- Identifier
- CFE0006063, ucf:50980
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006063
- Title
- Techniques for automated parameter estimation in computational models of probabilistic systems.
- Creator
-
Hussain, Faraz, Jha, Sumit, Leavens, Gary, Turgut, Damla, Uddin, Nizam, University of Central Florida
- Abstract / Description
-
The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems such that the resultant model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems -- systems that, in general, behave differently under identical conditions. The algorithms work using an approach that combines formal verification and mathematical optimization to explore a model's parameter space.

The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second algorithm uses an approach based on Bayesian statistical model checking (BSMC).

The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism has the capability of representing diabetic behavior, by synthesizing values of three parameters that ensure that the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how our algorithms use formal verification, statistical hypothesis testing, and mathematical optimization to automatically synthesize parameters of complex probabilistic models in order to meet user-specified behavioral properties.
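The SPRT at the heart of the first algorithm can be sketched directly: simulate the model repeatedly and stop as soon as the log-likelihood ratio crosses one of Wald's bounds. The thresholds and the Bernoulli stand-in for a simulation run below are illustrative assumptions:

```python
# Wald's sequential probability ratio test, deciding between
# H0: p >= p0 and H1: p <= p1 from Bernoulli-valued simulation runs
# (illustrative indifference region and error bounds, not the tool's).
import math, random

def sprt(sample, p0=0.55, p1=0.45, alpha=0.01, beta=0.01):
    a = math.log(beta / (1 - alpha))    # lower stopping bound -> accept H0
    b = math.log((1 - beta) / alpha)    # upper stopping bound -> accept H1
    llr, n = 0.0, 0
    while a < llr < b:
        x = sample()                    # one simulation run: property holds?
        n += 1
        llr += math.log((p1 ** x * (1 - p1) ** (1 - x)) /
                        (p0 ** x * (1 - p0) ** (1 - x)))
    return ("H1: p <= %.2f" % p1 if llr >= b else "H0: p >= %.2f" % p0), n

random.seed(7)
# A model instance that satisfies the property on ~60% of runs:
verdict, n_runs = sprt(lambda: int(random.random() < 0.60))
print(verdict, "after", n_runs, "simulated runs")
```

During parameter synthesis, a query like this would be issued at each parameter point visited by the simulated-annealing search.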
- Date Issued
- 2016
- Identifier
- CFE0006117, ucf:51200
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006117
- Title
- Computational Methods for Comparative Non-coding RNA Analysis: from Secondary Structures to Tertiary Structures.
- Creator
-
Ge, Ping, Zhang, Shaojie, Guha, Ratan, Stanley, Kenneth, Jha, Sumit, Song, Hojun, University of Central Florida
- Abstract / Description
-
Unlike messenger RNAs (mRNAs), whose information is encoded in their primary sequences, the cellular roles of non-coding RNAs (ncRNAs) originate from their structures. Therefore, studying the structural conservation in ncRNAs is important to yield an in-depth understanding of their functionalities. In the past years, many computational methods have been proposed to analyze the common structural patterns in ncRNAs using comparative methods. However, RNA structural comparison is not a trivial task, and the existing approaches still have numerous issues in efficiency and accuracy. In this dissertation, we introduce a suite of novel computational tools that extend the classic models for ncRNA secondary and tertiary structure comparisons.

For RNA secondary structure analysis, we first developed a computational tool, named PhyloRNAalifold, to integrate phylogenetic information into consensus structural folding. The underlying idea of this algorithm is that the importance of a co-varying mutation should be determined by its position on the phylogenetic tree. By assigning high scores to the critical covariances, the prediction of RNA secondary structure can be more accurate. Besides structure prediction, we also developed a computational tool, named ProbeAlign, to improve the efficiency of genome-wide ncRNA screening by using high-throughput RNA structural probing data. It treats the chemical reactivities embedded in the probing information as pairing attributes of the search targets. This approach can avoid the time-consuming base pair matching in secondary structure alignment. The application of ProbeAlign to the FragSeq datasets shows its capability for genome-wide ncRNA analysis.

For RNA tertiary structure analysis, we first developed a computational tool, named STAR3D, to find the global conservation in RNA 3D structures. STAR3D aims at finding the consensus of stacks by using 2D topology and 3D geometry together. Then, the loop regions can be ordered and aligned according to their relative positions in the consensus. This stack-guided alignment method adopts the divide-and-conquer strategy for RNA 3D structural alignment, which improves its efficiency dramatically. Furthermore, we have also clustered all loop regions in non-redundant RNA 3D structures to de novo detect plausible RNA structural motifs. The computational pipeline, named RNAMSC, was extended to handle large-scale PDB datasets, and solid downstream analysis was performed to ensure that the clustering results are valid and can easily be applied to further research. The final results contain many interesting variations of known motifs, such as the GNAA tetraloop, kink-turn, sarcin-ricin, and t-loops. We also discovered novel functional motifs conserved in a wide range of ncRNAs, including ribosomal RNA, sgRNA, SRP RNA, the GlmS riboswitch, and the twister ribozyme.
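The covariation signal that consensus-folding methods score can be shown in a few lines: two alignment columns that keep complementary bases across homologs while varying in sequence are evidence of a conserved base pair. The toy alignment below uses uniform weights, omitting exactly the phylogenetic weighting that PhyloRNAalifold adds:

```python
# Covariation scoring of two alignment columns (toy homologs, uniform
# weights): compensatory mutations preserving Watson-Crick/wobble
# pairing suggest a conserved base pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

alignment = [      # hypothetical homologous sequences
    "GACUUGUC",
    "GGCUUGCC",
    "GCCUUGGC",
    "GUCUUGAC",
]

def covariation(i, j):
    """Fraction of sequences pairing at (i, j) and distinct pair types."""
    col = [(s[i], s[j]) for s in alignment]
    paired = [p for p in col if p in PAIRS]
    return len(paired) / len(col), len(set(paired))

frac, variants = covariation(1, 6)   # 0-based columns 1 and 6
print(f"paired fraction={frac:.2f}, compensatory variants={variants}")
# -> paired fraction=1.00 with 4 variants: strong covariation evidence.
```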
- Date Issued
- 2016
- Identifier
- CFE0006104, ucf:51212
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006104
- Title
- Measuring the evolving Internet ecosystem with exchange points.
- Creator
-
Ahmad, Mohammad Zubair, Guha, Ratan, Bassiouni, Mostafa, Chatterjee, Mainak, Jha, Sumit, Goldiez, Brian, University of Central Florida
- Abstract / Description
-
The Internet ecosystem, comprising thousands of Autonomous Systems (ASes), now includes Internet eXchange Points (IXPs) as another critical component of the infrastructure. Peering plays a significant part in driving the economic growth of ASes and is contributing to a variety of structural changes in the Internet. IXPs are a primary component of this peering ecosystem and are playing an increasing role not only in the topology evolution of the Internet but also in inter-domain path routing. In this dissertation we study and analyze the overall effects of peering and IXP infrastructure on the Internet. We observe that IXP peering is enabling a quicker flattening of the Internet topology and leading to over-utilization of popular inter-AS links. Indiscriminate peering at these locations is leading to higher end-to-end path latencies for ASes peering at an exchange point, an effect magnified at the most popular worldwide IXPs.

We first study the effects of recently discovered IXP links on inter-AS routes using graph-based approaches and find that they point towards the changing and flattening landscape of the Internet's topology evolution. We then study further IXP effects by using measurements to investigate the network benefits of peering. We propose and implement a measurement framework which identifies default paths through IXPs and compares them with alternate paths isolating the IXP hop. Our system is running, recording default and alternate path latencies, and is publicly available. We model the probability of an alternate path performing better than a default path through an IXP by identifying the underlying factors influencing end-to-end path latency. Our first-of-its-kind modeling study, which uses a combination of statistical and machine learning approaches, shows that path latencies depend on the popularity of the particular IXP, the size of the provider ASes of the networks peering at common locations, and the relative position of the IXP hop along the path. An in-depth comparison of end-to-end path latencies reveals a significant percentage of alternate paths outperforming the default route through an IXP. This characteristic of higher path latencies is magnified at the popular continental exchanges, as measured by us in a case study of the largest regional IXPs.

We continue by studying another effect of peering which has numerous applications in overlay routing: Triangle Inequality Violations (TIVs). These TIVs in the Internet delay space are created by peering, and we compare their essential characteristics with those of overlay paths such as detour routes. They are identified and analyzed from existing measurement datasets, but on a scale not carried out earlier. This implementation exhibits the effectiveness of GPUs in analyzing big data sets, while the TIVs studied show that a set of common inter-AS links create these TIVs. This result provides new insight into the development of TIVs by analyzing a very large data set using GPGPUs.

Overall, our work presents numerous insights into the inner workings of the Internet's peering ecosystem. Our measurements show the effects of exchange points on the evolving Internet and exhibit their importance to Internet routing.
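A triangle inequality violation is simple to detect given a delay matrix: a detour through an intermediate host beats the direct path. The sketch below, with hypothetical round-trip times, shows the per-triangle check at the heart of such an analysis; the dissertation runs this over very large datasets on GPUs:

```python
# Detecting triangle inequality violations (TIVs) in a toy RTT matrix:
# a TIV exists when routing via an intermediate node is faster than
# the direct path (latencies in ms, symmetric, hypothetical).
import itertools

hosts = ["A", "B", "C", "D"]
rtt = {
    ("A", "B"): 80, ("A", "C"): 20, ("B", "C"): 25,
    ("A", "D"): 60, ("B", "D"): 70, ("C", "D"): 50,
}

def d(u, v):
    return rtt[(u, v)] if (u, v) in rtt else rtt[(v, u)]

for a, b, c in itertools.permutations(hosts, 3):
    if a < b and d(a, c) + d(c, b) < d(a, b):   # detour via c beats direct
        print(f"TIV: {a}->{c}->{b} takes {d(a, c) + d(c, b)} ms "
              f"vs {d(a, b)} ms direct")
# -> TIV: A->C->B takes 45 ms vs 80 ms direct
```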
- Date Issued
- 2013
- Identifier
- CFE0004802, ucf:49744
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004802
- Title
- Holistic Representations for Activities and Crowd Behaviors.
- Creator
-
Solmaz, Berkan, Shah, Mubarak, Da Vitoria Lobo, Niels, Jha, Sumit, Ilie, Marcel, Moore, Brian, University of Central Florida
- Abstract / Description
-
In this dissertation, we address the problem of analyzing the activities of people in a variety of scenarios, a problem commonly encountered in vision applications. The overarching goal is to devise new representations for the activities, in settings where individuals or a number of people may take part in specific activities. Different types of activities can be performed either by an individual at the fine level or by several people constituting a crowd at the coarse level. We take into account the domain-specific information for modeling these activities. The proposed solutions are summarized in the following.

The holistic description of videos is appealing for visual detection and classification tasks for several reasons, including capturing the spatial relations between the scene components, simplicity, and performance [1, 2, 3]. First, we present a holistic (global) frequency-spectrum-based descriptor for representing the atomic actions performed by individuals, such as bench pressing, diving, hand waving, boxing, playing guitar, mixing, jumping, horse riding, and hula hooping. We model and learn these individual actions for classifying complex user-uploaded videos. Our method bypasses the detection of interest points, the extraction of local video descriptors, and the quantization of local descriptors into a codebook; it represents each video sequence as a single feature vector. This holistic feature vector is computed by applying a bank of 3-D spatio-temporal filters to the frequency spectrum of a video sequence; hence it integrates information about the motion and scene structure. We tested our approach on two of the most challenging datasets, UCF50 [4] and HMDB51 [5], and obtained promising results, which demonstrates the robustness and discriminative power of our holistic video descriptor for classifying videos of various realistic actions.

In the above approach, a holistic feature vector of a video clip is acquired by dividing the video into spatio-temporal blocks and then concatenating the features of the individual blocks. However, such a holistic representation blindly incorporates all the video regions regardless of their contribution to classification. Next, we present an approach which improves the performance of holistic descriptors for activity recognition. In our novel method, we improve the holistic descriptors by discovering the discriminative video blocks. We measure the discriminativity of a block by examining its response to a pre-learned support vector machine model. In particular, a block is considered discriminative if it responds positively for positive training samples and negatively for negative training samples. We pose the problem of finding the optimal blocks as one of selecting a sparse set of blocks which maximizes the total classifier discriminativity. Through a detailed set of experiments on benchmark datasets [6, 7, 8, 9, 5, 10], we show that our method discovers the useful regions in the videos and eliminates the ones which are confusing for classification, resulting in significant performance improvement over the state-of-the-art.

In contrast to scenes where an individual performs a primitive action, there may be scenes with several people, where crowd behaviors take place. For these types of scenes the traditional approaches for recognition will not work, due to severe occlusion and computational requirements. The number of videos is limited and the scenes are complicated, hence learning these behaviors is not feasible. For this problem, we present a novel approach, based on the optical flow in a video sequence, for identifying five specific and common crowd behaviors in visual scenes. In the algorithm, the scene is overlaid by a grid of particles, initializing a dynamical system which is derived from the optical flow. Numerical integration of the optical flow provides particle trajectories that represent the motion in the scene. Linearization of the dynamical system allows a simple and practical analysis and classification of the behavior through the Jacobian matrix. Essentially, the eigenvalues of this matrix are used to determine the dynamic stability of points in the flow, and each type of stability corresponds to one of the five crowd behaviors. The identified crowd behaviors are (1) bottlenecks, where many pedestrians/vehicles from various points in the scene are entering through one narrow passage; (2) fountainheads, where many pedestrians/vehicles are emerging from a narrow passage only to separate in many directions; (3) lanes, where many pedestrians/vehicles are moving at the same speed in the same direction; (4) arches or rings, where the collective motion is curved or circular; and (5) blocking, where there is an opposing motion and the desired movement of groups of pedestrians is somehow prohibited. The implementation requires identifying a region of interest in the scene and checking the eigenvalues of the Jacobian matrix in that region to determine the type of flow, corresponding to the various well-defined crowd behaviors. The eigenvalues are only considered in these regions of interest, consistent with the linear approximation and the implied behaviors. Since changes in eigenvalues can mean changes in stability, corresponding to changes in behavior, we can repeat the algorithm over clips of long video sequences to locate changes in behavior. This method was tested on real videos representing crowd and traffic scenes.
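The Jacobian-eigenvalue test described above can be sketched compactly: estimate a 2x2 Jacobian of the flow in a region of interest and classify its stability type. The mapping from eigenvalue patterns to behavior labels below is a simplified reading of the five behaviors, and the Jacobians are synthetic:

```python
# Classify flow in a region of interest from the eigenvalues of its
# linearized (Jacobian) dynamics -- a simplified reading of the
# stability-to-behavior mapping, on synthetic Jacobians.
import numpy as np

def classify(J):
    ev = np.linalg.eigvals(J)
    if np.iscomplexobj(ev) and np.any(ev.imag != 0):
        return "arch/ring (rotational flow)"
    if np.all(ev.real < 0):
        return "bottleneck (attracting flow)"
    if np.all(ev.real > 0):
        return "fountainhead (repelling flow)"
    return "lane/blocking (saddle or shear flow)"

# Hypothetical ROI Jacobians estimated from particle trajectories:
print(classify(np.array([[-1.0, 0.0], [0.0, -0.5]])))  # converging motion
print(classify(np.array([[0.0, -1.0], [1.0, 0.0]])))   # circular motion
print(classify(np.array([[1.0, 0.0], [0.0, -1.0]])))   # opposing streams
```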
- Date Issued
- 2013
- Identifier
- CFE0004941, ucf:49638
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004941
- Title
- Spectrum Map and its Application in Cognitive Radio Networks.
- Creator
-
Debroy, Saptarshi, Chatterjee, Mainak, Bassiouni, Mostafa, Zou, Changchun, Jha, Sumit, Catbas, Necati, University of Central Florida
- Abstract / Description
-
Recent measurements of radio spectrum usage have revealed an abundance of underutilized bands of spectrum that belong to licensed users. This necessitated the paradigm shift from static to dynamic spectrum access. Cognitive radio based secondary networks that utilize such unused spectrum holes in the licensed band have been proposed as a possible solution to the spectrum crisis. The idea is to detect times when a particular licensed band is unused and use it for transmission without causing interference to the licensed user. We argue that prior knowledge about the occupancy of such bands and the corresponding achievable performance metrics can potentially help secondary networks to devise effective strategies to improve utilization.

In this work, we use Shepard's method of interpolation to create a spectrum map that provides a spatial distribution of spectrum usage over a region of interest. It is achieved by intelligently fusing the spectrum usage reports shared by the secondary nodes at various locations. The obtained spectrum map is a continuous and differentiable 2-dimensional distribution function in space. With the spectrum usage distribution known, we show how different radio spectrum and network performance metrics, such as channel capacity, secondary network throughput, spectral efficiency, and bit error rate, can be estimated. We show the applicability of the spectrum map in solving the intra-cell channel allocation problem in centralized cognitive radio networks, such as IEEE 802.22. We propose a channel allocation scheme where the base station allocates interference-free channels to the consumer premise equipment (CPE) using the spectrum map that it creates by fusing the spectrum usage information shared by some CPEs. The most suitable CPEs for information sharing are chosen on a dynamic basis using an iterative clustering algorithm. Next, we present a contention-based media access control (MAC) protocol for distributed cognitive radio networks. The unlicensed secondary users contend among themselves over a common control channel. Winners of the contention get to access the available channels, ensuring high utilization and minimum collision with the primary incumbent. Last, we propose a multi-channel, multi-hop routing protocol with secondary transmission power control. The spectrum map, created and maintained by a set of sensors, acts as the basis of finding the best route for every source-destination pair. The proposed routing protocol ensures primary receiver protection and maximizes achievable link capacity.

Through simulation experiments we show the correctness of the prediction model and how it can be used by secondary networks for strategic positioning of secondary transmitter-receiver pairs and for selecting the best candidate channels. The simulation model mimics realistic distributions of TV stations for urban and non-urban areas. Results validate the nature and accuracy of the estimation, the prediction of performance metrics, and the efficiency of the allocation process in an IEEE 802.22 network. Results for the proposed MAC protocol show high channel utilization with primary quality of service degradation within a tolerable limit. Performance evaluation of the proposed routing scheme reveals that it ensures primary receiver protection through secondary power control and maximizes route capacity.
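Shepard's method itself is a short computation: each reported measurement contributes to an interpolated point with weight inversely proportional to a power of its distance. A minimal sketch, with toy occupancy reports and an assumed power parameter p = 2:

```python
# Shepard's inverse-distance-weighted interpolation: the mechanism used
# to fuse scattered occupancy reports into a continuous spectrum map
# (toy readings; the power parameter p = 2 is an assumption).
import math

reports = [  # (x, y, measured occupancy) from secondary nodes
    (0.0, 0.0, 0.9), (10.0, 0.0, 0.2), (0.0, 10.0, 0.4), (10.0, 10.0, 0.1),
]

def shepard(x, y, p=2.0):
    num = den = 0.0
    for xi, yi, vi in reports:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return vi                 # exactly at a report point
        w = d2 ** (-p / 2.0)          # w_i = 1 / d_i^p
        num += w * vi
        den += w
    return num / den

print(round(shepard(2.0, 3.0), 3))    # interpolated occupancy at (2, 3)
```

Because the weights are smooth away from the report points, the resulting map is continuous and differentiable, which is what lets the downstream metrics be evaluated anywhere in the region.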
- Date Issued
- 2014
- Identifier
- CFE0005324, ucf:50515
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005324
- Title
- Automatic Detection of Brain Functional Disorder Using Imaging Data.
- Creator
-
Dey, Soumyabrata, Shah, Mubarak, Jha, Sumit, Hu, Haiyan, Weeks, Arthur, Rao, Ravishankar, University of Central Florida
- Abstract / Description
-
Recently, Attention Deficit Hyperactivity Disorder (ADHD) has been getting a lot of attention, mainly for two reasons. First, it is one of the most commonly found childhood behavioral disorders: around 5-10% of children all over the world are diagnosed with ADHD. Second, the root cause of the problem is still unknown, and therefore no biological measure exists to diagnose ADHD. Instead, doctors need to diagnose it based on clinical symptoms, such as inattention, impulsivity, and hyperactivity, which are all subjective.

Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for understanding the functioning of the brain, such as identifying the brain regions responsible for different cognitive tasks or analyzing the statistical differences in brain function between diseased and control subjects. ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatic diagnosis of ADHD subjects using their resting-state fMRI (rs-fMRI) data. As a core step of our approach, we model the functions of a brain as a connectivity network, which is expected to capture information about how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing different brain regions as nodes, where any two nodes are connected by an edge if the correlation of the activity patterns of the two nodes is higher than some threshold. The brain regions, represented as the nodes of the network, can be selected at different granularities, e.g., single voxels or clusters of functionally homogeneous voxels. The topological differences between the constructed networks of the ADHD and control groups of subjects are then exploited in the classification approach.

We first developed a simple method employing the Bag-of-Words (BoW) framework for the classification of ADHD subjects. We represent each node in the network by a 4-D feature vector: node degree and 3-D location. The 4-D vectors of all the network nodes in the training data are then grouped into a number of clusters using K-means, where each such cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words. A Support Vector Machine (SVM) classifier is used for the detection of ADHD subjects from their histogram representations. The method achieves 64% classification accuracy.

The above simple approach has several shortcomings. First, there is a loss of spatial information while constructing the histogram, because it only counts the occurrences of words while ignoring their spatial positions. Second, features from the whole brain are used for classification, but some brain regions may not contain any useful information and may only increase the feature dimensions and noise of the system. Third, in this study we used only one network feature, the degree of a node, which measures its connectivity, while other complex network features may be useful for solving the proposed problem.

In order to address the above shortcomings, we hypothesize that only a subset of the network nodes possesses important information for the classification of ADHD subjects. To identify the important nodes of the network we developed a novel algorithm. The algorithm generates different random subsets of nodes, each time extracting the features from a subset to compute the feature vector and perform classification. The subsets are then ranked based on classification accuracy, and the occurrences of each node in the top-ranked subsets are counted. Our algorithm selects the highly occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying-distance degree, and the edge weight sum. We concatenate the features of the selected nodes in a fixed order to preserve the relative spatial information. Experimental validation suggests that using the features of the nodes selected by our algorithm indeed helps to improve the classification accuracy. Our finding is also in concordance with the existing literature, as the brain regions identified by our algorithm have been independently found by many other studies on ADHD. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand; as a result, the network construction step becomes computationally very expensive. Another limitation of the approach is that the network features, which are computed for each node, capture only the local structures while ignoring the global structure of the network.

Next, in order to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all the subjects from an unknown network-space to a low-dimensional space based on their inter-network distance measures. For the purpose of computing the distance between two networks, we represent each node by a set of attributes such as the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped in such a way that, over all pairs of nodes, the sum of the attribute distances, which is the inter-network distance, is minimized. To reduce the network computation cost, we enforce that the maximum relevant information is preserved with minimum redundancy. To achieve this, the nodes of the network are constructed from clusters of highly active voxels, where the activity level of a voxel is measured by the average power of its corresponding fMRI time series. Our method shows promise, as we achieve impressive classification accuracies (73.55%) on the ADHD-200 data set. Our results also reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.

So far, we have only used the fMRI data for solving the ADHD diagnosis problem. Finally, we investigated the following questions. Do structural brain images contain useful information related to the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining the information of the structural and functional brain data? To that end, we developed a new method to combine the information of structural and functional brain images in a late-fusion framework. For the structural data, we input the gray matter (GM) brain images to a Convolutional Neural Network (CNN); the output of the CNN is a feature vector per subject which is used to train the SVM classifier. For the functional data, we compute the average power of each voxel based on its fMRI time series, which measures the activity level of the voxel. We found significant differences in the voxel power distribution patterns of the ADHD and control groups of subjects. The Local Binary Pattern (LBP) texture feature is used on the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features, and 79.14% using the combined information.

In summary, this dissertation demonstrates that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps to identify the brain regions which are useful for ADHD subject classification. These findings can help in understanding the pathophysiology of the problem. Finally, we expect that our approaches will contribute towards the development of a biological measure for the diagnosis of ADHD.
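The bag-of-words pipeline from the first method is compact enough to sketch end to end: cluster 4-D node features into words, histogram each subject, and classify with an SVM. Synthetic subjects and scikit-learn stand in for the real connectivity networks and tooling:

```python
# Sketch of the BoW classification pipeline: 4-D node features
# (degree + 3-D location) -> K-means vocabulary -> per-subject word
# histograms -> linear SVM. Data is synthetic; group separation is
# injected through a shift in the degree dimension for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def subject(shift):          # 200 nodes x (degree, x, y, z) per subject
    return rng.normal(loc=[5 + shift, 0, 0, 0], scale=1.0, size=(200, 4))

subjects = [subject(0.0) for _ in range(20)] + [subject(1.5) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)       # control vs. ADHD (toy)

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
kmeans.fit(np.vstack(subjects))              # learn the word vocabulary

def histogram(nodes):
    words = kmeans.predict(nodes)
    return np.bincount(words, minlength=16) / len(words)

X = np.array([histogram(s) for s in subjects])
clf = SVC(kernel="linear").fit(X[::2], labels[::2])   # even indices = train
print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))
```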
- Date Issued
- 2014
- Identifier
- CFE0005786, ucf:50060
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005786
- Title
- Action potentials as indicators of metabolic perturbations for temporal proteomic analysis.
- Creator
-
Kolli, Aditya Reddy, Hickman, James, Clausen, Christian, Ballantyne, John, Gesquiere, Andre, Jha, Sumit, University of Central Florida
- Abstract / Description
-
The single largest cause of compound attrition during drug development is inadequate tools for predicting and identifying protein interactions. Several tools have been developed to explore how a compound interferes with specific pathways. However, these tools lack the ability to chronically monitor the time-dependent temporal changes in complex biochemical networks, thus limiting our ability to identify possible secondary signaling pathways that could lead to potential toxicity. To overcome this, we have developed an in silico neuronal-metabolic model that couples the membrane electrical activity to intracellular biochemical pathways, enabling non-invasive temporal proteomics. This model is capable of predicting and correlating the changes in cellular signaling, metabolic networks, and action potential responses to metabolic perturbation.

The neuronal-metabolic model was experimentally validated by performing biochemical and electrophysiological measurements on NG108-15 cells, followed by testing its prediction capabilities for pathway analysis. The model accurately predicted the changes in neuronal action potentials and in intracellular biochemical pathways when the cells were exposed to metabolic perturbations. NG108-15 cells showed a large effect upon exposure to 2DG compared to cyanide and malonate, as these cells have elevated glycolysis. A combined treatment of 2DG, cyanide, and malonate had a much larger and faster effect on the cells. A time-dependent change in neuronal action potentials occurred based on the inhibited pathway. We conclude that the experimentally validated in silico model accurately predicts the changes in neuronal action potential shapes and protein activities under perturbations, and would be a powerful tool for performing proteomics, facilitating drug discovery by using action potential peak shape analysis to determine pathway perturbation from an administered compound.
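As a cartoon of coupling membrane excitability to metabolic state, the sketch below scales a hyperpolarizing pump term in a leaky integrate-and-fire cell by an "ATP level"; all parameters are invented for illustration and bear no relation to the validated NG108-15 model:

```python
# Toy coupling of firing activity to metabolic state: a leaky
# integrate-and-fire neuron whose sodium-pump current scales with ATP.
# Every constant below is an assumption chosen for illustration only.
import numpy as np

def spike_count(atp_level, t_ms=500, dt=0.1):
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -70.0
    tau = 20.0                      # membrane time constant (ms)
    i_drive = 1.2                   # constant input current (a.u.)
    pump = 0.8 * atp_level          # hyperpolarizing pump term (assumed)
    spikes = 0
    for _ in np.arange(0, t_ms, dt):
        dv = (-(v - v_rest) + (i_drive - pump) * tau) / tau
        v += dv * dt
        if v >= v_thresh:           # threshold crossing -> spike and reset
            v, spikes = v_reset, spikes + 1
    return spikes

for atp in (1.0, 0.5, 0.1):         # normal, partially and heavily inhibited
    print(f"ATP={atp:.1f}: {spike_count(atp)} spikes in 500 ms")
```

In this cartoon, metabolic inhibition weakens the pump, depolarizes the cell, and changes the firing pattern, which is the qualitative link between perturbation and action potential shape that the dissertation exploits quantitatively.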
- Date Issued
- 2014
- Identifier
- CFE0005822, ucf:50037
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005822
- Title
- Visual Geo-Localization and Location-Aware Image Understanding.
- Creator
-
Roshan Zamir, Amir, Shah, Mubarak, Jha, Sumit, Sukthankar, Rahul, Lin, Mingjie, Fathpour, Sasan, University of Central Florida
- Abstract / Description
-
Geo-localization is the problem of discovering the location where an image or video was captured. Recently, large-scale geo-localization methods, which are devised for ground-level imagery and employ techniques similar to image matching, have attracted much interest. In these methods, given a reference dataset composed of geo-tagged images, the problem is to estimate the geo-location of a query by finding its matching reference images. In this dissertation, we address three questions central to geo-spatial analysis of ground-level imagery: 1) How can we geo-localize images and videos captured at unknown locations? 2) How can we refine the geo-location of already geo-tagged data? 3) How can we utilize the extracted geo-tags?

We present a new framework for geo-locating an image utilizing a novel multiple-nearest-neighbor feature matching method based on Generalized Minimum Clique Graphs (GMCP). First, we extract local features (e.g., SIFT) from the query image and retrieve a number of nearest neighbors for each query feature from the reference data set. Next, we apply our GMCP-based feature matching to select a single nearest neighbor for each query feature such that all matches are globally consistent. Our approach to feature matching is based on the proposition that the first nearest neighbors are not necessarily the best choices for finding correspondences in image matching. Therefore, the proposed method considers multiple reference nearest neighbors as potential matches and selects the correct ones by enforcing consistency among their global features (e.g., GIST) using GMCP. Our evaluations using a new data set of 102k Street View images show that the proposed method outperforms the state-of-the-art by 10 percent.

Geo-localization of images can be extended to geo-localization of a video. We have developed a novel method for estimating the geo-spatial trajectory of a moving camera with unknown intrinsic parameters at city scale. The proposed method is based on a three-step process: 1) individual geo-localization of video frames using Street View images to obtain the likelihood of the location (latitude and longitude) given the current observation, 2) Bayesian tracking to estimate the frame location and the video's temporal evolution using previous state probabilities and the current likelihood, and 3) a novel Minimum Spanning Trees based trajectory reconstruction to eliminate trajectory loops or noisy estimations.

Thus far, we have assumed reliable geo-tags for reference imagery are available through crowdsourcing. However, crowdsourced images are well known to suffer from the acute shortcoming of inaccurate geo-tags. We have developed the first method for refinement of GPS tags, which automatically discovers the subset of corrupted geo-tags and refines them. We employ Random Walks to discover the uncontaminated subset of location estimations and robustify Random Walks with a novel adaptive damping factor that conforms to the level of noise in the input.

In location-aware image understanding, we are interested in improving image analysis by putting it in the right geo-spatial context. This approach is of particular importance as the majority of cameras and mobile devices are now being equipped with GPS chips. Therefore, developing techniques which can leverage the geo-tags of images to improve the performance of traditional computer vision tasks is of particular interest. We have developed a location-aware multimodal approach which incorporates business directories, textual information, and web images to identify businesses in a geo-tagged query image.
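The Bayesian tracking step (step 2 above) can be sketched as a discrete Bayes filter over a coarse location grid: a motion model predicts, and the per-frame matching likelihood updates. The grid cells, transition matrix, and likelihood scores below are all hypothetical:

```python
# Discrete Bayes filter for video geo-localization: per-frame matching
# yields a location likelihood, and a motion prior smooths the estimate.
# All cells, transition probabilities, and scores are toy values.
import numpy as np

cells = ["downtown", "midtown", "harbor"]       # coarse location grid
transition = np.array([                         # motion model: mostly stay put
    [0.80, 0.15, 0.05],
    [0.15, 0.80, 0.05],
    [0.05, 0.15, 0.80],
])
likelihoods = np.array([                        # per-frame matching scores
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.7, 0.2],
])

belief = np.full(3, 1 / 3)                      # uniform prior over cells
for frame_lik in likelihoods:
    belief = transition.T @ belief              # predict from the motion model
    belief *= frame_lik                         # update with the observation
    belief /= belief.sum()                      # renormalize
    print(cells[int(np.argmax(belief))], np.round(belief, 3))
```

A trajectory-reconstruction pass (step 3) would then prune estimates that make the recovered path loop or jump implausibly.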
- Date Issued
- 2014
- Identifier
- CFE0005544, ucf:50282
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005544