Current Search: Heinrich, Mark
- Title
- Resource Banking: An Energy-Efficient, Run-Time Adaptive Processor Design Technique.
- Creator
-
Staples, Jacob, Heinrich, Mark, University of Central Florida
- Abstract / Description
-
From the earliest and simplest scalar computation engines to modern superscalar out-of-order processors, the evolution of computational machinery during the past century has largely been driven by a single goal: performance. In today's world of cheap, billion-plus transistor count processors and with an exploding market in mobile computing, a design landscape has emerged where energy efficiency, arguably more than any other single metric, determines the viability of a processor for a given application. The historical emphasis on performance has left modern processors bloated and over-provisioned for everyday tasks in the hope that during computationally intensive periods some performance improvement will be observed. This work explores an energy-efficient processor design technique that ensures even a highly over-provisioned out-of-order processor has only as many of its computational resources active as it requires for efficient computation at any given time. Specifically, this paper examines the feasibility of a dynamically banked register file and reorder buffer with variable banking policies that enable unused rename registers or reorder buffer entries to be voltage gated (turned off) during execution to save power. The impact of bank placement, turn-off and turn-on policies, and rail stabilization latencies is explored for high-performance desktop and server designs as well as low-power mobile processors.
- Date Issued
- 2011
- Identifier
- CFE0003991, ucf:48675
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003991
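A minimal sketch of the run-time banking idea summarized above, assuming a toy occupancy-driven policy: banks of rename registers power up as demand nears capacity and are voltage gated once a bank's worth of entries sits idle, with a modeled rail-stabilization delay before a waking bank becomes usable. The class, thresholds, and cycle model are illustrative assumptions, not the dissertation's actual design.

```python
class BankedRegisterFile:
    def __init__(self, banks=8, regs_per_bank=16, wake_latency=3):
        self.banks = banks                # total physical banks
        self.regs_per_bank = regs_per_bank
        self.wake_latency = wake_latency  # rail-stabilization delay, in cycles
        self.active = 1                   # banks currently powered on
        self.pending = 0                  # cycles until a waking bank is usable

    def tick(self, demand):
        """Advance one cycle with `demand` live rename registers."""
        capacity = self.active * self.regs_per_bank
        # Turn-on policy: start waking a bank when occupancy nears capacity.
        if demand > 0.9 * capacity and self.active < self.banks and self.pending == 0:
            self.pending = self.wake_latency
        # The waking bank only becomes usable after its rail stabilizes.
        if self.pending > 0:
            self.pending -= 1
            if self.pending == 0:
                self.active += 1
        # Turn-off policy: voltage-gate a bank once a full bank sits half idle.
        if self.active > 1 and demand < (self.active - 1) * self.regs_per_bank // 2:
            self.active -= 1

rf = BankedRegisterFile()
for cycle, demand in enumerate([4, 10, 15, 20, 30, 28, 12, 6]):
    rf.tick(demand)
    print(f"cycle {cycle}: demand={demand:2d} active_banks={rf.active}")
```

The gap between the turn-on threshold (90% occupancy) and the turn-off threshold (half a bank idle) acts as hysteresis, keeping a bank from thrashing between powered and gated states.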
- Title
- Learning Algorithms for Fat Quantification and Tumor Characterization.
- Creator
-
Hussein, Sarfaraz, Bagci, Ulas, Shah, Mubarak, Heinrich, Mark, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools in order to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, in the first part of the dissertation, we specifically focus on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body region detection method trained with only a single example. Then a new fat quantification approach is proposed which is based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. In order to address the unavailability of a large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are proposed. We evaluate our proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas with 1018 CT and 171 MRI scans, respectively. The proposed segmentation, quantification and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
- Date Issued
- 2018
- Identifier
- CFE0007196, ucf:52288
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007196
- Title
- Realtime Editing in Virtual Reality for Room Scale Scans.
- Creator
-
Greenwood, Charles, Laviola II, Joseph, Hughes, Charles, Heinrich, Mark, University of Central Florida
- Abstract / Description
-
This work presents a system for the design and implementation of tools that support the editing of room-scale scans within a virtual reality environment, in real time. The moniker REVRRSS ("reverse") thus stands for Real-time Editing (in) Virtual Reality (of) Room Scale Scans. The tools were evaluated for usefulness based upon whether they meet the criterion of real-time usability. Users evaluated the editing experience with a traditional keyboard-video-mouse setup compared to a head-mounted display and hand-held controllers for Virtual Reality. Results show that users prefer the VR approach. The quality of the finished product when using VR is comparable to that of traditional desktop controls. The architecture developed here can be adapted to innumerable future projects and tools.
- Date Issued
- 2019
- Identifier
- CFE0007463, ucf:52678
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007463
- Title
- Ray Collection Bounding Volume Hierarchy.
- Creator
-
Rivera, Kris, Pattanaik, Sumanta, Heinrich, Mark, Hughes, Charles, University of Central Florida
- Abstract / Description
-
This thesis presents Ray Collection BVH, an improvement over a current-day Ray Tracing acceleration structure, to both build and perform the steps necessary to efficiently render dynamic scenes. Bounding Volume Hierarchy (BVH) is a commonly used acceleration structure, which aids in rendering complex scenes in 3D space using Ray Tracing by breaking the scene of triangles into a simple hierarchical structure. The algorithm this thesis explores was developed in an attempt at accelerating the process of both constructing this structure, and also using it to render these complex scenes more efficiently. The idea of using "ray collection" as a data structure was accidentally stumbled upon by the author in testing a theory he had for a class project. The overall scheme of the algorithm essentially collects a set of localized rays together and intersects them with subsequent levels of the BVH at each build step. In addition, only part of the acceleration structure is built on a per-ray need basis. During this partial build, the rays responsible for creating the scene are partially processed, also saving time on the overall procedure. Ray tracing is a widely used technique for simple rendering from realistic images to making movies. Particularly, in the movie industry, the level of realism brought into animated movies through ray tracing is incredible. So any improvement brought to these algorithms to improve the speed of rendering would be considered useful and welcome. This thesis makes contributions towards improving the overall speed of scene rendering, and hence may be considered an important and useful contribution.
- Date Issued
- 2011
- Identifier
- CFE0004160, ucf:49063
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004160
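A hedged sketch of the ray-collection scheme the abstract describes: a whole collection of rays is pushed down the hierarchy together, and a node's children are built only when some ray in the collection actually reaches that node. Point primitives, a median split, and a slab test stand in for the thesis's triangle pipeline; all names here are illustrative.

```python
import math

def hits_box(origin, inv_dir, lo, hi):
    """Ray-AABB slab test; inv_dir holds 1/d for each axis."""
    tmin, tmax = 0.0, math.inf
    for a in range(3):
        t1 = (lo[a] - origin[a]) * inv_dir[a]
        t2 = (hi[a] - origin[a]) * inv_dir[a]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

class Node:
    def __init__(self, pts):
        self.pts = pts
        self.lo = tuple(min(p[a] for p in pts) for a in range(3))
        self.hi = tuple(max(p[a] for p in pts) for a in range(3))
        self.kids = None                      # children built lazily

    def split(self):
        axis = max(range(3), key=lambda a: self.hi[a] - self.lo[a])
        pts = sorted(self.pts, key=lambda p: p[axis])
        self.kids = [Node(pts[:len(pts) // 2]), Node(pts[len(pts) // 2:])]

def trace(node, rays, hits):
    # Keep only the rays in the collection that reach this node's box.
    rays = [r for r in rays if hits_box(r[0], r[1], node.lo, node.hi)]
    if not rays:
        return                                # subtree never built for these rays
    if len(node.pts) <= 2:                    # leaf: report candidate hits
        hits.extend((r, p) for r in rays for p in node.pts)
        return
    if node.kids is None:                     # per-ray-need partial build
        node.split()
    for kid in node.kids:
        trace(kid, rays, hits)

pts = [(0, 0, 5), (2, 1, 7), (-3, 0, 4), (8, 8, 8), (1, -1, 6)]
d = (2.0, 1.0, 7.0)                           # direction aimed at (2, 1, 7)
ray = ((0.0, 0.0, 0.0), tuple(1.0 / c for c in d))
hits = []
trace(Node(pts), [ray], hits)
print(f"{len(hits)} candidate point hits")
```

Subtrees that no ray in the collection reaches are never split, which is the per-ray-need partial build the abstract mentions.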
- Title
- Harmony Oriented Architecture.
- Creator
-
Martin, Kyle, Hua, Kien, Wu, Annie, Heinrich, Mark, University of Central Florida
- Abstract / Description
-
This thesis presents Harmony Oriented Architecture: a novel architectural paradigm that applies the principles of Harmony Oriented Programming to the architecture of scalable and evolvable distributed systems. It is motivated by research on Ultra Large Scale systems that has revealed inherent limitations in the human ability to design large-scale software systems, limitations that can only be overcome through radical alternatives to traditional object-oriented software engineering practice that simplify the construction of highly scalable and evolvable systems. HOP eschews encapsulation and information hiding, the core principles of object-oriented design, in favor of exposure and information sharing through a spatial abstraction. This helps to avoid the brittle interface dependencies that impede the evolution of object-oriented software. HOA extends these concepts to distributed systems, resulting in an architecture in which application components are represented by objects in a spatial database and executed in strict isolation using an embedded application server. Application components store their state entirely in the database and interact solely by diffusing data into a space for proximate components to observe. This architecture provides a high degree of decoupling, isolation, and state exposure, allowing highly scalable and evolvable applications to be built. A proof-of-concept prototype of a non-distributed HOA middleware platform supporting JavaScript application components is implemented and evaluated. Results show remarkably good performance considering that little effort was made to optimize the implementation.
- Date Issued
- 2011
- Identifier
- CFE0004480, ucf:49298
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004480
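A small sketch of the diffusion-style interaction described above, assuming a toy two-dimensional space: components never call each other; they diffuse values into the space and observe whatever proximate components have diffused. The Space class and radius query are illustrative assumptions, not the HOA middleware's API.

```python
import math

class Space:
    def __init__(self):
        self.cells = {}                       # (x, y) -> list of diffused values

    def diffuse(self, pos, value):
        """A component shares state by writing it into the space."""
        self.cells.setdefault(pos, []).append(value)

    def observe(self, pos, radius):
        """A component sees only what nearby components have diffused."""
        return [v
                for cell, values in self.cells.items()
                if math.dist(cell, pos) <= radius
                for v in values]

space = Space()
space.diffuse((0, 0), {"sensor": "temp", "reading": 21.5})
space.diffuse((1, 1), {"sensor": "temp", "reading": 22.0})
space.diffuse((9, 9), {"sensor": "temp", "reading": 30.0})   # too far to observe

# No interfaces, no method calls between components: only spatial proximity.
print(space.observe((0, 0), radius=2.0))
```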
- Title
- The Design, Implementation, and Refinement of Wait-Free Algorithms and Containers.
- Creator
-
Feldman, Steven, Dechev, Damian, Heinrich, Mark, Orooji, Ali, Mucciolo, Eduardo, University of Central Florida
- Abstract / Description
-
My research has been on the development of concurrent algorithms for shared memory systems that provide guarantees of progress. Research into such algorithms is important to developers implementing applications on mission critical and time sensitive systems. These guarantees of progress provide safety properties and freedom from many hazards, such as deadlock, livelock, and thread starvation. In addition to the safety concerns, the fine-grained synchronization used in implementing these algorithms promises to provide scalable performance in massively parallel systems. My research has resulted in the development of wait-free versions of the stack, hash map, ring buffer, and vector, and a multi-word compare-and-swap algorithm. Through this experience, I have learned and developed new techniques and methodologies for implementing non-blocking and wait-free algorithms. I have worked with and refined existing techniques to improve their practicality and applicability. In the creation of the aforementioned algorithms, I have developed an association model for use with descriptor-based operations. This model, originally developed for the multi-word compare-and-swap algorithm, has been applied to the design of the vector and ring buffer algorithms. To unify these algorithms and techniques, I have released Tervel, a wait-free library of common algorithms and containers. This library includes a framework that simplifies and improves the design of non-blocking algorithms. I have reimplemented several algorithms using this framework, and the resulting implementations exhibit less code duplication and fewer perceivable states. When reimplementing algorithms, I have adapted their Application Programming Interface (API) specification to remove ambiguity and non-deterministic behavior found when using a sequential API in a concurrent environment. To improve the performance of my algorithm implementations, I extended OVIS's Lightweight Distributed Metric Service (LDMS) data collection and transport system to support performance monitoring using the perf_event and PAPI libraries. These libraries have provided me with deeper insights into the behavior of my algorithms, and I was able to use these insights to improve the design and performance of my algorithms.
- Date Issued
- 2015
- Identifier
- CFE0005946, ucf:50813
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005946
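A toy sketch of the descriptor/announcement pattern behind many wait-free designs, including descriptor-based operations like those described above: each thread publishes a descriptor for its pending operation, and any thread passing through helps complete every descriptor it sees, so no thread's progress depends on another being scheduled. The code is sequential Python for clarity; a real implementation such as Tervel builds this from hardware atomic primitives, and every name here is illustrative.

```python
class Descriptor:
    """Announces one pending operation so any thread can complete it."""
    def __init__(self, amount):
        self.amount = amount
        self.done = False

class WaitFreeishCounter:
    def __init__(self, nthreads):
        self.value = 0
        self.announce = [None] * nthreads     # one announcement slot per thread

    def help_all(self):
        # Helping: finish every pending descriptor, not just our own.
        for desc in self.announce:
            if desc is not None and not desc.done:
                self.value += desc.amount
                desc.done = True

    def add(self, tid, amount):
        desc = Descriptor(amount)
        self.announce[tid] = desc             # publish the operation
        self.help_all()                       # help others, then ourselves
        self.announce[tid] = None             # retire the descriptor

c = WaitFreeishCounter(nthreads=2)
c.add(0, 5)
c.add(1, 7)
print(c.value)                                # 12
```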
- Title
- On the Security of NoSQL Cloud Database Services.
- Creator
-
Ahmadian, Mohammad, Marinescu, Dan, Wocjan, Pawel, Heinrich, Mark, Brennan, Joseph, University of Central Florida
- Abstract / Description
-
Processing a vast volume of data generated by web, mobile, and Internet-enabled devices necessitates a scalable and flexible data management system. Database-as-a-Service (DBaaS) is a new cloud computing paradigm, promising a cost-effective and scalable, fully-managed database functionality meeting the requirements of online data processing. Although DBaaS offers many benefits, it also introduces new threats and vulnerabilities. While many traditional data processing threats remain, DBaaS introduces new challenges such as confidentiality violation and information leakage in the presence of privileged malicious insiders, and adds a new dimension to data security. We address the problem of building a secure DBaaS for a public cloud infrastructure where the Cloud Service Provider (CSP) is not completely trusted by the data owner. We present a high-level description of several architectures combining modern cryptographic primitives for achieving this goal. A novel searchable security scheme is proposed to leverage secure query processing in the presence of a malicious cloud insider without disclosing sensitive information. A holistic database security scheme comprised of data confidentiality and information leakage prevention is proposed in this dissertation. The main contributions of our work are: (i) a searchable security scheme for non-relational databases of the cloud DBaaS; (ii) leakage minimization in the untrusted cloud. The analysis of experiments that employ a set of established cryptographic techniques to protect databases and minimize information leakage proves that the performance of the proposed solution is bounded by communication cost rather than by the cryptographic computational effort.
- Date Issued
- 2017
- Identifier
- CFE0006848, ucf:51777
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006848
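One classic searchable-encryption building block, sketched as a hedged illustration of the kind of scheme the abstract describes: the data owner indexes each record under a keyed HMAC of its keywords, so the server can answer equality searches over tokens without ever seeing plaintext keywords. This is a minimal stand-in (no payload encryption, no leakage suppression), not the dissertation's actual scheme.

```python
import hmac
import hashlib

KEY = b"client-secret-key"            # stays with the data owner, never the CSP

def token(keyword: str) -> str:
    """Deterministic search token: the server matches these, not keywords."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

index = {}                            # server side: token -> opaque records

def insert(keywords, ciphertext):
    # The payload would be encrypted client-side; bytes stand in for that here.
    for kw in keywords:
        index.setdefault(token(kw), []).append(ciphertext)

def search(keyword):
    return index.get(token(keyword), [])

insert(["diabetes", "2017"], b"<encrypted record 1>")
insert(["oncology"], b"<encrypted record 2>")
print(search("diabetes"))             # [b'<encrypted record 1>']
print(search("cardiology"))           # []
```

Deterministic tokens leak repeated-query patterns, which is exactly the kind of leakage the dissertation's minimization work targets.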
- Title
- A Compiler-based Framework for Automatic Extraction of Program Skeletons for Exascale Hardware/Software Co-design.
- Creator
-
Rudraiah Dakshinamurthy, Amruth, Dechev, Damian, Heinrich, Mark, Deo, Narsingh, University of Central Florida
- Abstract / Description
-
The design of high-performance computing architectures requires performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a "program skeleton" that we discuss in this paper is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semi-automatic approach for extracting program skeletons based on compiler program analysis. We demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator. Extracting such a program skeleton from a large-scale parallel program requires a substantial amount of manual effort and often introduces human errors. We outline a semi-automatic approach for extracting program skeletons from large-scale parallel applications that reduces cost and eliminates errors inherent in manual approaches. Our skeleton generation approach is based on the use of the extensible and open-source ROSE compiler infrastructure that allows us to perform flow and dependency analysis on larger programs in order to determine what code can be removed from the program to generate a skeleton.
- Date Issued
- 2013
- Identifier
- CFE0004743, ucf:49795
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004743
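A miniature of skeletonization, using Python's ast module as a hedged stand-in for the ROSE-based analysis: statements judged irrelevant are dropped while communication calls are kept. The relevance test here is a crude name check; the real flow and dependency analysis would, for example, also retain the definition of `local` that the kept `send` call uses.

```python
import ast
import textwrap

COMM = {"send", "recv", "barrier"}        # assumed communication API names

def relevant(stmt):
    """Crude relevance test: does the statement contain a communication call?"""
    return any(isinstance(n, ast.Call)
               and isinstance(n.func, ast.Name)
               and n.func.id in COMM
               for n in ast.walk(stmt))

class Skeletonizer(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        # Drop irrelevant statements; keep a body so the function stays valid.
        node.body = [s for s in node.body if relevant(s)] or [ast.Pass()]
        return node

src = textwrap.dedent("""
    def step(data, rank):
        local = sum(x * x for x in data)   # compute: dropped from the skeleton
        send(rank + 1, local)              # communication: kept
        total = recv(rank - 1)             # communication: kept
        log = [str(x) for x in data]       # compute: dropped
        return total
""")

tree = ast.fix_missing_locations(Skeletonizer().visit(ast.parse(src)))
print(ast.unparse(tree))                   # the communication-only skeleton
```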
- Title
- Approximate In-Memory Computing on ReRAMs.
- Creator
-
Khokhar, Salman Anwar, Heinrich, Mark, Leavens, Gary, Yuksel, Murat, Bagci, Ulas, Rahman, Talat, University of Central Florida
- Abstract / Description
-
Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques, improvement in synthesis tools, architectures, and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard Von Neumann architecture has separate data storage and data processing locations. Therefore, it suffers from a memory-processor communication bottleneck, which is commonly referred to as the 'memory wall'. The relatively slower progress in memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family reduce further, quantum tunneling effects are becoming more prominent. Simultaneously, chip transistor density is already so high that all transistors cannot be powered up at the same time without violating temperature constraints, a phenomenon characterized as dark silicon. Coupled with this, there is also an increase in leakage currents with smaller feature sizes, resulting in a breakdown of Dennard scaling. All these challenges cannot be met without fundamental changes in current computing paradigms. One viable solution is in-memory computing, where computing and storage are performed alongside each other. A number of emerging memory fabrics such as ReRAMs, STT-RAMs, and PCM RAMs are capable of performing logic in-memory. ReRAMs possess high storage density, have extremely low power consumption, and a low cost of fabrication. These advantages are due to the simple nature of their basic constituting elements, which allow nano-scale fabrication. We use flow-based computing on ReRAM crossbars for computing that exploits natural sneak paths in those crossbars. Another concurrent development in computing is the maturation of domains that are error resilient while being highly data and power intensive. These include machine learning, pattern recognition, computer vision, image processing, and networking, etc. This shift in the nature of computing workloads has given weight to the idea of "approximate computing", in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
- Date Issued
- 2019
- Identifier
- CFE0007827, ucf:52817
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007827
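A hedged sketch of flow-based computing as the abstract frames it: crosspoints are programmed to conduct or not, and a Boolean function is read out as whether current can flow (a sneak path exists) between two terminals. The tiny crossbar below computes f = (a AND b) OR c; the layout is an illustrative assumption, not a synthesized design from the dissertation.

```python
from collections import deque

def flows(edges, source, sink):
    """BFS reachability: does a conductive path connect the two terminals?"""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, []).append(u)    # current flows either way
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == sink:
            return True
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def f(a, b, c):
    # Crosspoints conduct (edges exist) only when their controlling input is 1.
    edges = []
    if a: edges.append(("src", "n1"))        # series pair implements a AND b
    if b: edges.append(("n1", "out"))
    if c: edges.append(("src", "out"))       # parallel branch implements OR c
    return flows(edges, "src", "out")

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert f(a, b, c) == bool((a and b) or c)
print("crossbar reachability matches (a AND b) OR c")
```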
- Title
- Simulation, Analysis, and Optimization of Heterogeneous CPU-GPU Systems.
- Creator
-
Giles, Christopher, Heinrich, Mark, Ewetz, Rickard, Lin, Mingjie, Pattanaik, Sumanta, Flitsiyan, Elena, University of Central Florida
- Abstract / Description
-
With the computing industry's recent adoption of the Heterogeneous System Architecture (HSA) standard, we have seen a rapid change in heterogeneous CPU-GPU processor designs. State-of-the-art heterogeneous CPU-GPU processors tightly integrate multicore CPUs and multi-compute unit GPUs together on a single die. This brings the MIMD processing capabilities of the CPU and the SIMD processing capabilities of the GPU together into a single cohesive package with new HSA features comprising better programmability, coherency between the CPU and GPU, shared Last Level Cache (LLC), and shared virtual memory address spaces. These advancements can potentially bring marked gains in heterogeneous processor performance and have piqued the interest of researchers who wish to unlock these potential performance gains. Therefore, in this dissertation I explore the heterogeneous CPU-GPU processor and application design space with the goal of answering interesting research questions, such as, (1) what are the architectural design trade-offs in heterogeneous CPU-GPU processors and (2) how do we best maximize heterogeneous CPU-GPU application performance on a given system. To enable my exploration of the heterogeneous CPU-GPU design space, I introduce a novel discrete event-driven simulation library called KnightSim and a novel computer architectural simulator called M2S-CGM. M2S-CGM includes all of the simulation elements necessary to simulate coherent execution between a CPU and GPU with shared LLC and shared virtual memory address spaces. I then utilize M2S-CGM for the conduct of three architectural studies. First, I study the architectural effects of shared LLC and CPU-GPU coherence on the overall performance of non-collaborative GPU-only applications. Second, I profile and analyze a set of collaborative CPU-GPU applications to determine how to best optimize them for maximum collaborative performance. Third, I study the impact of varying four key architectural parameters on collaborative CPU-GPU performance by varying GPU compute unit coalesce size, GPU to memory controller bandwidth, GPU frequency, and system wide switching fabric latency.
- Date Issued
- 2019
- Identifier
- CFE0007807, ucf:52346
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007807
- Title
- The Performance and Power Impact of Using Multiple DRAM Address Mapping Schemes in Multicore Processors.
- Creator
-
Jadaa, Rami, Heinrich, Mark, DeMara, Ronald, Yuan, Jiann-Shiun, University of Central Florida
- Abstract / Description
-
Lowest-level cache misses are satisfied by the main memory through a specific address mapping scheme that is hard-coded in the memory controller. A dynamic address mapping scheme technique is investigated to provide higher performance and lower power consumption, and a method to throttle memory to meet a specific power budget. Several experiments are conducted on single and multithreaded synthetic memory traces (to study extreme cases) and validate the usability of the proposed dynamic mapping scheme over the fixed one. Results show that applications' performance varies according to the mapping scheme used, and a dynamic mapping scheme achieves up to a 2x increase in peak bandwidth utilization and around 30% higher energy efficiency than a system using only a single fixed scheme. Moreover, the technique can be used to limit memory accesses to a subset of the memory devices by controlling data allocation at a finer granularity, providing a method to throttle main memory by allowing un-accessed devices to be put into power-down mode, hence saving power to meet a certain power budget.
- Date Issued
- 2011
- Identifier
- CFE0004121, ucf:49118
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004121
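A small sketch of what an address mapping scheme does, per the abstract above: the same physical address splits into row, bank, and column fields differently under different schemes, which changes how accesses spread across banks. Field widths and scheme names are illustrative assumptions.

```python
def decode(addr, scheme):
    """Split a physical address into DRAM row/bank/column fields."""
    col = addr & 0x3FF                  # 10-bit column
    addr >>= 10
    if scheme == "row-interleaved":     # bank bits just above the column
        bank = addr & 0x7               # 8 banks
        row = addr >> 3
    else:                               # "bank-high": bank bits above the row
        row = addr & 0x3FFF             # 14-bit row
        bank = (addr >> 14) & 0x7
    return {"row": row, "bank": bank, "col": col}

addr = 0x1234567
for scheme in ("row-interleaved", "bank-high"):
    print(scheme, decode(addr, scheme))

# Sequential addresses land in different banks under one scheme and the same
# bank under the other.
a, b = 0x400, 0x800                     # 1 KiB apart
print([decode(x, "row-interleaved")["bank"] for x in (a, b)])  # [1, 2]
print([decode(x, "bank-high")["bank"] for x in (a, b)])        # [0, 0]
```

Spreading sequential addresses across banks favors bandwidth; packing them into few banks leaves the other devices idle so they can be put into power-down mode, which is the throttling lever the abstract describes.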
- Title
- Online, Supervised and Unsupervised Action Localization in Videos.
- Creator
-
Soomro, Khurram, Shah, Mubarak, Heinrich, Mark, Hu, Haiyan, Bagci, Ulas, Yun, Hae-Bum, University of Central Florida
- Abstract / Description
-
Action recognition classifies a given video among a set of action labels, whereas action localization determines the location of an action in addition to its class. The overall aim of this dissertation is action localization. Many of the existing action localization approaches exhaustively search (spatially and temporally) for an action in a video. However, as the search space increases with high-resolution and longer-duration videos, it becomes impractical to use such sliding window techniques. The first part of this dissertation presents an efficient approach for localizing actions by learning contextual relations between different video regions in training. In testing, we use the context information to estimate the probability of each supervoxel belonging to the foreground action and use a Conditional Random Field (CRF) to localize actions. In the above method and typical approaches to this problem, localization is performed in an offline manner where all the video frames are processed together. This prevents timely localization and prediction of actions/interactions, an important consideration for many tasks including surveillance and human-machine interaction. Therefore, in the second part of this dissertation we propose an online approach to the challenging problem of localization and prediction of actions/interactions in videos. In this approach, we use human poses and superpixels in each frame to train discriminative appearance models and perform online prediction of actions/interactions with a Structural SVM. The above two approaches rely on human supervision in the form of assigning action class labels to videos and annotating actor bounding boxes in each frame of training videos. Therefore, in the third part of this dissertation we address the problem of unsupervised action localization. Given unlabeled videos without annotations, this approach aims at: 1) discovering action classes using a discriminative clustering approach, and 2) localizing actions using a variant of the Knapsack problem.
- Date Issued
- 2017
- Identifier
- CFE0006917, ucf:51685
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006917