Current Search: computer
Pages
-
-
Title
-
Functional Scaffolding for Musical Composition: A New Approach in Computer-Assisted Music Composition.
-
Creator
-
Hoover, Amy, Stanley, Kenneth, Wu, Annie, Laviola II, Joseph, Anderson, Thaddeus, University of Central Florida
-
Abstract / Description
-
While it is important for systems intended to enhance musical creativity to define and explore musical ideas conceived by individual users, many limit musical freedom by focusing on maintaining musical structure, thereby impeding the user's freedom to explore his or her individual style. This dissertation presents a comprehensive body of work that introduces a new musical representation that allows users to explore a space of musical rules that are created from their own melodies. This representation, called functional scaffolding for musical composition (FSMC), exploits a simple yet powerful property of multipart compositions: the patterns of notes and rhythms in different instrumental parts of the same song are functionally related. That is, in principle, one part can be expressed as a function of another. Music in FSMC is represented accordingly as a functional relationship between an existing human composition, or scaffold, and an additional generated voice. This relationship is encoded by a type of artificial neural network called a compositional pattern producing network (CPPN). A human user without any musical expertise can then explore how these additional generated voices should relate to the scaffold through an interactive evolutionary process akin to animal breeding. The utility of this insight is validated by two implementations of FSMC called NEAT Drummer and MaestroGenesis, which respectively help users tailor drum patterns and complete multipart arrangements from as little as a single original monophonic track. The five major contributions of this work address the overarching hypothesis in this dissertation that functional relationships alone, rather than specialized music theory, are sufficient for generating plausible additional voices. First, to validate FSMC and determine whether plausible generated voices result from the human-composed scaffold or intrinsic properties of the CPPN, drum patterns are created with NEAT Drummer to accompany several different polyphonic pieces. Extending the FSMC approach to generate pitched voices, the second contribution reinforces the importance of functional transformations through quality assessments that indicate that some partially FSMC-generated pieces are indistinguishable from those that are fully human. While the third contribution focuses on constructing and exploring a space of plausible voices with MaestroGenesis, the fourth presents results from a two-year study where students discuss their creative experience with the program. Finally, the fifth contribution is a plugin for MaestroGenesis called MaestroGenesis Voice (MG-V) that provides users a more natural way to incorporate MaestroGenesis in their creative endeavors by allowing scaffold creation through the human voice. Together, the chapters in this dissertation constitute a comprehensive approach to assisted music generation, enabling creativity without the need for musical expertise.
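As a minimal illustration of the functional-scaffolding idea described in this abstract (one voice computed as a function of an existing scaffold part), the sketch below maps scaffold pitches through a tiny fixed network. The weights, activation choices, and pitch ranges are placeholders; this is not the dissertation's CPPN/NEAT implementation.

```python
# Minimal sketch (not the dissertation's actual CPPN/NEAT code): it only
# illustrates the core FSMC idea that a generated voice is a function of
# an existing scaffold part. Weights here are fixed and arbitrary.
import math

def cppn_like(t, scaffold_pitch, weights=(0.8, -0.3, 0.5)):
    """Map (time, scaffold pitch) to an output value in [-1, 1]."""
    w_t, w_p, bias = weights
    # CPPNs compose simple activation functions; a sine + tanh pair is
    # used here purely for illustration.
    hidden = math.sin(w_t * t + w_p * scaffold_pitch)
    return math.tanh(hidden + bias)

def generate_voice(scaffold, low=48, high=72):
    """Turn network outputs into MIDI-like pitches accompanying the scaffold."""
    voice = []
    for t, pitch in enumerate(scaffold):
        y = cppn_like(t, pitch / 127.0)          # normalize pitch to [0, 1]
        voice.append(int(low + (y + 1) / 2 * (high - low)))
    return voice

if __name__ == "__main__":
    scaffold = [60, 62, 64, 65, 67, 65, 64, 62]  # a simple monophonic line
    print(generate_voice(scaffold))
```

In the actual system, the network's weights and topology are what the user evolves interactively; here they are simply hard-coded to keep the example self-contained.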
-
Date Issued
-
2014
-
Identifier
-
CFE0005350, ucf:50495
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005350
-
-
Title
-
AR Physics: Transforming physics diagrammatic representations on paper into interactive simulations.
-
Creator
-
Zhou, Yao, Underberg-Goode, Natalie, Lindgren, Robb, Moshell, Jack, Peters, Philip, University of Central Florida
-
Abstract / Description
-
A problem representation is a cognitive structure created by the solver in correspondence to the problem. Sketching representative diagrams in the domain of physics encourages a problem-solving strategy that starts from 'envisionment', by which one internally simulates the physical events and predicts outcomes. Research studies also show that sketching representative diagrams improves learners' performance in solving physics problems. The pedagogic benefits of sketching representations on paper make this traditional learning strategy remain pivotal and worth preserving and integrating into the current digital learning landscape. In this paper, I describe AR Physics, an Augmented Reality based application that intends to facilitate one's learning of physics concepts about objects' linear motion. It affords the verified physics learning strategy of sketching representative diagrams on paper, and explores the capability of Augmented Reality in enhancing visual conceptions. The application converts the diagrams drawn on paper into virtual representations displayed on a tablet screen. As such, learners can create physics simulations based on the diagrams and test their 'envisionment' for the diagrams. Users' interaction with AR Physics consists of three steps: 1) sketching a diagram on paper; 2) capturing the sketch with a tablet camera to generate a virtual duplication of the diagram on the tablet screen; and 3) placing a physics object and configuring relevant parameters through the application interface to construct a physics simulation. A user study of the efficiency and usability of AR Physics was performed with 12 college students. The students interacted with the application and completed three tasks relevant to the learning material. They were given eight questions afterwards to examine their post-learning outcome. The same questions were also given prior to the use of the application in order to compare with the post results. The System Usability Scale (SUS) was adopted to assess the application's usability, and interviews were conducted to collect subjects' opinions about Augmented Reality in general. The results of the study demonstrate that the application can effectively facilitate subjects' understanding of the target physics concepts. The overall satisfaction with the application's usability was disclosed by the SUS score. Finally, subjects expressed that they gained a clearer idea about Augmented Reality through the use of the application.
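The study reports a System Usability Scale (SUS) score. For reference, the standard SUS scoring rule is sketched below; the responses used are made up and have nothing to do with the study's data.

```python
# Standard System Usability Scale (SUS) scoring, shown only to illustrate
# the usability metric the study reports; the responses below are made up.
def sus_score(responses):
    """responses: ten Likert answers (1-5), item 1 through item 10."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd vs even items
    return total * 2.5  # scale to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # hypothetical participant -> 85.0
```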
-
Date Issued
-
2014
-
Identifier
-
CFE0005566, ucf:50292
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005566
-
-
Title
-
Curvelets and the Radon Transform.
-
Creator
-
Dickerson, Jill, Katsevich, Alexander, Tamasan, Alexandru, Moore, Brian, University of Central Florida
-
Abstract / Description
-
Computed Tomography (CT) is the standard in the medical imaging field. In this study, we look at the curvelet transform in an attempt to use it as a basis for representing a function. In doing so, we seek a way to reconstruct a function from the Radon data that may produce clearer results. Using curvelet decomposition, any known function can be represented as a sum of curvelets with corresponding coefficients. It can be shown that these corresponding coefficients can be found using the Radon data, even if the function is unknown. The use of curvelets has the potential to solve partial or truncated Radon data problems. As a result, using a curvelet representation to invert Radon data allows the chance of higher-quality images to be produced. This paper examines this method of reconstruction for computed tomography (CT). A brief history of CT, an introduction to the theory behind the method, and implementation details will be provided.
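For reference, the objects involved can be stated with standard definitions (generic notation, not the dissertation's specific derivation): the Radon transform of a function f and the curvelet expansion whose coefficients the method recovers from the Radon data.

```latex
% Standard definitions only; the frame notation (\varphi_{\mu}, c_{\mu}) is generic.
\begin{align}
  (\mathcal{R}f)(\theta, s) &= \int_{\mathbb{R}^2} f(x)\,
      \delta\!\left(s - x\cdot\theta\right)\,dx,
      \qquad \theta \in S^1,\ s \in \mathbb{R},\\
  f &= \sum_{\mu} c_{\mu}\,\varphi_{\mu},
      \qquad c_{\mu} = \langle f, \varphi_{\mu}\rangle,
\end{align}
```

where the functions φ_μ are curvelets; the point made in the abstract is that the coefficients c_μ can be computed from the Radon data Rf even when f itself is unknown.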
-
Date Issued
-
2013
-
Identifier
-
CFE0004674, ucf:49852
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004674
-
-
Title
-
Meshless Direct Numerical Simulation of Turbulent Incompressible Flows.
-
Creator
-
Vidal Urbina, Andres, Kassab, Alain, Kumar, Ranganathan, Ilegbusi, Olusegun, Divo, Eduardo, University of Central Florida
-
Abstract / Description
-
A meshless direct pressure-velocity coupling procedure is presented to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of turbulent incompressible flows in regular and irregular geometries. The proposed method is a combination of several efficient techniques found in different Computational Fluid Dynamics (CFD) procedures and is a major improvement of the algorithm published in 2007 by this author. This new procedure has very low numerical diffusion, and some preliminary calculations with 2D steady-state flows show that viscous effects become negligible faster than ever predicted numerically.

The fundamental idea of this proposal rests on several important inconsistencies found in three of the most popular techniques used in CFD: segregated procedures, the streamline-vorticity formulation for 2D viscous flows, and the fractional-step method, very popular in DNS/LES. The inconsistencies found become important in elliptic flows, and they might lead to wrong solutions if coarse grids are used. In all methods studied, the mathematical basis was found to be correct in most cases, but inconsistencies were found when writing the boundary conditions. In all methods analyzed, it was found that it is basically impossible to satisfy the exact set of boundary conditions, and all formulations use a reduced set, valid for parabolic flows only. For example, for segregated methods, the boundary condition of zero normal pressure derivative is valid only in parabolic flows. Additionally, the complete proposal for mass balance correction is right exclusively for parabolic flows. In the streamline-vorticity formulation, the boundary conditions normally used for the streamline function violate the no-slip condition for viscous flow. Finally, in the fractional-step method, the boundary condition for pseudo-velocity implies a zero normal pressure derivative at the wall (correct in parabolic flows only) and, when the flow reaches steady state, the procedure does not guarantee mass balance.

The proposed procedure is validated in two cases of 2D flow in steady state, the backward-facing step and the lid-driven cavity. Comparisons are performed with experiments, and excellent agreement was obtained in the solutions that were free from numerical instabilities. A study on grid usage is done. It was found that if the discretized equations are written in terms of a local Reynolds number, a strong criterion can be developed to determine, in advance, the grid requirements for any fluid flow calculation. The 2D-DNS on parallel plates is presented to study the basic features present in the simulation of any turbulent flow. Calculations were performed on a short geometry, using a uniform and very fine grid to avoid any numerical instability. Inflow conditions were white noise and high-frequency oscillations. Results suggest that, if no numerical instability is present, inflow conditions alone are not enough to permanently sustain the turbulent regime. Finally, the 2D-DNS on a backward-facing step is studied. Expansion ratios of 1.14 and 1.40 are used, and calculations are performed in the transitional regime. Inflow conditions were white noise and high-frequency oscillations. In general, good agreement is found on most variables when comparing with experimental data.
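The abstract mentions a grid criterion expressed through a local Reynolds number. As an example of the kind of bound meant (a standard cell-Reynolds-number condition, not necessarily the dissertation's own criterion or constant):

```latex
% A standard cell-Reynolds-number bound, shown only as an example of the
% kind of criterion the abstract refers to; the dissertation's criterion
% may differ in form and constant.
\begin{equation}
  \mathrm{Re}_h \;=\; \frac{|u|\,h}{\nu} \;\lesssim\; 2,
\end{equation}
```

where h is the local point spacing, u the local velocity, and ν the kinematic viscosity. Violating such a bound with centered discretizations typically produces spurious oscillations, so the local spacing h can be chosen in advance from the expected velocity field.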
-
Date Issued
-
2015
-
Identifier
-
CFE0005733, ucf:50148
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005733
-
-
Title
-
A FRAMEWORK FOR EFFICIENT DATA DISTRIBUTION IN PEER-TO-PEER NETWORKS.
-
Creator
-
Purandare, Darshan, Guha, Ratan, University of Central Florida
-
Abstract / Description
-
Peer-to-Peer (P2P) models are based on user altruism, wherein a user shares its content with other users in the pool and also has an interest in the content of the other nodes. Most P2P systems in their current form are not fair in terms of the content served by a peer and the service obtained from the swarm. Most systems suffer from the free-rider problem, where many high-uplink-capacity peers contribute much more than they should while many others get a free ride for downloading the content. This leaves high-capacity nodes with very little or no motivation to contribute. Many times such resourceful nodes exit the swarm or don't even participate. The whole scenario is unfavorable and disappointing for P2P networks in general, where participation is a must and a very important feature. As the number of users increases in the swarm, the swarm becomes robust and scalable. Other important issues in present-day P2P systems are below-optimal Quality of Service (QoS) in terms of download time, end-to-end latency and jitter rate, uplink utilization, excessive cross-ISP traffic, and security and cheating threats. These current-day problems in P2P networks serve as the motivation for the present work.

To this end, we present an efficient data distribution framework in Peer-to-Peer (P2P) networks for the media streaming and file sharing domains. The experiments with our model, an alliance-based peering scheme for media streaming, show that such a scheme distributes data to the swarm members in a near-optimal way. Alliances are small groups of nodes that share data and other vital information for symbiotic association. We show that alliance formation is a loosely coupled and effective way to organize the peers, and that our model maps to a small-world network, which forms efficient overlay structures and is robust to network perturbations such as churn. We present a comparative simulation-based study of our model with CoolStreaming/DONet (a popular model) and present a quantitative performance evaluation. Simulation results show that our model scales well under varying workloads and conditions, delivers near-optimal levels of QoS, reduces cross-ISP traffic considerably, and for most cases performs at par with or even better than CoolStreaming/DONet.

In the next phase of our work, we focused on the BitTorrent P2P model as it is the most widely used file sharing protocol. Many studies in academia and industry have shown that though BitTorrent scales very well, it is far from optimal in terms of fairness to end users, download time, and uplink utilization. Furthermore, random peering and data distribution in such a model lead to suboptimal performance. Lately, a new breed of BitTorrent clients like BitTyrant has shown successful strategic attacks against BitTorrent. Strategic peers configure the BitTorrent client software such that for very little or no contribution, they can obtain good download speeds. Such strategic nodes exploit the altruism in the swarm, consume resources at the expense of other honest nodes, and create an unfair swarm. More unfairness is generated in the swarm with the presence of heterogeneous-bandwidth nodes. We investigate and propose a new token-based anti-strategic policy that could be used in BitTorrent to minimize free-riding by strategic clients. We also propose other policies against strategic attacks, including using a smart tracker that denies the requests of strategic clients for the peer list multiple times, and blacklisting non-behaving nodes that do not follow the protocol policies. These policies help to stop the strategic behavior of peers to a large extent and improve overall system performance. We also quantify and validate the benefits of using a bandwidth peer-matching policy. Our simulation results show that with the above proposed changes, uplink utilization and mean download time in a BitTorrent network improve considerably. This leaves strategic clients with little or no incentive to behave greedily, reduces free riding, and creates a fairer swarm with very little computational overhead. Finally, we show that our model is a self-healing model where user behavior changes from selfish to altruistic in the presence of the aforementioned policies.
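The abstract does not spell out the mechanics of the token-based anti-strategic policy, so the sketch below is a hypothetical illustration of the general idea (peers earn tokens by uploading and must hold enough tokens, beyond a small bootstrap allowance, to be served); the class, thresholds, and bookkeeping are assumptions, not the dissertation's policy.

```python
# Hypothetical illustration of a token-based anti-free-riding check; the
# dissertation's actual policy, thresholds, and bookkeeping are not
# specified in the abstract, so everything below is an assumption.
class TokenLedger:
    def __init__(self, earn_per_mb=1.0, spend_per_mb=1.0, grace_tokens=50.0):
        self.earn_per_mb = earn_per_mb
        self.spend_per_mb = spend_per_mb
        self.balance = {}            # peer_id -> tokens
        self.grace = grace_tokens    # allowance so new peers can bootstrap

    def record_upload(self, peer_id, megabytes):
        self.balance[peer_id] = self.balance.get(peer_id, 0.0) + megabytes * self.earn_per_mb

    def may_download(self, peer_id, megabytes):
        """Allow a download request only if earned tokens (plus grace) cover it."""
        have = self.balance.get(peer_id, 0.0) + self.grace
        return have >= megabytes * self.spend_per_mb

    def record_download(self, peer_id, megabytes):
        self.balance[peer_id] = self.balance.get(peer_id, 0.0) - megabytes * self.spend_per_mb

ledger = TokenLedger()
ledger.record_upload("peer-A", 120)        # contributing peer
print(ledger.may_download("peer-A", 100))  # True
print(ledger.may_download("peer-B", 100))  # False: free rider beyond grace
```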
-
Date Issued
-
2008
-
Identifier
-
CFE0002260, ucf:47864
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002260
-
-
Title
-
Approximate In-memory computing on RERAMs.
-
Creator
-
Khokhar, Salman Anwar, Heinrich, Mark, Leavens, Gary, Yuksel, Murat, Bagci, Ulas, Rahman, Talat, University of Central Florida
-
Abstract / Description
-
Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques, improvement in synthesis tools, architectures, and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard von Neumann architecture has separate data storage and data processing locations. Therefore, it suffers from a memory-processor communication bottleneck, which is commonly referred to as the 'memory wall'. The relatively slower progress in memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family reduce further, quantum tunneling effects are becoming more prominent. Simultaneously, chip transistor density is already so high that all transistors cannot be powered up at the same time without violating temperature constraints, a phenomenon characterized as dark silicon. Coupled with this, there is also an increase in leakage currents with smaller feature sizes, resulting in a breakdown of Dennard scaling. All these challenges cannot be met without fundamental changes in current computing paradigms. One viable solution is in-memory computing, where computing and storage are performed alongside each other. A number of emerging memory fabrics such as ReRAMs, STT-RAMs, and PCM RAMs are capable of performing logic in memory. ReRAMs possess high storage density, have extremely low power consumption, and have a low cost of fabrication. These advantages are due to the simple nature of their basic constituent elements, which allows nano-scale fabrication. We use flow-based computing on ReRAM crossbars for computing that exploits natural sneak paths in those crossbars. Another concurrent development in computing is the maturation of domains that are error-resilient while being highly data- and power-intensive. These include machine learning, pattern recognition, computer vision, image processing, networking, etc. This shift in the nature of computing workloads has given weight to the idea of "approximate computing", in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
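A rough sketch of the flow-based evaluation idea mentioned above: the crossbar is treated as a bipartite graph of rows and columns connected wherever a device is in its low-resistance state, and the computed Boolean value is 1 exactly when a conducting path exists between a designated source and sink. How Boolean literals are mapped onto specific devices is the dissertation's contribution and is not reproduced here; the crossbar contents below are arbitrary.

```python
# Hedged sketch of flow-based crossbar evaluation: a function evaluates to 1
# iff a conducting path exists from a source row to a sink row through ON
# devices.  The literal-to-device mapping is assumed away; `example` is
# an arbitrary crossbar state used only to exercise the search.
from collections import deque

def conducts(crossbar, source_row, sink_row):
    """crossbar[i][j] is True if the device at row i, column j is ON."""
    rows, cols = len(crossbar), len(crossbar[0])
    seen_r, seen_c = {source_row}, set()
    queue = deque([("r", source_row)])
    while queue:
        kind, idx = queue.popleft()
        if kind == "r":
            for j in range(cols):
                if crossbar[idx][j] and j not in seen_c:
                    seen_c.add(j)
                    queue.append(("c", j))
        else:
            for i in range(rows):
                if crossbar[i][idx] and i not in seen_r:
                    if i == sink_row:
                        return True
                    seen_r.add(i)
                    queue.append(("r", i))
    return sink_row in seen_r

example = [[True, False, False],
           [True, True,  False],
           [False, True, True]]
print(conducts(example, source_row=0, sink_row=2))  # True, via columns 0 and 1
```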
-
Date Issued
-
2019
-
Identifier
-
CFE0007827, ucf:52817
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007827
-
-
Title
-
On the design and performance of cognitive packets over wired networks and mobile ad hoc networks.
-
Creator
-
Lent, Marino Ricardo, Gelenbe, Erol, Engineering and Computer Science
-
Abstract / Description
-
University of Central Florida College of Engineering Thesis; This dissertation studied cognitive packet networks (CPN), which build networked learning systems that support adaptive, quality-of-service-driven routing of packets in wired networks and in wireless, mobile ad hoc networks.
-
Date Issued
-
2003
-
Identifier
-
CFR0001374, ucf:52931
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0001374
-
-
Title
-
A unified approach to dynamic modeling of high switching frequency pwm converters.
-
Creator
-
Iannello, Christopher J., Batarseh, Issa, Engineering
-
Abstract / Description
-
University of Central Florida College of Engineering Thesis; This dissertation will present the development of a unified approach for dynamic modeling of the PWM and soft-switching power converters. Dynamic modeling of non-linear power converters is very important for the design and stability of their closed-loop control. While the use of equivalent circuits is often preferred due to simulation efficiency issues, no unified and widely applicable method for the formulation of these equivalents exists.
-
Date Issued
-
2001
-
Identifier
-
CFR0000833, ucf:52929
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFR0000833
-
-
Title
-
IMPROVING PERFORMANCE AND PROGRAMMER PRODUCTIVITY FOR I/O-INTENSIVE HIGH PERFORMANCE COMPUTING APPLICATIONS.
-
Creator
-
Sehrish, Saba, Wang, Jun, University of Central Florida
-
Abstract / Description
-
Due to the explosive growth in the size of scientific data sets, data-intensive computing is an emerging trend in computational science. HPC applications are generating and processing large amounts of data ranging from terabytes (TB) to petabytes (PB). This new trend of growth in data for HPC applications has imposed challenges as to what is an appropriate parallel programming framework to efficiently process large data sets. In this work, we study the applicability of two programming models (MPI/MPI-IO and MapReduce) to a variety of I/O-intensive HPC applications ranging from simulations to analytics. We identify several performance- and programmer-productivity-related limitations of these existing programming models when used for I/O-intensive applications. We propose new frameworks that improve both performance and programmer productivity for the emerging I/O-intensive applications.

Message Passing Interface (MPI) is widely used for writing HPC applications. MPI/MPI-IO allows fine-grained control of data assignment and task distribution. At the programming-framework level, various optimizations have been proposed to improve the performance of MPI/MPI-IO function calls. These performance optimizations are provided as various function options to the programmers. In order to write efficient code, they are required to know the exact usage of the optimization functions; hence programmer productivity is limited. We propose an abstraction called Reduced Function Set Abstraction (RFSA) for MPI-IO to reduce the number of I/O functions and provide methods to automate the selection of the appropriate I/O function for writing HPC simulation applications. The purpose of RFSA is to hide the performance optimization functions from the application developer and relieve the application developer from deciding on a specific function. The proposed set of functions relies on a selection algorithm to decide among the most common optimizations provided by MPI-IO.

Additionally, many application scientists are looking to integrate data-intensive computing into computation-intensive High Performance Computing facilities, particularly for data analytics. We have observed several scientific applications which must migrate their data from an HPC storage system to a data-intensive one. There is a gap between the data semantics of HPC storage and data-intensive systems; hence, once migrated, the data must be further refined and reorganized. This reorganization must be performed before existing data-intensive tools such as MapReduce can be effectively used to analyze the data. This reorganization requires at least two complete scans through the data set and then at least one MapReduce program to prepare the data before analyzing it. Running multiple MapReduce phases causes significant overhead for the application in the form of excessive I/O operations. For every MapReduce application that must be run in order to complete the desired data analysis, a distributed read and write operation on the file system must be performed. Our contribution is to extend MapReduce to eliminate the multiple scans and also reduce the number of pre-processing MapReduce programs. We have added additional expressiveness to the MapReduce language in our novel framework called MapReduce with Access Patterns (MRAP), which allows users to specify the logical semantics of their data such that 1) the data can be analyzed without running multiple data pre-processing MapReduce programs, and 2) the data can be simultaneously reorganized as it is migrated to the data-intensive file system. We also provide a scheduling mechanism to further improve the performance of these applications.

The main contributions of this thesis are: 1) We implement a selection algorithm for I/O functions like read/write, merge a set of functions for data types and file views, and optimize the atomicity function by automating the locking mechanism in RFSA. By running different parallel I/O benchmarks on both medium-scale clusters and NERSC supercomputers, we show improved programmer productivity (35.7% on average). This approach incurs an overhead of 2-5% for one particular optimization, and shows a performance improvement of 17% when a combination of different optimizations is required by an application. 2) We provide an augmented MapReduce system (MRAP), which consists of an API and corresponding optimizations, i.e., data restructuring and scheduling. We have demonstrated up to 33% throughput improvement in one real application (read-mapping in bioinformatics), and up to 70% in an I/O kernel of another application (halo catalog analytics). Our scheduling scheme shows a performance improvement of 18% for an I/O kernel of another application (QCD analytics).
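To make the RFSA idea concrete, the sketch below exposes a single write entry point that picks an MPI-IO optimization internally, illustrated with mpi4py. The real RFSA selection algorithm is not described in the abstract; the contiguity heuristic and the function name rfsa_write are assumptions used only for illustration.

```python
# Hypothetical sketch of the RFSA idea: expose one write call and pick the
# underlying MPI-IO optimization internally.  The selection rule below
# (contiguous across ranks -> collective write) is an assumption, not the
# dissertation's algorithm.
from mpi4py import MPI
import numpy as np

def rfsa_write(comm, filename, offset, data, contiguous_across_ranks=True):
    """Single entry point; chooses collective vs. independent MPI-IO."""
    fh = MPI.File.Open(comm, filename, MPI.MODE_WRONLY | MPI.MODE_CREATE)
    try:
        if contiguous_across_ranks:
            fh.Write_at_all(offset, data)   # collective write, usually faster
        else:
            fh.Write_at(offset, data)       # independent write
    finally:
        fh.Close()

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    chunk = np.full(1024, comm.Get_rank(), dtype=np.int32)
    rfsa_write(comm, "rfsa_demo.dat", comm.Get_rank() * chunk.nbytes, chunk)
```

Run under an MPI launcher (for example, mpiexec -n 4 python rfsa_demo.py); each rank writes its own contiguous chunk at a rank-dependent offset.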
-
Date Issued
-
2010
-
Identifier
-
CFE0003236, ucf:48560
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003236
-
-
Title
-
INTEGRATION OF COMPUTER-BASED VIRTUAL CHECK RIDE SYSTEM PRE-TRIP INSPECTION IN COMMERCIAL DRIVER LICENSE TRAINING PROGRAM.
-
Creator
-
Makwana, Alpesh, Kincaid, Peter, University of Central Florida
-
Abstract / Description
-
Pre-Trip Inspection of the truck and trailer is one of the components of the current Commercial Driver's License (CDL) test. This part of the CDL test checks the ability of the student to identify the important parts of the commercial vehicle and their potential defects. The Virtual Check Ride System (VCRS), a computer-based application, is an assessment and feedback tool that mirrors the inspection component of the actual CDL test. The VCRS provides an after action review (AAR) via a feedback session that helps in identifying and correcting drivers' skill in inspecting parts and in overall safety. The purpose of this research is to determine the effectiveness of the VCRS in truck driving training programs. An experimental study was conducted with truck driving students at Mid Florida Tech, located in Orlando, Florida. The students were divided into control and experimental groups. Students in both groups received the regular training provided by Mid Florida Tech. The experimental group received additional training by making use of the VCRS. A total of three paper-based tests were given to all subjects during the first three weeks, one test at the end of each week. Both groups were given the same paper-based tests. A two-way analysis of variance was conducted to evaluate the effect of the VCRS in the experimental group. This analysis found a significant difference between the control and experimental groups. This effect showed that the students in the experimental group increased their performance by using the VCRS. Moreover, there was a main effect in the scores of each week. However, there was no interaction between the two factors. Follow-up post hoc tests were conducted to evaluate the pair-wise differences among the means of the test-week factor using a Tukey HSD test. These post hoc comparisons indicated that the mean score for the third week's test was significantly better than the first week's test score in the experimental group. It was concluded that the VCRS facilitated learning for the experimental group and that learning also occurred for both groups as a result of repeated testing.
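The analysis described (a two-way ANOVA with a Tukey HSD follow-up) can be sketched as follows; the scores generated below are made up, and only the procedure mirrors the abstract, not the study's data or results.

```python
# Sketch of the reported analysis: two-way ANOVA (group x week) followed by
# a Tukey HSD post hoc comparison.  All numbers below are synthetic.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
rows = []
for group in ["control", "experimental"]:
    for week in [1, 2, 3]:
        for _ in range(8):                       # 8 hypothetical students per cell
            base = 70 + 3 * week + (5 if group == "experimental" else 0)
            rows.append({"group": group, "week": week,
                         "score": base + rng.normal(0, 4)})
df = pd.DataFrame(rows)

model = ols("score ~ C(group) * C(week)", data=df).fit()
print(anova_lm(model, typ=2))                      # main effects and interaction
print(pairwise_tukeyhsd(df["score"], df["week"]))  # pairwise week comparisons
```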
-
Date Issued
-
2009
-
Identifier
-
CFE0002926, ucf:47992
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002926
-
-
Title
-
DUPLICATED LABORATORY TESTS: A HOSPITAL AUDIT AND EVALUATION OF A COMPUTERIZED ALERT INTERVENTION.
-
Creator
-
Bridges, Sharon, Norris, Anne, University of Central Florida
-
Abstract / Description
-
Background: Laboratory testing is necessary when it contributes to the overall clinical management of the patient. Redundant testing, however, is often unnecessary and expensive and contributes to overall reductions in healthcare system efficiency. The purpose of this study is two-fold: first, to evaluate the frequency of ordering duplicate laboratory tests in hospitalized patients and the costs associated with this practice; second, to determine if the use of a computerized alert or prompt will reduce the total number of unnecessarily duplicated Acute Hepatitis Profile (AHP) laboratory tests.

Methods: This two-phase study took place in an inpatient facility that was part of a large tertiary care hospital system in Florida. A retrospective descriptive design was used during Phase 1 to evaluate six laboratory tests, the frequency of ordering duplicate laboratory tests in hospitalized patients, and the associated costs of this practice for a 12-month time period in 2010. A test was considered a duplicate or an unnecessarily repeated test if it followed a previous test of the same type during the patient's length of stay in the hospital and was one in which any change in values would likely not be clinically significant. A quasi-experimental pre- and post-test design was used during Phase 2 to determine the proportion of duplication of the AHP test before and after the implementation of a computerized alert intervention implemented as part of a system quality improvement process on January 5th, 2011. Data were compared for two 3-month time periods, pre- and post-alert implementation. The AHP test was considered redundant if it followed a previous test of the same type within 15 days of the initial test being final and present in the medical record.

Results: In Phase 1, across the six tests examined, a total of 53,351 tests were ordered, with 10,375 (19.4%) of these cancelled. Of the total number of finalized tests (n = 42,976) across the six tests examined, 4.6-8.7% were redundant. The proportions of duplication of the six selected tests are as follows: AHP 196/2514 (7.8%), Antinuclear Antibody (ANA) 120/2594 (4.6%), B12/Folate level 396/5874 (6.7%), Thyroid Stimulating Hormone (TSH) 1893/21595 (8.7%), Ferritin 384/5171 (7.4%), and Iron/Total Iron Binding Capacity (TIBC) 316/5155 (6.1%). The overall associated yearly cost of redundant testing of these six selected tests was an estimated $419,218. The largest proportion of redundant tests was the Thyroid Stimulating Hormone level, costing a yearly estimated $300,987. In Phase 2, prior to introduction of the alert, 674 AHP tests were performed; of these, 53 (7.9%) were redundant. During the intervention period, 692 AHP tests were performed; of these, 18 (2.6%) were redundant. The implementation of the computerized alert was shown to significantly reduce the proportion of redundant AHP tests (chi-square test, df = 1, p ≤ 0.001). The associated costs of duplicated AHP tests were $5,238 in 2010 compared to $1,746 in 2011 post-alert, and these differences were significant (Mann-Whitney U, Z = -4.04, p ≤ 0.001).

Conclusion: Although the proportions of unnecessarily repeated diagnostic tests observed during Phase 1 of this study were small, the associated costs could adversely affect hospital revenue and overall healthcare efficiency. The implementation of the AHP computerized alert demonstrated a drop in the proportion of redundant AHP tests and subsequent associated cost savings. Further research is needed to evaluate computerized alerts on other tests with evidence-based, test-specific time intervals, and to determine whether such reductions post-implementation of AHP alerts are sustained over time.
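The pre/post comparison reported above can be reproduced from the counts given in the abstract (53 of 674 redundant AHP tests before the alert, 18 of 692 after) with a standard chi-square test of independence; this is an illustration, not the study's original analysis code.

```python
# Recomputes the 2x2 comparison reported in the abstract with a standard
# chi-square test of independence.
from scipy.stats import chi2_contingency

pre  = [53, 674 - 53]    # [redundant, non-redundant] before the alert
post = [18, 692 - 18]    # [redundant, non-redundant] after the alert

chi2, p, dof, expected = chi2_contingency([pre, post])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
print(f"pre-alert rate  = {pre[0] / sum(pre):.1%}")    # ~7.9%
print(f"post-alert rate = {post[0] / sum(post):.1%}")  # ~2.6%
```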
-
Date Issued
-
2011
-
Identifier
-
CFE0003934, ucf:48701
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003934
-
-
Title
-
Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics.
-
Creator
-
Alawad, Mohammed, Lin, Mingjie, DeMara, Ronald, Mikhael, Wasfy, Wang, Jun, Das, Tuhin, University of Central Florida
-
Abstract / Description
-
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency if the traditional "deterministic and precise" computing model still dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from uncertain, imperfect real-world environments. As such, the traditional computing means of first-principle modeling or explicit statistical modeling will very likely be ineffective at achieving flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing (that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra) has to be reexamined, and transformative changes in the foundation of modern computing must be made.

This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simplified circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded with a large ensemble of random samples. As such, local perturbations of its computing accuracy will be dissipated globally, thus becoming inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable-precision systems, which provides an elegant way to trade off between computing accuracy and computing performance/hardware efficiency for many real-world applications.

To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented on FPGA based on a novel stochastic-based and scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation, but can also solve several challenges in conventional CNNs, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well-suited for a modular vision engine with the goal of performing real-time detection, recognition, and segmentation of mega-pixel images, especially those perception-based computing tasks that are inherently fault-tolerant.
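As a generic illustration of the stochastic-computing principle underlying this line of work (not the dissertation's PDT pipeline), the classic example encodes values in [0, 1] as Bernoulli bit-streams and multiplies them with a single AND gate:

```python
# Generic stochastic-computing illustration, not the dissertation's PDT
# pipeline: values in [0, 1] are encoded as Bernoulli bit-streams, and a
# bitwise AND of independent streams estimates their product.
import numpy as np

rng = np.random.default_rng(42)

def encode(value, n_bits):
    """Unipolar encoding: each bit is 1 with probability `value`."""
    return rng.random(n_bits) < value

def decode(bits):
    return bits.mean()

n = 100_000
a, b = 0.6, 0.3
product_stream = encode(a, n) & encode(b, n)   # an AND gate acts as a multiplier
print(decode(product_stream))                  # ~0.18, a noisy estimate of a*b
```

The accuracy/cost trade-off the abstract mentions shows up directly here: longer bit-streams (larger n) give lower estimation noise at the price of more time or hardware.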
-
Date Issued
-
2016
-
Identifier
-
CFE0006828, ucf:51768
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006828
-
-
Title
-
Human Detection, Tracking and Segmentation in Surveillance Video.
-
Creator
-
Shu, Guang, Shah, Mubarak, Boloni, Ladislau, Wang, Jun, Lin, Mingjie, Sugaya, Kiminobu, University of Central Florida
-
Abstract / Description
-
This dissertation addresses the problem of human detection and tracking in surveillance videos. Even though this is a well-explored topic, many challenges remain when confronted with data from real-world situations. These challenges include appearance variation, illumination changes, camera motion, cluttered scenes, and occlusion. In this dissertation, several novel methods for improving on the current state of human detection and tracking, based on learning scene-specific information in video feeds, are proposed.

Firstly, we propose a novel method for human detection which employs unsupervised learning and superpixel segmentation. The performance of generic human detectors is usually degraded in unconstrained video environments due to varying lighting conditions, backgrounds, and camera viewpoints. To handle this problem, we employ an unsupervised learning framework that improves the detection performance of a generic detector when it is applied to a particular video. In our approach, a generic DPM human detector is employed to collect initial detection examples. These examples are segmented into superpixels and then represented using the Bag-of-Words (BoW) framework. The superpixel-based BoW feature encodes useful color features of the scene, which provides additional information. Finally, a new scene-specific classifier is trained using the BoW features extracted from the new examples. Compared to previous work, our method learns scene-specific information through superpixel-based features, hence it can avoid many false detections typically obtained by a generic detector. We are able to demonstrate a significant improvement in the performance of the state-of-the-art detector.

Given robust human detection, we propose a robust multiple-human tracking framework using a part-based model. Human detection using part models has become quite popular, yet its extension to tracking has not been fully explored. Single-camera-based multiple-person tracking is often hindered by difficulties such as occlusion and changes in appearance. We address such problems by developing an online-learning tracking-by-detection method. Our approach learns part-based person-specific Support Vector Machine (SVM) classifiers which capture articulations of moving human bodies with dynamically changing backgrounds. With the part-based model, our approach is able to handle partial occlusions in both the detection and the tracking stages. In the detection stage, we select the subset of parts which maximizes the probability of detection. This leads to a significant improvement in detection performance in cluttered scenes. In the tracking stage, we dynamically handle occlusions by distributing the score of the learned person classifier among its corresponding parts, which allows us to detect and predict partial occlusions and prevent the performance of the classifiers from being degraded. Extensive experiments using the proposed method on several challenging sequences demonstrate state-of-the-art performance in multiple-people tracking.

Next, in order to obtain precise boundaries of humans, we propose a novel method for multiple-human segmentation in videos by incorporating human detection and part-based detection potentials into a multi-frame optimization framework. In the first stage, after obtaining the superpixel segmentation for each detection window, we separate superpixels corresponding to a human and background by minimizing an energy function using a Conditional Random Field (CRF). We use the part detection potentials from the DPM detector, which provide useful information about human shape. In the second stage, the spatio-temporal constraints of the video are leveraged to build a tracklet-based Gaussian Mixture Model for each person, and the boundaries are smoothed by multi-frame graph optimization. Compared to previous work, our method can automatically segment multiple people in videos with accurate boundaries, and it is robust to camera motion. Experimental results show that our method achieves better segmentation performance than previous methods in terms of segmentation accuracy on several challenging video sequences.

Most of the work in Computer Vision deals with point solutions: a specific algorithm for a specific problem. However, putting different algorithms into one real-world integrated system is a big challenge. Finally, we introduce an efficient tracking system, NONA, for high-definition surveillance video. We implement the system using a multi-threaded architecture (Intel Threading Building Blocks (TBB)), which executes video ingestion, tracking, and video output in parallel. To improve tracking accuracy without sacrificing efficiency, we employ several useful techniques. Adaptive Template Scaling is used to handle the scale change due to objects moving towards a camera. Incremental Searching and Local Frame Differencing are used to resolve challenging issues such as scale change, occlusion, and cluttered backgrounds. We tested our tracking system on a high-definition video dataset and achieved acceptable tracking accuracy while maintaining real-time performance.
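The superpixel Bag-of-Words representation described in the first contribution can be sketched roughly as follows: segment a window into superpixels, describe each by its mean color, quantize the descriptors against a k-means codebook, and histogram the resulting visual words. The random "frame", codebook size, and segmentation parameters are placeholders, not the dissertation's settings.

```python
# Rough sketch of a superpixel BoW feature; parameters and the synthetic
# frame below are placeholders for illustration only.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frame = rng.random((120, 160, 3))                 # stand-in for a detection window

labels = slic(frame, n_segments=60, compactness=10)
descriptors = np.array([frame[labels == s].mean(axis=0)   # mean RGB per superpixel
                        for s in np.unique(labels)])

codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(descriptors)
words = codebook.predict(descriptors)
bow_histogram = np.bincount(words, minlength=8) / len(words)
print(bow_histogram)                               # feature vector for a classifier
```

In the dissertation the codebook is built from detections collected in the target video, so the histogram captures scene-specific color statistics that a generic detector lacks.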
-
Date Issued
-
2014
-
Identifier
-
CFE0005551, ucf:50278
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005551
-
-
Title
-
MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION.
-
Creator
-
Lacks, Daniel, Kocak, Taskin, University of Central Florida
-
Abstract / Description
-
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits which apply to many different domains: it reduces the cost of creating different prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the time to model physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection, and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure to model these protocols and algorithms. One simulation, the 3DInterconnect simulator, simulates data transmitted through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic loads than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and a maximum 25% reduction in queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions including creating, joining, and resigning from a federation, time management, and event publication and subscription.
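For context on the interconnect topology mentioned above, the standard k-ary n-cube (torus) parameters are given below; these are textbook properties, not results produced by the simulator.

```latex
% Standard k-ary n-cube (torus) parameters, included for reference only.
\begin{align}
  N &= k^{\,n} \quad \text{(number of nodes)},\\
  D &= n\left\lfloor k/2 \right\rfloor \quad \text{(network diameter with wraparound links)}.
\end{align}
```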
-
Date Issued
-
2007
-
Identifier
-
CFE0001887, ucf:47399
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001887
-
-
Title
-
Automated Synthesis of Unconventional Computing Systems.
-
Creator
-
Hassen, Amad Ul, Jha, Sumit Kumar, Sundaram, Kalpathy, Fan, Deliang, Ewetz, Rickard, Rahman, Talat, University of Central Florida
-
Abstract / Description
-
Despite decades of advancements, modern computing systems based on the von Neumann architecture still carry its shortcomings. Moore's law, which had substantially masked the effects of the inherent memory-processor bottleneck of the von Neumann architecture, has slowed down due to transistor dimensions nearing atomic sizes. On the other hand, modern computational requirements, driven by machine learning, pattern recognition, artificial intelligence, data mining, and IoT, are growing at the fastest pace ever. By their inherent nature, these applications are particularly affected by communication bottlenecks, because processing them requires a large number of simple operations involving data retrieval and storage. The need to address the problems associated with conventional computing systems at the fundamental level has given rise to several unconventional computing paradigms. In this dissertation, we have made advancements in the automated synthesis of two types of unconventional computing paradigms: in-memory computing and stochastic computing. In-memory computing circumvents the problem of limited communication bandwidth by unifying processing and storage at the same physical locations. The advent of nanoelectronic devices in the last decade has made in-memory computing an energy-, area-, and cost-effective alternative to conventional computing. We have used Binary Decision Diagrams (BDDs) for in-memory computing on memristor crossbars. Specifically, we have used Free-BDDs, a special class of binary decision diagrams, for synthesizing crossbars for flow-based in-memory computing. Stochastic computing is a re-emerging discipline with several times smaller area/power requirements compared to conventional computing systems. It is especially suited for fault-tolerant applications like image processing, artificial intelligence, pattern recognition, etc. We have proposed a decision-procedures-based iterative algorithm to synthesize Linear Finite State Machines (LFSMs) for stochastically computing non-linear functions such as polynomials, exponentials, and hyperbolic functions.
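To make the decision-diagram building block concrete, the sketch below evaluates a tiny hand-built diagram by following one variable decision per node; it is a generic illustration, not the dissertation's Free-BDD-to-crossbar synthesis flow.

```python
# Minimal decision-diagram evaluation, included only to make the BDD idea
# concrete; it is not the dissertation's Free-BDD-to-crossbar synthesis.
# Each internal node tests one variable and branches to the child selected
# by that variable's value; terminals are the constants 0 and 1.
class Node:
    def __init__(self, var=None, low=None, high=None, value=None):
        self.var, self.low, self.high, self.value = var, low, high, value

ZERO, ONE = Node(value=0), Node(value=1)

def evaluate(node, assignment):
    """Walk the diagram under a {variable: 0/1} assignment."""
    while node.value is None:
        node = node.high if assignment[node.var] else node.low
    return node.value

# Diagram for f(a, b, c) = a AND (b OR c)
f = Node("a", low=ZERO,
         high=Node("b", high=ONE,
                   low=Node("c", low=ZERO, high=ONE)))

print(evaluate(f, {"a": 1, "b": 0, "c": 1}))  # 1
print(evaluate(f, {"a": 1, "b": 0, "c": 0}))  # 0
```

In flow-based synthesis, each root-to-ONE path of such a diagram corresponds to a conducting path that the crossbar must realize, which is what the dissertation's mapping automates.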
-
Date Issued
-
2019
-
Identifier
-
CFE0007648, ucf:52462
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007648
-
-
Title
-
A MODEL OF HIP DYSPLASIA REDUCTIONS IN INFANTS USING THE PAVLIK HARNESS.
-
Creator
-
Hadri, Wissam, Samsam, Mohtashem, University of Central Florida
-
Abstract / Description
-
Hip dysplasia, also known as congenital dysplasia of the hip (CDH) or developmental dysplasia of the hip (DDH), is a mal-alignment of the hip joint. Left untreated within the first nine months, DDH can lead to permanent disability. Fortunately, this condition is diagnosed at an early age and is usually treated without surgery through the use of the Pavlik harness. In this thesis, a 3D computational model and dynamic finite element analysis of the muscles and tissues involved in hip dysplasia and the mechanics of the Pavlik harness, as developed by Dr. Alain J. Kassab's research group in Mechanical and Aerospace Engineering at the University of Central Florida over the past three years, were reviewed and discussed to evaluate the accuracy of the hip reduction mechanism. I examine the group's use of CT-based images to create accurate models of the bony structures, the muscle tensions and roles that were generated using biomechanical analyses of maximal and passive strain, and the use of adult and infant hips. Results produced by the group indicated that the effects and force contributions of the muscles studied are functions of the severity of hip dislocation. Therefore, I discuss complications with real-world-to-computational modeling with regard to structural systems and data interpretations. Although this design could be applied to more anatomical models and mechanistic analyses, more research would have to be completed to create more accurate models and results.
-
Date Issued
-
2014
-
Identifier
-
CFH0004641, ucf:45317
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004641
-
-
Title
-
COMPARISON OF SQUARE-HOLE AND ROUND-HOLE FILM COOLING: A COMPUTATIONAL STUDY.
-
Creator
-
Durham, Michael Glenn, Kapat, Jay, University of Central Florida
-
Abstract / Description
-
Film cooling is a method used to protect surfaces exposed to high-temperature flows such as those that exist in gas turbines. It involves the injection of secondary fluid (at a lower temperature than that of the main flow) that covers the surface to be protected. This injection is through holes that can have various shapes; simple shapes such as those with a straight circular cross-section (by drilling) or straight square cross-section (by electrical discharge machining, EDM) are relatively easy and inexpensive to create. Immediately downstream of the exit of a film cooling hole, a so-called horseshoe vortex structure consisting of a pair of counter-rotating vortices is formed. This vortex formation affects the distribution of film coolant over the surface being protected. The fluid dynamics of these vortices depends upon the shape of the film cooling holes, and therefore so does the film coolant coverage, which determines the film cooling effectiveness distribution and also affects the heat transfer coefficient distribution. Differences in horseshoe vortex structures and in the resultant effectiveness distributions are shown for circular and square hole cases at blowing ratios of 0.33, 0.50, 0.67, 1.00, and 1.33. The film cooling effectiveness values obtained are compared with the experimental and computational data of Yuen and Martinez-Botas (2003a) and Walters and Leylek (1997). It was found that in the main flow portion of the domain immediately downstream of the cooling hole exit, there is greater lateral separation between the vortices in the horseshoe vortex pair for the square hole. This was found to result in the square hole providing greater centerline film cooling effectiveness immediately downstream of the hole and better lateral film coolant coverage far downstream of the hole.
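For readers unfamiliar with the parameters quoted in this abstract, the following are the conventional textbook definitions of blowing ratio M and adiabatic film cooling effectiveness (they are not restated in the abstract itself, so this is a reminder rather than the author's formulation):

\[
  M \;=\; \frac{\rho_c \, U_c}{\rho_\infty \, U_\infty},
  \qquad
  \eta \;=\; \frac{T_\infty - T_{aw}}{T_\infty - T_c},
\]

where \(\rho_c, U_c\) are the coolant density and velocity at the hole exit, \(\rho_\infty, U_\infty\) are the mainstream density and velocity, \(T_\infty\) is the mainstream temperature, \(T_c\) is the coolant temperature, and \(T_{aw}\) is the adiabatic wall temperature.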
-
Date Issued
-
2004
-
Identifier
-
CFE0000044, ucf:46080
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000044
-
-
Title
-
AUTOMATED ADAPTIVE DATA CENTER GENERATION FOR MESHLESS METHODS.
-
Creator
-
Mitteff, Eric, Divo, Eduardo, University of Central Florida
-
Abstract / Description
-
Meshless methods have recently received much attention but are yet to reach their full potential as the required problem setup (i.e. collocation point distribution) is still significant and far from automated. The distribution of points still closely resembles the nodes of finite volume-type meshes and the free parameter, c, of the radial-basis expansion functions (RBF) still must be tailored specifically to a problem. The localized meshless collocation method investigated requires a local...
Show moreMeshless methods have recently received much attention but are yet to reach their full potential as the required problem setup (i.e. collocation point distribution) is still significant and far from automated. The distribution of points still closely resembles the nodes of finite volume-type meshes and the free parameter, c, of the radial-basis expansion functions (RBF) still must be tailored specifically to a problem. The localized meshless collocation method investigated requires a local influence region, or topology, used as the expansion medium to produce the required field derivatives. Tests have shown a regular cartesian point distribution produces optimal results, however, in order to maintain a locally cartesian point distribution a recursive quadtree scheme is herein proposed. The quadtree method allows modeling of irregular geometries and refinement of regions of interest and it lends itself for full automation, thus, reducing problem setup efforts. Furthermore, the construction of the localized expansion regions is closely tied up to the point distribution process and, hence, incorporated into the automated sequence. This also allows for the optimization of the RBF free parameter on a local basis to achieve a desired level of accuracy in the expansion. In addition, an optimized auto-segmentation process is adopted to distribute and balance the problem loads throughout a parallel computational environment while minimizing communication requirements.
Show less
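As a rough illustration of the adaptive point-distribution idea described above, the sketch below shows recursive quadtree subdivision that keeps the local point layout Cartesian while refining near a region of interest. It is my own minimal example, not the thesis implementation; the refinement criterion and the feature location are hypothetical placeholders.

# Minimal quadtree point-distribution sketch (illustrative only, not the
# thesis code).  A cell is split while a user-supplied criterion says it
# needs refinement; collocation points are placed at cell corners, which
# keeps the local point layout Cartesian.

def quadtree_points(x, y, size, needs_refinement, depth=0, max_depth=8):
    """Return a set of (x, y) collocation points for the square cell
    with lower-left corner (x, y) and side length `size`."""
    if depth < max_depth and needs_refinement(x, y, size):
        half = size / 2.0
        pts = set()
        # Recurse into the four child quadrants.
        for dx in (0.0, half):
            for dy in (0.0, half):
                pts |= quadtree_points(x + dx, y + dy, half,
                                       needs_refinement, depth + 1, max_depth)
        return pts
    # Leaf cell: its four corners become collocation points.
    return {(x, y), (x + size, y), (x, y + size), (x + size, y + size)}


if __name__ == "__main__":
    # Hypothetical criterion: refine cells whose center lies near (0.25, 0.25).
    def near_feature(x, y, size):
        cx, cy = x + size / 2.0, y + size / 2.0
        return ((cx - 0.25) ** 2 + (cy - 0.25) ** 2) ** 0.5 < size

    points = quadtree_points(0.0, 0.0, 1.0, near_feature)
    print(len(points), "collocation points")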
-
Date Issued
-
2006
-
Identifier
-
CFE0001321, ucf:47032
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001321
-
-
Title
-
THE PROTEOMICS APPROACH TO EVOLUTIONARY COMPUTATION: AN ANALYSIS OF PROTEOME-BASED LOCATION INDEPENDENT REPRESENTATIONS BASEDON THE PROPORTIONAL GENETIC ALGORITHM.
-
Creator
-
Garibay, Ivan, Wu, Annie, University of Central Florida
-
Abstract / Description
-
As the complexity of our society and computational resources increases, so does the complexity of the problems that we approach using evolutionary search techniques. There are recent approaches to the problem of scaling evolutionary methods to cope with highly complex, difficult problems. Many of these approaches are biologically inspired and share an underlying principle: a problem representation based on basic representational building blocks that interact and self-organize into complex functions or designs. The observation from the central dogma of molecular biology that proteins are the basic building blocks of life, together with recent advances in proteomics on the analysis of the structure, function, and interaction of entire protein complements, leads us to propose a unifying framework of thought for these approaches: the proteomics approach. This thesis proposes to investigate whether the self-organization of protein-analogous structures at the representation level can increase the degree of complexity and "novelty" of solutions obtainable using evolutionary search techniques. In order to do so, we identify two fundamental aspects of this transition: (1) proteins interact in a three-dimensional medium analogous to a multiset; and (2) proteins are functional structures. The first aspect is foundational for understanding the second. This thesis analyzes the first aspect. It investigates the effects of using a genome-to-proteome mapping on evolutionary computation. This analysis is based on a genetic algorithm (GA) with a string-to-multiset mapping that we call the proportional genetic algorithm (PGA), and it focuses on the feasibility and effectiveness of this mapping. This mapping leads to a fundamental departure from typical EC methods: using a multiset of proteins as an intermediate mapping results in a completely location-independent problem representation, where the location of the genes in a genome has no effect on the fitness of the solutions. Completely location-independent representations, by definition, do not suffer from the traditional EC hurdles associated with gene location or positional effects in a genome. Such representations have the ability to self-organize into a genomic structure that appears to favor positive correlations between the form and quality of represented solutions. Completely location-independent representations also introduce new problems of their own, such as the need for large alphabets of symbols and the theoretical need for larger representation spaces than traditional approaches. Overall, these representations perform as well as or better than traditional representations, and they appear to be particularly well suited to the class of problems involving proportions or multisets. This thesis concludes that the use of protein-analogous structures as an intermediate representation in evolutionary computation is not only feasible but in some cases advantageous. In addition, it lays the groundwork for further research on proteins as functional self-organizing structures capable of building increasingly complex functionality, and as basic units of problem representation for evolutionary computation.
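To make the location-independence point concrete, here is a minimal sketch of a string-to-multiset (proportional) decoding in the spirit of the PGA described above. It is my own illustration, not the thesis implementation; the target mix and fitness function are hypothetical.

# Illustrative string-to-multiset (proportional) decoding -- not the PGA code
# from the thesis.  Only symbol counts matter, so permuting the genes leaves
# the decoded solution, and hence the fitness, unchanged.
from collections import Counter

def decode_proportions(genome, alphabet):
    """Map a genome string to the proportion of each alphabet symbol."""
    counts = Counter(ch for ch in genome if ch in alphabet)
    total = sum(counts.values()) or 1
    return {sym: counts[sym] / total for sym in alphabet}

def fitness(genome, alphabet, target):
    """Hypothetical fitness: closeness of decoded proportions to a target mix."""
    decoded = decode_proportions(genome, alphabet)
    return -sum(abs(decoded[s] - target[s]) for s in alphabet)

alphabet = "ABC"
target = {"A": 0.5, "B": 0.3, "C": 0.2}
g1 = "AAABBCAABC"          # a genome
g2 = "CBAABACABA"          # a permutation of the same genes
assert decode_proportions(g1, alphabet) == decode_proportions(g2, alphabet)
print(fitness(g1, alphabet, target))   # identical for g1 and g2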
-
Date Issued
-
2004
-
Identifier
-
CFE0000311, ucf:46307
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000311
-
-
Title
-
MULTI-VIEW APPROACHES TO TRACKING, 3D RECONSTRUCTION AND OBJECT CLASS DETECTION.
-
Creator
-
khan, saad, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Multi-camera systems are becoming ubiquitous and have found application in a variety of domains including surveillance, immersive visualization, sports entertainment, and movie special effects, among others. From a computer vision perspective, the challenging task is how to most efficiently fuse information from multiple views in the absence of detailed calibration information and with a minimum of human intervention. This thesis presents a new approach to fusing foreground likelihood information from multiple views onto a reference view without explicit processing in 3D space, thereby circumventing the need for complete calibration. Our approach uses a homographic occupancy constraint (HOC), which states that if a foreground pixel has a piercing point occupied by a foreground object, then the pixel warps to foreground regions in every view under the homographies induced by the reference plane, in effect using the cameras as occupancy detectors. Using the HOC we are able to resolve occlusions and robustly determine ground-plane localizations of the people in the scene. To find tracks, we obtain ground localizations over a window of frames and stack them, creating a space-time volume. Regions belonging to the same person form contiguous spatio-temporal tracks that are clustered using a graph-cuts segmentation approach. Second, we demonstrate that the HOC is equivalent to performing visual hull intersection in the image plane, resulting in a cross-sectional slice of the object. The process is extended to multiple planes parallel to the reference plane in the framework of plane-to-plane homologies. Slices from multiple planes are accumulated and the 3D structure of the object is segmented out. Unlike other visual hull based approaches that use 3D constructs such as visual cones, voxels, or polygonal meshes requiring calibrated views, ours is purely image-based and uses only 2D constructs, i.e., planar homographies between views. This feature also renders it conducive to graphics hardware acceleration. The current GPU implementation of our approach is capable of fusing 60 views (480x720 pixels) at a rate of 50 slices/second. We then present an extension of this approach to reconstructing non-rigid articulated objects from monocular video sequences. The basic premise is that, due to the motion of the object, scene occupancies are blurred together with non-occupancies in a manner analogous to motion-blurred imagery. Using our HOC and a novel construct, the temporal occupancy point (TOP), we are able to fuse multiple views of non-rigid objects obtained from a monocular video sequence. The result is a set of blurred scene occupancy images in the corresponding views, where the value at each pixel corresponds to the fraction of the total time duration for which the pixel observed an occupied scene location. We then use a motion de-blurring approach to de-blur the occupancy images and obtain the 3D structure of the non-rigid object. In the final part of this thesis, we present an object class detection method employing 3D models of rigid objects constructed using the above 3D reconstruction approach. Instead of using a complicated mechanism for relating multiple 2D training views, our approach establishes spatial connections between these views by mapping them directly to the surface of a 3D model. To generalize the model for object class detection, features from supplemental views (obtained from Google Image search) are also considered. Given a 2D test image, correspondences between the 3D feature model and the test view are identified by matching the detected features. Based on the 3D locations of the corresponding features, several hypotheses of viewing planes can be made. The one with the highest confidence is then used to detect the object using feature location matching. The performance of the proposed method has been evaluated on the PASCAL VOC challenge dataset, and promising results are demonstrated.
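As a rough illustration of the homography-based fusion idea, the sketch below warps per-view foreground likelihoods onto a reference view with ground-plane homographies and combines them multiplicatively, so that only locations seen as foreground in every view survive. This is a minimal sketch under my own assumptions (the masks, homographies, and synthetic data are placeholders), not the thesis code, and it omits the occlusion handling and tracking described in the abstract.

# Illustrative homography-based foreground fusion (not the thesis code).
# Assumes masks[i] is a foreground-likelihood image from view i and
# homographies[i] is the 3x3 homography mapping view i onto the reference
# view, induced by the ground (reference) plane.
import numpy as np
import cv2

def fuse_foreground(masks, homographies, ref_shape):
    """Warp each view's foreground likelihood to the reference view via its
    ground-plane homography and combine multiplicatively; high values mark
    ground-plane locations occupied in every view."""
    h, w = ref_shape
    fused = np.ones((h, w), dtype=np.float32)
    for mask, H in zip(masks, homographies):
        warped = cv2.warpPerspective(mask.astype(np.float32), H, (w, h))
        fused *= warped
    return fused

# Usage sketch with synthetic data (identity homographies for brevity):
masks = [np.random.rand(240, 360).astype(np.float32) for _ in range(3)]
Hs = [np.eye(3, dtype=np.float32) for _ in range(3)]
occupancy = fuse_foreground(masks, Hs, ref_shape=(240, 360))
print(occupancy.shape)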
-
Date Issued
-
2008
-
Identifier
-
CFE0002073, ucf:47593
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002073