Current Search: computers
-
-
Title
-
Solving Constraint Satisfaction Problems with Matrix Product States.
-
Creator
-
Pelton, Sabine, Mucciolo, Eduardo, Ishigami, Masa, Leuenberger, Michael, University of Central Florida
-
Abstract / Description
-
In the past decade, Matrix Product State (MPS) algorithms have emerged as an efficient method of modeling some many-body quantum spin systems. Since spin system Hamiltonians can be considered constraint satisfaction problems (CSPs), it follows that MPS should provide a versatile framework for studying a variety of general CSPs. In this thesis, we apply MPS to two types of CSP. First, we use MPS to simulate adiabatic quantum computation (AQC), where the target Hamiltonians are instances of a fully connected, random Ising spin glass. Results of the simulations help shed light on why AQC fails for some optimization problems. We then present the novel application of a modified MPS algorithm to classical Boolean satisfiability problems, specifically k-SAT and max k-SAT. By construction, the algorithm also counts solutions to a given Boolean formula (#-SAT). For easy satisfiable instances, the method is more expensive than other existing algorithms; however, for hard and unsatisfiable instances, the method succeeds in finding satisfying assignments where other algorithms fail to converge.
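To make the solution-counting idea concrete, here is a minimal Python sketch (illustrative only, not the thesis's modified MPS algorithm; the brute-force indicator construction is exponential and exists purely to demonstrate the MPS compression and #-SAT contraction):

```python
# Compress the indicator vector of a Boolean formula into a Matrix Product
# State via successive SVDs, then contract with all-ones vectors to count
# satisfying assignments (#-SAT).
import numpy as np
from itertools import product

def indicator(clauses, n):
    """1.0 for satisfying assignments, 0.0 otherwise (exponential; demo only)."""
    v = np.zeros(2 ** n)
    for idx, bits in enumerate(product([0, 1], repeat=n)):
        v[idx] = float(all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                           for clause in clauses))
    return v

def to_mps(v, n, tol=1e-12):
    """Split a length-2^n vector into n site tensors by successive SVDs."""
    tensors, rest, chi = [], v.reshape(1, -1), 1
    for _ in range(n - 1):
        u, s, vt = np.linalg.svd(rest.reshape(chi * 2, -1), full_matrices=False)
        keep = max(1, int((s > tol).sum()))          # retained bond dimension
        tensors.append(u[:, :keep].reshape(chi, 2, keep))
        rest, chi = np.diag(s[:keep]) @ vt[:keep], keep
    tensors.append(rest.reshape(chi, 2, 1))
    return tensors

def count_solutions(tensors):
    """Sum over all assignments: contract each physical index with (1, 1)."""
    env = np.ones(1)
    for t in tensors:
        env = env @ t.sum(axis=1)
    return float(env[0])

# (x1 or x2) and (not x1 or x3), with literals as signed 1-based indices:
print(count_solutions(to_mps(indicator([[1, 2], [-1, 3]], 3), 3)))  # 4.0
```

The cost of such a computation is governed by the bond dimensions the SVDs retain, which is where the hardness of an instance would show up.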
-
Date Issued
-
2017
-
Identifier
-
CFE0006902, ucf:51713
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006902
-
-
Title
-
Techniques for automated parameter estimation in computational models of probabilistic systems.
-
Creator
-
Hussain, Faraz, Jha, Sumit, Leavens, Gary, Turgut, Damla, Uddin, Nizam, University of Central Florida
-
Abstract / Description
-
The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems such that the resultant model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems: systems that, in general, behave differently under identical conditions. The algorithms work using an approach that combines formal verification and mathematical optimization to explore a model's parameter space. The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second algorithm uses an approach based on Bayesian statistical model checking (BSMC). The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism has the capability of representing diabetic behavior by synthesizing values of three parameters that ensure that the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how our algorithms use formal verification, statistical hypothesis testing and mathematical optimization to automatically synthesize parameters of complex probabilistic models in order to meet user-specified behavioral properties.
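As an illustration of the hypothesis-testing core, here is a minimal Python sketch of the Wald SPRT for Bernoulli outcomes (the generic textbook form with assumed thresholds p0 < p1 and error bounds; the dissertation's exact formulation may differ):

```python
import math, random

def sprt(sample, p0, p1, alpha=0.05, beta=0.05, max_runs=100000):
    """Decide H1: P(spec holds) >= p1 versus H0: P(spec holds) <= p0.

    sample() runs one stochastic simulation of the model at a fixed
    parameter point and returns True if the behavioral spec held.
    """
    accept_h0 = math.log(beta / (1 - alpha))    # lower stopping boundary
    accept_h1 = math.log((1 - beta) / alpha)    # upper stopping boundary
    llr = 0.0                                   # running log-likelihood ratio
    for _ in range(max_runs):
        llr += math.log(p1 / p0) if sample() else math.log((1 - p1) / (1 - p0))
        if llr <= accept_h0:
            return False
        if llr >= accept_h1:
            return True
    return llr > 0  # no boundary crossed within the budget

# A parameter point whose true satisfaction probability is 0.35:
print(sprt(lambda: random.random() < 0.35, p0=0.2, p1=0.4))  # usually True
```

In the synthesis loop described above, simulated annealing proposes parameter points, and each point is kept or rejected according to the verdict of a test like this one.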
-
Date Issued
-
2016
-
Identifier
-
CFE0006117, ucf:51200
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006117
-
-
Title
-
Improvement of Data-Intensive Applications Running on Cloud Computing Clusters.
-
Creator
-
Ibrahim, Ibrahim, Bassiouni, Mostafa, Lin, Mingjie, Zhou, Qun, Ewetz, Rickard, Garibay, Ivan, University of Central Florida
-
Abstract / Description
-
MapReduce, designed by Google, is widely used as the most popular distributed programming model in cloud environments. Hadoop, an open-source implementation of MapReduce, is a data management framework on large clusters of commodity machines for handling data-intensive applications. Many famous enterprises, including Facebook, Twitter, and Adobe, have been using Hadoop for their data-intensive processing needs. Task stragglers in MapReduce jobs dramatically impede job execution on massive datasets in cloud computing systems. This impedance is due to the uneven distribution of input data and computation load among cluster nodes, heterogeneous data nodes, data skew in the reduce phase, resource contention situations, and network configurations. All of these factors may cause delays, failures, and violations of job completion time. One of the key issues that can significantly affect the performance of cloud computing is the computation load balancing among cluster nodes. Replica placement in the Hadoop distributed file system plays a significant role in data availability and the balanced utilization of clusters. Under the current replica placement policy (RPP) of the Hadoop distributed file system (HDFS), the replicas of data blocks cannot be evenly distributed across the cluster's nodes, so HDFS must rely on a load balancing utility to balance the distribution of replicas, which results in extra overhead in time and resources. This dissertation addresses the data load balancing problem and presents an innovative replica placement policy for HDFS that can perfectly balance the data load among the cluster's nodes. The heterogeneity of cluster nodes exacerbates the issue of computational load balancing; therefore, another replica placement algorithm is proposed in this dissertation for heterogeneous cluster environments. The timing of identifying a straggler map task is very important for straggler mitigation in data-intensive cloud computing. To mitigate straggler map tasks, the Present progress and Feedback based Speculative Execution (PFSE) algorithm is proposed in this dissertation. PFSE is a new straggler identification scheme that identifies straggler map tasks based on feedback information received from completed tasks, besides the progress of the currently running task. Straggler reduce tasks aggravate the violation of MapReduce job completion time; a straggler reduce task is typically the result of bad data partitioning during the reduce phase. The Hash partitioner employed by Hadoop may cause intermediate data skew, which results in straggler reduce tasks. In this dissertation, a new partitioning scheme named Balanced Data Clusters Partitioner (BDCP) is proposed to mitigate straggler reduce tasks. BDCP is based on sampling of the input data and feedback information about the currently processing task. BDCP can assist in straggler mitigation during the reduce phase and minimize the job completion time of MapReduce jobs. The results of extensive experiments corroborate that the algorithms and policies proposed in this dissertation can improve the performance of data-intensive applications running on cloud platforms.
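As a sketch of the sampling-based partitioning idea (hypothetical Python, not the actual BDCP implementation): key frequencies estimated from an input sample pre-assign heavy keys to reducers greedily by load, with a hash fallback for keys not seen in the sample.

```python
import heapq
from collections import Counter

def build_partition_table(sampled_keys, num_reducers):
    """Greedy longest-processing-time assignment of sampled keys."""
    heap = [(0, r) for r in range(num_reducers)]   # (current load, reducer id)
    heapq.heapify(heap)
    table = {}
    for key, count in Counter(sampled_keys).most_common():  # heaviest first
        load, r = heapq.heappop(heap)              # least-loaded reducer
        table[key] = r
        heapq.heappush(heap, (load + count, r))
    return table

def partition(key, table, num_reducers):
    """Hash fallback mirrors Hadoop's default for unsampled keys."""
    return table.get(key, hash(key) % num_reducers)
```

Unlike pure hash partitioning, this spreads the heaviest keys evenly across reducers instead of wherever their hashes happen to land.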
-
Date Issued
-
2019
-
Identifier
-
CFE0007818, ucf:52804
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007818
-
-
Title
-
AN ADAPTIVE MODULAR REDUNDANCY TECHNIQUE TO SELF-REGULATE AVAILABILITY, AREA, AND ENERGY CONSUMPTION IN MISSION-CRITICAL APPLICATIONS.
-
Creator
-
Al-Haddad, Rawad, DeMara, Ronald, University of Central Florida
-
Abstract / Description
-
As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. A Sustainable Modular Adaptive Redundancy Technique (SMART) composed of a dual-layered organic system is proposed, analyzed, implemented, and experimentally evaluated. SMART relies upon a variety of self-regulating properties to control availability, energy consumption, and area used in dynamically-changing environments that require a high degree of adaptation. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called a Reconfigurable Adaptive Redundancy System (RARS). The software layer supervises the organic activities within the FPGA and extends the self-healing capabilities through application-independent, intrinsic, evolutionary repair techniques to leverage the benefits of dynamic Partial Reconfiguration (PR). A SMART prototype is evaluated using a Sobel edge detection application. This prototype is shown to provide sustainability under stressful transient and permanent fault injection procedures while still reducing energy consumption and area requirements. An Organic Genetic Algorithm (OGA) technique is shown capable of consistently repairing hard faults while maintaining correct edge detector outputs, by exploiting spatial redundancy in the reconfigurable hardware. A Monte Carlo driven Continuous-Time Markov Chain (CTMC) simulation is conducted to compare SMART's availability to industry-standard Triple Modular Redundancy (TMR) techniques. Based on nine use cases, parameterized with realistic fault and repair rates acquired from publicly available sources, the results indicate that availability is significantly enhanced by the adoption of fast repair techniques targeting aging-related hard faults. Under harsh environments, SMART is shown to improve system availability from 36.02% with lengthy repair techniques to 98.84% with fast ones. This value increases to "five nines" (99.9998%) under relatively more favorable conditions. Lastly, SMART is compared to twenty-eight standard TMR benchmarks generated by the widely-accepted BL-TMR tools. Results show that in seven out of nine use cases, SMART is the recommended technique, with power savings ranging from 22% to 29% and area savings ranging from 17% to 24%, while still maintaining the same level of availability.
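To illustrate the availability computation (assumed rates, not the dissertation's nine parameterized use cases): a Monte Carlo simulation of a two-state continuous-time Markov chain estimates steady-state availability, which analytically equals mu / (lambda + mu) for failure rate lambda and repair rate mu.

```python
import random

def availability_mc(fail_rate, repair_rate, horizon=1e6, seed=1):
    """Fraction of time the system is up over a long simulated horizon."""
    rng = random.Random(seed)
    t, up_time, up = 0.0, 0.0, True        # start in the "up" state
    while t < horizon:
        dwell = rng.expovariate(fail_rate if up else repair_rate)
        if up:
            up_time += min(dwell, horizon - t)
        t += dwell
        up = not up                        # exponential failure/repair cycle
    return up_time / horizon

# Same failure rate; fast versus lengthy repair:
print(availability_mc(fail_rate=0.01, repair_rate=1.0))    # ~0.990
print(availability_mc(fail_rate=0.01, repair_rate=0.005))  # ~0.333
```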
-
Date Issued
-
2011
-
Identifier
-
CFE0003993, ucf:48660
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003993
-
-
Title
-
Normally-Off Computing Design Methodology Using Spintronics: from Devices to Architectures.
-
Creator
-
Roohi, Arman, DeMara, Ronald, Abdolvand, Reza, Wang, Jun, Fan, Deliang, Del Barco, Enrique, University of Central Florida
-
Abstract / Description
-
Energy-harvesting-powered computing offers intriguing and vast opportunities to dramatically transform the landscape of Internet of Things (IoT) devices and wireless sensor networks by utilizing ambient sources of light, thermal, kinetic, and electromagnetic energy to achieve battery-free computing. In order to operate within the restricted energy capacity and intermittency profile of battery-free operation, Elastic Intermittent Computation (EIC) is proposed as a new duty-cycle-variable computing approach leveraging the non-volatility inherent in post-CMOS switching devices. The foundations of EIC will be advanced from the ground up by extending Spin Hall Effect Magnetic Tunnel Junction (SHE-MTJ) device models to realize SHE-MTJ-based Majority Gate (MG) and Polymorphic Gate (PG) logic approaches and libraries that leverage intrinsic non-volatility to realize middleware-coherent, intermittent computation without checkpointing, micro-tasking, or software bloat and energy overheads, which is vital to IoT. Device-level EIC research concentrates on encapsulating SHE-MTJ behavior with a compact model to leverage the non-volatility of the device for intrinsic provision of intermittent computation and lifetime energy reduction. Based on this model, the circuit-level EIC contributions will entail the design, simulation, and analysis of PG-based spintronic logic which is adaptable at the gate level to support variable duty-cycle execution that is robust to brief and extended supply outages or unscheduled dropouts, and the development of spin-based research synthesis and optimization routines compatible with existing commercial toolchains. These tools will be employed to design a hybrid post-CMOS processing unit utilizing pipelining and power-gating through state-holding properties within the datapath itself, thus eliminating checkpointing and data transfer operations.
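A behavioral sketch of the two logic primitives (truth-table level only, no SHE-MTJ device physics): a three-input majority gate, and a polymorphic gate obtained from the identities MAJ(a, b, 0) = a AND b and MAJ(a, b, 1) = a OR b.

```python
def majority(a, b, c):
    """Three-input majority vote on 0/1 inputs."""
    return (a & b) | (b & c) | (a & c)

def polymorphic(a, b, ctrl):
    """Selects AND (ctrl=0) or OR (ctrl=1) behavior at run-time."""
    return majority(a, b, ctrl)

assert all(polymorphic(a, b, 0) == (a & b) for a in (0, 1) for b in (0, 1))
assert all(polymorphic(a, b, 1) == (a | b) for a in (0, 1) for b in (0, 1))
```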
-
Date Issued
-
2019
-
Identifier
-
CFE0007526, ucf:52619
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007526
-
-
Title
-
Ubiquitous Computing in Public Education: The Effects of One-to-One Computer Initiatives on Student Achievement on Florida Standardized Assessments.
-
Creator
-
Lobeto, Fernando, Murray, Kenneth, Baldwin, Lee, Storey, Valerie A., Cintron Delgado, Rosa, University of Central Florida
-
Abstract / Description
-
The purpose of this study was to examine the effects of one-to-one computer initiatives on student achievement in reading and mathematics. This study compared the differences in FCAT 2.0 Reading and Mathematics scores between schools implementing one-to-one computer initiatives and schools implementing traditional modes of instruction. A second purpose of this study was to determine what effects one-to-one computer initiatives had on student FCAT 2.0 scores overall and by grade level, gender, and socio-economic status. The study used an independent-samples t-test, a repeated measures ANOVA, and a factorial ANCOVA to answer four research questions in order to achieve the purposes stated above. An analysis of the results revealed that the first year of one-to-one initiatives had a slightly negative effect on elementary school students, a small but positive effect on middle school students, and no effect on high school students. Further, the study found that students' scores after one year of one-to-one digital instruction did not differ statistically significantly from their scores the previous year.
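For the mechanics of the first analysis, a minimal Python sketch with synthetic scores (hypothetical data and group sizes; the study used FCAT 2.0 scale scores from matched schools):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
one_to_one = rng.normal(loc=215, scale=20, size=120)   # hypothetical scale scores
traditional = rng.normal(loc=213, scale=20, size=120)

t, p = stats.ttest_ind(one_to_one, traditional)        # independent-samples t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```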
-
Date Issued
-
2016
-
Identifier
-
CFE0006349, ucf:51573
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006349
-
-
Title
-
Mahalanobis kernel-based support vector data description for detection of large shifts in mean vector.
-
Creator
-
Nguyen, Vu, Maboudou, Edgard, Nickerson, David, Schott, James, University of Central Florida
-
Abstract / Description
-
Statistical process control (SPC) applies the science of statistics to process control in order to provide higher-quality products and better services. The K chart is one among the many important tools that SPC offers. Creation of the K chart is based on Support Vector Data Description (SVDD), a popular data classifier method inspired by Support Vector Machine (SVM). Like any method associated with SVM, SVDD benefits from a wide variety of choices of kernel, which determines the effectiveness of the whole model. Among the most popular choices is the Euclidean distance-based Gaussian kernel, which enables SVDD to obtain a flexible data description, thus enhancing its overall predictive capability. This thesis explores an even more robust approach by incorporating the Mahalanobis distance-based kernel (hereinafter referred to as the Mahalanobis kernel) into SVDD and comparing it with SVDD using the traditional Gaussian kernel. The method's sensitivity is benchmarked by Average Run Lengths obtained from multiple Monte Carlo simulations. Data for these simulations are generated from multivariate normal, multivariate Student's t, and multivariate gamma populations using R, a popular software environment for statistical computing. One case study is also discussed, using a real data set received from the Halberg Chronobiology Center. Compared to the Gaussian kernel, the Mahalanobis kernel makes SVDD, and thus the K chart, significantly more sensitive to shifts in the mean vector, and also in the covariance matrix.
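A minimal Python sketch of the kernel substitution (the kernel only, with the covariance estimated from training data; not the full SVDD/K-chart machinery):

```python
import numpy as np

def mahalanobis_gaussian_kernel(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(-d_M(x_i, x_j)^2 / (2 sigma^2))."""
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))     # inverse covariance
    diff = X[:, None, :] - X[None, :, :]               # pairwise differences
    d2 = np.einsum('ijk,kl,ijl->ij', diff, S_inv, diff)
    return np.exp(-d2 / (2 * sigma ** 2))
```

Because the Mahalanobis distance whitens the feature space by the inverse covariance, the description adapts to correlated, unequal-variance features, which is consistent with the improved sensitivity reported here.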
-
Date Issued
-
2015
-
Identifier
-
CFE0005676, ucf:50170
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005676
-
-
Title
-
AFFORDANCES IN THE DESIGN OF VIRTUAL ENVIRONMENTS.
-
Creator
-
Gross, David Charles, Stanney, Kay M., University of Central Florida
-
Abstract / Description
-
Human-computer interaction design principles largely focus on static representations and have yet to fully incorporate theories of perception appropriate for the dynamic multimodal interactions inherent to virtual environment (VE) interaction. Theories of direct perception, in particular affordance theory, may prove particularly relevant to enhancing VE interaction design. The present research constructs a conceptual model of how affordances are realized in the natural world and how a lack of sensory stimuli may lead to realization failures in virtual environments. Implications of the model were empirically investigated by examining three affordances: passability, catchability, and flyability. The experimental design involved four factors for each of the three affordances and was implemented as a fractional factorial design. The results demonstrated that providing affording cues led to behavior closely in line with real-world behavior. More specifically, when given affording cues, participants tended to rotate their virtual bodies when entering narrow passageways, accurately judge balls as catchable, and fly when conditions warranted it. The results support the conceptual model and demonstrate 1) that substituting designed cues via sensory stimuli in available sensory modalities for absent or impoverished modalities may enable the perception of affordances in VEs; 2) that sensory stimuli substitutions provide potential approaches for enabling the perception of affordances in a VE which in the real world are cross-modal; and 3) that affordances relating to specific action capabilities may be enabled by designed sensory stimuli. This research lays an empirical foundation for a science of VE design based on choosing and implementing design properties so as to evoke targeted user behavior.
-
Date Issued
-
2004
-
Identifier
-
CFE0000061, ucf:46108
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000061
-
-
Title
-
EFFECT OF OPERATOR CONTROL CONFIGURATION ON UNMANNED AERIAL SYSTEM TRAINABILITY.
-
Creator
-
Neumann, John, Kincaid, Peter, University of Central Florida
-
Abstract / Description
-
Unmanned aerial systems (UAS) carry no pilot on board, yet they still require live operators to handle critical functions such as mission planning and execution. Humans also interpret the sensor information provided by these platforms. This applies to all classes of unmanned aerial vehicles (UAVs), including the smaller portable systems used for gathering real-time reconnaissance during military operations in urban terrain. The need to quickly and reliably train soldiers to control small UAS operations demands that the human-system interface be intuitive and easy to master. In this study, participants completed a series of tests of spatial ability and were then trained (in simulation) to teleoperate a micro-unmanned aerial vehicle equipped with forward and downward fixed cameras. Three aspects of the human-system interface were manipulated to assess the effects on manual control mastery and target detection. One factor was the input device: participants used either a mouse or a specially programmed game controller (similar to that used with the Sony PlayStation 2 video game console). A second factor was the nature of the flight control displays, either continuous or discrete (analog vs. digital). The third factor involved the presentation of sensor imagery: the display could either provide streaming video from one camera at a time, or present the imagery from both cameras simultaneously in separate windows. The primary dependent variables included: 1) time to complete assigned missions, 2) number of collisions, 3) number of targets detected, and 4) operator workload. In general, operator performance was better with the game controller than with the mouse, but significant improvement in time to complete occurred over repeated trials regardless of the device used. Time to complete missions was significantly faster with the game controller, and operators also detected more targets without any significant difference in workload compared to mouse users. Workload decreased with practice over repeated trials, and spatial ability was a significant covariate of workload, with lower spatial ability associated with higher workload scores. In addition, demographic data including computer usage and video gaming experience were collected, analyzed, and correlated with performance. Higher video gaming experience was also associated with lower workload.
-
Date Issued
-
2006
-
Identifier
-
CFE0001496, ucf:47080
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001496
-
-
Title
-
TOWARDS A SELF-CALIBRATING VIDEO CAMERA NETWORK FOR CONTENT ANALYSIS AND FORENSICS.
-
Creator
-
Junejo, Imran, Foroosh, Hassan, University of Central Florida
-
Abstract / Description
-
Due to growing security concerns, video surveillance and monitoring has received immense attention from both federal agencies and private firms. The main concern is that a single camera, even if allowed to rotate or translate, is not sufficient to cover a large area for video surveillance. A more general solution with a wide range of applications is to allow the deployed cameras to have a non-overlapping field of view (FoV) and, if possible, to allow these cameras to move freely in 3D space. This thesis addresses the issue of how cameras in such a network can be calibrated and how the network as a whole can be calibrated, such that each camera as a unit in the network is aware of its orientation with respect to all the other cameras in the network. Different types of cameras might be present in a multiple camera network, and novel techniques are presented for efficient calibration of these cameras. Specifically: (i) for a stationary camera, we derive new constraints on the Image of the Absolute Conic (IAC), which are shown to be intrinsic to the IAC; (ii) for a scene where object shadows are cast on a ground plane, we track the shadows cast on the ground plane by at least two unknown stationary points, and utilize the tracked shadow positions to compute the horizon line and hence the camera intrinsic and extrinsic parameters; (iii) a novel solution is presented for the scenario where a camera is observing pedestrians, the uniqueness of the formulation lying in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians; (iv) for a freely moving camera, a novel practical method is proposed for its self-calibration, which even allows it to change its internal parameters by zooming; and (v) given the increased deployment of pan-tilt-zoom (PTZ) cameras, a technique is presented that uses only two images to estimate five camera parameters. For an automatically configurable multi-camera network, having non-overlapping fields of view and possibly containing moving cameras, a practical framework is proposed that determines the geometry of such a dynamic camera network. It is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the geometry of a dynamic network. Our method generalizes previous work, which considers restricted camera motions. Using minimal assumptions, we are able to demonstrate promising results on synthetic as well as real data. Applications to path modeling, GPS coordinate estimation, and configuring mixed-reality environments are explored.
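As one concrete example of the kind of constraint involved (a textbook IAC relation under the assumed simplifications of square pixels, zero skew, and a known principal point; not the thesis's derivations):

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Focal length from vanishing points of two orthogonal directions.

    With K = diag(f, f, 1) after shifting the principal point to the
    origin, v1^T * omega * v2 = 0 reduces to f^2 = -(x1*x2 + y1*y2).
    """
    x1, y1 = np.subtract(v1, principal_point)
    x2, y2 = np.subtract(v2, principal_point)
    f_sq = -(x1 * x2 + y1 * y2)
    if f_sq <= 0:
        raise ValueError("vanishing points violate the assumptions")
    return float(np.sqrt(f_sq))
```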
-
Date Issued
-
2007
-
Identifier
-
CFE0001743, ucf:47296
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001743
-
-
Title
-
INDIVIDUAL IDENTIFICATION OF POLAR BEARS BY WHISKER SPOT PATTERNS.
-
Creator
-
Anderson, Carlos, Waterman, Jane, University of Central Florida
-
Abstract / Description
-
Many types of ecological studies require identification of individual animals. I developed and evaluated an automated identification system for polar bears (Ursus maritimus) based on their whisker spot patterns. First, I measured the reliability of using whisker spot patterns for identification from polar bear photographs taken in western Hudson Bay. This analysis involved estimating the complexity of each whisker spot pattern in terms of its information content. I found that 98% of patterns contained enough information to be reliable, and this result varied little among three different observers. Based on these results, I implemented a computer-aided identification system for polar bears based on whisker spot pattern recognition. I used standard computer vision techniques to pre-process images and the Chamfer distance transform to compute similarity scores between images. In addition, I evaluated the system by testing the effects of photographic quality and angle on system accuracy. I found that excellent and moderate quality/angle provided the best results, with system accuracy of 90-95%. These findings suggest that individual identification of polar bears in the field based on whisker spot pattern variation is possible. Researchers studying polar bear behavior or estimating population parameters should benefit from this noninvasive technique.
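A minimal Python sketch of Chamfer-distance scoring between binarized spot patterns (hypothetical arrays; the thesis's pipeline also includes the pre-processing steps mentioned above):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(pattern_a, pattern_b):
    """Lower is more similar; inputs are boolean 2-D spot masks."""
    dist_to_b = distance_transform_edt(~pattern_b)   # distance to nearest B spot
    return dist_to_b[pattern_a].mean()               # averaged at A's spots

def symmetric_chamfer(a, b):
    """Symmetrize, since chamfer_score(a, b) != chamfer_score(b, a)."""
    return 0.5 * (chamfer_score(a, b) + chamfer_score(b, a))
```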
-
Date Issued
-
2007
-
Identifier
-
CFE0001671, ucf:47207
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001671
-
-
Title
-
HELICAL PACKING REGULATES STRUCTURAL TRANSITIONS IN BAX.
-
Creator
-
Tschammer, Nuska, Khaled, Annette, University of Central Florida
-
Abstract / Description
-
Apoptosis is essential for development and the maintenance of cellular homeostasis and is frequently dysregulated in disease states. Proteins of the BCL-2 family are key modulators of this process and are thus ideal therapeutic targets. In response to diverse apoptotic stimuli, the pro-apoptotic BCL-2 family member BAX redistributes from the cytosol to the mitochondria or endoplasmic reticulum and primes cells for death. The structural changes that enable this lethal protein to transition from a cytosolic form to a membrane-bound form remain poorly understood. Elucidating this process is a necessary step in the development of BAX as a novel therapeutic target for the treatment of cancer, as well as autoimmune and neurodegenerative disorders. A three-part study, utilizing computational modeling and biological assays, was used to examine how BAX, and similar proteins, transition to membranes. The first part tested the hypothesis that the C-terminal α9 helix regulates the distribution and activity of BAX by functioning as a "molecular switch" to trigger conformational changes that enable the protein to redistribute from the cytosol to the mitochondrial membrane. Computational analysis, tested in biological assays, revealed a new finding: the α9 helix can dock into a hydrophobic groove of BAX in two opposite directions, in a self-associated, forward orientation and in a previously unknown reverse orientation that enables dimerization and apoptosis. Peptides made to mimic the α9 helix were able to induce the mitochondrial translocation of BAX, but not when key residues in the hydrophobic groove were mutated. Such findings indicate that the α9 helix of BAX can function as a "molecular switch" to mediate occupancy of the hydrophobic groove and regulate the membrane-binding activity of BAX. This new discovery contributes to the understanding of how BAX functions during apoptosis and can lead to the design of new therapeutic approaches based on manipulating the occupancy of the hydrophobic groove. The second and third parts of the study used computational modeling to examine how the helical stability of proteins relates to their ability to functionally transition. Analysis of BAX, as a prototypical transitioning protein, revealed that it has a broad variation in the distribution of its helical interaction energy. This observation led to the hypothesis, tested here, that proteins which undergo 3D structural transitions during execution of their function have broad variations in the distribution of their helical interaction energies. The result of this study, after examination of a large group of all-alpha proteins, was the development of a novel, predictive computational method, based on measuring helical interaction energies, which can be used to identify new proteins that undergo structural transitioning in the execution of their function. When this method was used to examine transitioning in other members of the BCL-2 family, a strong agreement with published experimental findings resulted. Further, it was revealed that the binding of a ligand, such as a small peptide, to a protein can have significant stabilizing or destabilizing influences that impact upon the activation and function of the protein. This computational analysis thus contributes to a better understanding of the function and regulation of the BCL-2 family members and also offers the means by which peptide mimics that modulate protein activity can be designed for testing in therapeutic endeavors.
-
Date Issued
-
2007
-
Identifier
-
CFE0001865, ucf:47392
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001865
-
-
Title
-
STUDIES OF A QUANTUM SCHEDULING ALGORITHM AND ON QUANTUM ERROR CORRECTION.
-
Creator
-
Lu, Feng, Marinescu, Dan, University of Central Florida
-
Abstract / Description
-
Quantum computation has been a rich field of study for decades because it promises spectacular advances, some of which may run counter to our classically rooted intuitions. At the same time, quantum computation is still in its infancy in both theoretical and practical areas. Efficient quantum algorithms are very limited in number and scope; no real breakthrough has yet been achieved in physical implementations. Grover's search algorithm can be applied to a wide range of problems; even problems not generally regarded as searching problems can be reformulated to take advantage of quantum parallelism and entanglement, leading to algorithms which show a square-root speedup over their classical counterparts. This dissertation discusses a systematic way to formulate such problems and gives as an example a quantum scheduling algorithm for an R||C_max problem. This dissertation shows that a quantum solution to such problems is not only feasible but in some cases advantageous. The complexity of the error correction circuitry forces us to design quantum error correction codes capable of correcting only a single error per error correction cycle. Yet, time-correlated errors are common in physical implementations of quantum systems; an error corrected during a certain cycle may reoccur in a later cycle due to physical processes specific to each physical implementation of the qubits. This dissertation discusses quantum error correction for a restricted class of time-correlated errors in a spin-boson model. The algorithm proposed allows the correction of two errors per error correction cycle, provided that one of them is time-correlated. The algorithm can be applied to any stabilizer code, perfect or non-perfect, and simplifies the circuit complexity significantly compared to classic quantum error correction codes.
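For readers unfamiliar with the square-root speedup, a minimal Python state-vector simulation of generic Grover search (not the dissertation's scheduling formulation):

```python
import numpy as np

def grover(n_qubits, marked):
    """Amplify marked basis states in ~(pi/4)*sqrt(N/M) iterations."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))                 # uniform superposition
    iters = int(np.pi / 4 * np.sqrt(N / len(marked)))
    for _ in range(iters):
        psi[list(marked)] *= -1                      # oracle phase flip
        psi = 2 * psi.mean() - psi                   # diffusion about the mean
    return int(np.argmax(psi ** 2)), iters

best, iters = grover(10, marked={137})
print(best, iters)   # 137, found in ~25 iterations rather than ~512 probes
```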
-
Date Issued
-
2007
-
Identifier
-
CFE0001873, ucf:47391
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001873
-
-
Title
-
RESOURCE-CONSTRAINT AND SCALABLE DATA DISTRIBUTION MANAGEMENT FOR HIGH LEVEL ARCHITECTURE.
-
Creator
-
Gupta, Pankaj, Guha, Ratan, University of Central Florida
-
Abstract / Description
-
In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications, where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than the region-matching, fixed-grid, and dynamic-grid DDM algorithms, as it avoids the quadratic computation step involved in the other algorithms. The simulation results show that the P-Pruning DDM algorithm uses memory at run-time more efficiently and requires fewer multicast groups than the three other algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement and present a performance evaluation study of it in a memory-constraint environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes memory at run-time more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computation complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates is changing at run-time. The dynamic P-Pruning algorithm investigates the changes among federates' regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines. We also describe implementation details involved in the integration of the P-Pruning algorithm with FDK and report on our experiences.
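To illustrate the kind of quadratic step being avoided (an illustrative one-dimensional endpoint-sort sweep, not the P-Pruning algorithm itself): instead of testing every publisher-subscriber pair, sort the extent endpoints once and record a match only when one region opens while the other is active.

```python
def overlapping_pairs(pubs, subs):
    """pubs, subs: dicts id -> (lo, hi) extents. Returns overlapping pairs."""
    events = []
    for pid, (lo, hi) in pubs.items():
        events += [(lo, 0, 'P+', pid), (hi, 1, 'P-', pid)]
    for sid, (lo, hi) in subs.items():
        events += [(lo, 0, 'S+', sid), (hi, 1, 'S-', sid)]
    events.sort()   # opens sort before closes at equal coords, so touching counts
    active_p, active_s, matches = set(), set(), set()
    for _, _, kind, ident in events:
        if kind == 'P+':
            matches |= {(ident, s) for s in active_s}
            active_p.add(ident)
        elif kind == 'S+':
            matches |= {(p, ident) for p in active_p}
            active_s.add(ident)
        elif kind == 'P-':
            active_p.discard(ident)
        else:
            active_s.discard(ident)
    return matches

print(overlapping_pairs({'p1': (0, 5)}, {'s1': (3, 9), 's2': (6, 8)}))
# {('p1', 's1')}
```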
-
Date Issued
-
2007
-
Identifier
-
CFE0001949, ucf:47447
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001949
-
-
Title
-
TRANSFORMING LEARNING INTO A CONSTRUCTIVE COGNITIVE AND METACOGNITIVE ACTIVITY: USE OF A GUIDED LEARNER-GENERATED INSTRUCTIONAL STRATEGY WITHIN COMPUTER-BASED TRAINING.
-
Creator
-
Cuevas, Haydee, Bowers, Clint, University of Central Florida
-
Abstract / Description
-
This study explored the effectiveness of embedding a guided, learner-generated instructional strategy (the query method), designed to support learners' cognitive and metacognitive processes, within the context of a computer-based complex task training environment (i.e., principles of flight in the aviation domain). The queries were presented as "stop and think" exercises in an open-ended question format that asked learners to generate either simple (low-level elaboration) or complex (high-level elaboration) sentences from a list of key training concepts. Results consistently highlighted the benefit of presenting participants with low-level elaboration queries, as compared to the no-query condition or high-level elaboration queries. In terms of post-training cognitive outcomes, participants presented with the low-level elaboration queries exhibited significantly more accurate knowledge organization (indicated by similarity to an expert model), better acquisition of perceptual knowledge, and superior performance on integrative knowledge assessment involving the integration and application of task-relevant concepts. Consistent with previous studies, no significant differences in performance were found on basic factual knowledge assessment. Presentation of the low-level elaboration queries also significantly improved the training program's instructional efficiency; that is, greater performance was achieved with less perceived cognitive effort. In terms of post-training metacognitive outcomes, participants presented with the low-level elaboration queries exhibited significantly greater metacomprehension accuracy and more effective metacognitive self-regulation during training. Contrary to predictions, incorporating the high-level elaboration queries into the training consistently failed, with only a few exceptions, to produce significantly better post-training outcomes than the no-query or the low-level elaboration query training conditions. The results of this study are discussed in terms of the theoretical implications for garnering a better understanding of the cognitive and metacognitive factors underlying the learning process. Practical implications for training design are presented within the context of cognitive load theory. Specifically, the increased cognitive processing of the training material associated with the high-level elaboration queries may have imposed too great a cognitive load on participants during training, minimizing the cognitive resources available for achieving a deeper, integrative understanding of the training concepts and hindering successful performance on the cognitive measures. The discussion also highlights the need for a multi-faceted approach to training evaluation.
-
Date Issued
-
2004
-
Identifier
-
CFE0000265, ucf:46221
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000265
-
-
Title
-
HIGH PERFORMANCE DATA MINING TECHNIQUES FOR INTRUSION DETECTION.
-
Creator
-
Siddiqui, Muazzam Ahmed, Lee, Joohan, University of Central Florida
-
Abstract / Description
-
The rapid growth of computers transformed the way in which information and data are stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed which scrutinize data for deviations from the normal behavior of a user or system, or search for a known signature within the data. These systems are termed Intrusion Detection Systems (IDS). They employ different techniques varying from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software, or network devices. These sources produce huge datasets with tens of millions of records. To analyze this data, data mining is used, a process for extracting useful patterns from large volumes of information. A major obstacle in the process is that traditional data mining and learning algorithms are overwhelmed by the bulk volume and complexity of the available data. This makes these algorithms impractical for time-critical tasks like intrusion detection because of their large execution times. Our approach to this issue makes use of high performance data mining techniques to expedite the process by exploiting the parallelism in existing data mining algorithms and the underlying hardware. We show how high performance and parallel computing can be used to scale data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks. We evaluate the performance of the developed models in terms of speedup over traditional algorithms, prediction rate, and false alarm rate. Our results showed that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, which makes them well suited for time-critical tasks like intrusion detection.
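A minimal Python sketch of the data-parallel pattern (a hypothetical logistic-regression "one-layer network"; the dissertation parallelizes backpropagation and fuzzy ARTMAP across cluster nodes):

```python
import numpy as np

def local_gradient(w, X, y):
    """One worker's gradient on its shard of the audit data."""
    p = 1 / (1 + np.exp(-X @ w))          # forward pass
    return X.T @ (p - y) / len(y)         # backward pass

def parallel_step(w, shards, lr=0.1):
    """Average equally sized shards' gradients (an all-reduce) and step."""
    grads = [local_gradient(w, X, y) for X, y in shards]  # one per worker
    return w - lr * np.mean(grads, axis=0)
```

On a real cluster the per-shard gradients are computed on separate nodes and combined over the network; the arithmetic is the same.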
-
Date Issued
-
2004
-
Identifier
-
CFE0000056, ucf:46142
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000056
-
-
Title
-
ADAPTIVE INTELLIGENT USER INTERFACES WITH EMOTION RECOGNITION.
-
Creator
-
Nasoz, Fatma, Lisetti, Christine L., University of Central Florida
-
Abstract / Description
-
The focus of this dissertation is on creating Adaptive Intelligent User Interfaces to facilitate enhanced natural communication during Human-Computer Interaction by recognizing users' affective states (i.e., emotions experienced by the users) and responding to those emotions by adapting to the current situation via an affective user model created for each user. Controlled experiments were designed and conducted in a laboratory environment and in a Virtual Reality environment to collect physiological data signals from participants experiencing specific emotions. Algorithms (k-Nearest Neighbor [KNN], Discriminant Function Analysis [DFA], Marquardt-Backpropagation [MBP], and Resilient Backpropagation [RBP]) were implemented to analyze the collected data signals and to find unique physiological patterns of emotions. An emotion elicitation experiment with movie clips was conducted to elicit Sadness, Anger, Surprise, Fear, Frustration, and Amusement from participants. Overall, the three algorithms KNN, DFA, and MBP could recognize emotions with 72.3%, 75.0%, and 84.1% accuracy, respectively. A driving simulator experiment was conducted to elicit driving-related emotions and states (panic/fear, frustration/anger, and boredom/sleepiness). The KNN, MBP, and RBP algorithms were used to classify the physiological signals by corresponding emotions. Overall, KNN could classify these three emotions with 66.3% accuracy, MBP with 76.7%, and RBP with 91.9%. Adaptation of the interface was designed to provide multi-modal feedback to users about their current affective state and to respond to users' negative emotional states in order to decrease the possible negative impacts of those emotions. A Bayesian Belief Network formalization was employed to develop the user model that enables the intelligent system to appropriately adapt to the current context and situation by considering user-dependent factors such as personality traits and preferences.
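A minimal Python sketch of the simplest of the four classifiers (features assumed to be fixed-length vectors extracted from the physiological signals; the dissertation's exact feature set is not shown here):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority emotion label among the k nearest training vectors."""
    d = np.linalg.norm(train_X - x, axis=1)         # Euclidean distances
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]
```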
-
Date Issued
-
2004
-
Identifier
-
CFE0000126, ucf:46201
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000126
-
-
Title
-
DEPTH FROM DEFOCUSED MOTION.
-
Creator
-
Myles, Zarina, da Vitoria Lobo, Niels, University of Central Florida
-
Abstract / Description
-
Motion in depth and/or zooming causes defocus blur. This work presents a solution to the problem of using defocus blur and optical flow information to compute depth at points that defocus when they move. We first formulate a novel algorithm which recovers defocus blur and affine parameters simultaneously. Next, we formulate a novel relationship (the blur-depth relationship) between defocus blur, relative object depth, and three parameters based on camera motion and intrinsic camera parameters. We can handle the situation where a single image has points which have defocused, got sharper, or are focally unperturbed. Moreover, our formulation is valid regardless of whether the defocus is due to the image plane being in front of or behind the point of sharp focus. The blur-depth relationship requires a sequence of at least three images taken with the camera moving either towards or away from the object. It can be used to obtain an initial estimate of relative depth using one of several non-linear methods. We demonstrate a solution based on the Extended Kalman Filter in which the measurement equation is the blur-depth relationship. The estimate of relative depth is then used to compute an initial estimate of camera motion parameters. In order to refine depth values, the values of relative depth and camera motion are then input into a second Extended Kalman Filter in which the measurement equations are the discrete motion equations. This set of cascaded Kalman filters can be employed iteratively over a longer sequence of images in order to further refine depth. We conduct several experiments on real scenery in order to demonstrate the range of object shapes that the algorithm can handle. We show that fairly good estimates of depth can be obtained with just three images.
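For context on why blur encodes depth, a standard thin-lens relation (a textbook formula, not the thesis's blur-depth relationship, which additionally involves camera-motion and intrinsic parameters):

```latex
b = \frac{D\,v}{2}\left|\frac{1}{f} - \frac{1}{v} - \frac{1}{u}\right|
```

Here b is the blur-circle radius on the image plane, D the aperture diameter, f the focal length, v the lens-to-sensor distance, and u the object depth; b vanishes exactly at the plane of sharp focus, and motion in depth (changing u) or zooming (changing f and v) changes the blur on either side of that plane.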
-
Date Issued
-
2004
-
Identifier
-
CFE0000135, ucf:46179
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000135
-
-
Title
-
PERFORMANCE SUPPORT AND USABILITY: AN EXPERIMENTAL STUDY OF ELECTRONIC PERFORMANCE SUPPORT INTERFACES.
-
Creator
-
Rawls, Charles, Hirumi, Atsusi, University of Central Florida
-
Abstract / Description
-
This study evaluated the usability of two types of performance-support interfaces that were designed using informational and experiential approaches. The experiment sought to determine whether there is a relationship between usability and the informational and experiential approaches. The general population under study was undergraduate education major students from the University of Central Florida. From three instructor-led educational technology classes, 83 students were solicited to participate in the study by completing a class activity; a total of 63 students participated. Participants completed a task and a questionnaire. Students were predominantly English-speaking Caucasian female education majors between the ages of 19 and 20; most of them were sophomores or juniors working part time. They possessed moderately low to high computer skills, and most considered themselves to have intermediate or expert Internet skills. An experimental posttest-only comparison group research design was used to test the hypotheses posited for this study. The participants were randomly assigned to either the informational interface group (X1) or the experiential interface group (X2), and the experiment was conducted electronically via a Web-based Content Management System (CMS). The observed data consisted of five outcome measures: efficiency, errors, intuitiveness, satisfaction, and student performance. Two instruments, a checklist and an online usability questionnaire, were used to measure the five dependent variables. The CMS was used as the vehicle to distribute and randomize the two interfaces, obtain informed consent, distribute the instructions, distribute the online questionnaire, and collect data. First, a checklist was used to assess the students' performance in completing their task, which was a copyright issue request letter. The checklist was designed as a performance criterion tool for the researcher, instructor, and participants to use. The researcher and instructor constructed the checklist to grade copyright request letters and determine students' performance. The participants had the opportunity to use the checklist as a performance criterion to create the task document (copyright request letter). The checklist consisted of ten basic yet critical sections of a successful copyright request letter. Second, an online usability questionnaire was constructed based on the Purdue Usability Testing Questionnaire (PUTQ) questions to measure interface efficiency, intuitiveness, errors, and satisfaction. While these test items have been deemed important for testing the usability of a particular system, for purposes of this study test items were modified, deleted, and added to ensure content validity; the changes to the PUTQ were made to fulfill a blueprint. The new survey, the University of Central Florida Usability Questionnaire (UCFUQ), consisting of 20 items, was implemented in a pilot study to ensure reliability and content validity. A pilot study of the instrument yielded a reliability coefficient of .9450, and the final online usability instrument yielded a reliability coefficient of .9321. This study tested two approaches to user interface design for Electronic Performance Support (EPS) using two HTML interface templates and the information from an existing training module. There were two interventions consisting of two interface types: informational and experiential. The SPSS Graduate Pack 10.0 for Windows was used for data analysis and statistical reporting in this study. A t-test was conducted to determine if a difference existed between the two interface means. An ANOVA was conducted to determine if there was an interaction between the interface group means and the demographic data, factored among the five dependent variables. Results of this study indicated that students at the University of Central Florida reported no differences between the two interface types. It had been postulated that the informational interface would yield a higher mean score because of its implementation of HCI guidelines, conventions, and standards. However, it was concluded that the informational interface may not be a more usable interface; users may be as inclined to use the experiential interface as the informational interface.
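The abstract does not say which reliability coefficient was computed; assuming an internal-consistency measure such as Cronbach's alpha (a common choice for multi-item usability questionnaires), the calculation is:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = questionnaire items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```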
-
Date Issued
-
2005
-
Identifier
-
CFE0000807, ucf:46678
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000807
-
-
Title
-
SUBSTITUTING LIVE TRAINING WITH VIRTUAL TRAINING BY MEANS OF A COMMERCIAL OFF THE SHELF, FIRST PERSON SHOOTER COMPUTER GAME AND THE EFFECT ON PERFORMANCE.
-
Creator
-
Kneuper II, George, Williams, Kent, University of Central Florida
-
Abstract / Description
-
This research measures the change in Army ROTC cadets' tactical performance when up to 75% of their live tactical training is replaced with training done on a computer. An ROTC instructor from any of the 270 programs across the nation can take this research and implement a training plan utilizing a relatively cheap, off-the-shelf computer game, saving the program cadet and cadre time, training dollars, and transportation, equipment, and training-area resources, while seeing no degradation in cadets' performance. Little research has been done on the effect of replacing live simulation with virtual simulation. With this in mind, six groups of individuals were run through the experiment over five months at various mixes of virtual and live training and scored across 16 leadership skills. These results were then formulated into a guideline defining how much training should be virtual and how much live in order to optimize an individual's performance.
-
Date Issued
-
2006
-
Identifier
-
CFE0000962, ucf:46692
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000962