Current Search: errors
-
-
Title
-
The Impact of Elementary Mathematics Workshops on Mathematics Knowledge for Parenting (MKP) and Beliefs About Learning Mathematics.
-
Creator
-
Eisenreich, Heidi, Dixon, Juli, Ortiz, Enrique, Andreasen, Janet, Brooks, Lisa, Hahs-Vaughn, Debbie, University of Central Florida
-
Abstract / Description
-
The purpose of this study was to investigate the extent to which parents of first, second, and third grade students who attended a two-day workshop on mathematics strategies differed on average and over time, as compared to parents who did not attend the workshops. The following areas were measured: mathematics content knowledge, beliefs about learning mathematics, ability to identify correct student responses regarding mathematics, ability to identify student errors in solving mathematics problems, methods used to solve problems, and comfort level with manipulatives.
-
Date Issued
-
2016
-
Identifier
-
CFE0006101, ucf:52877
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006101
-
-
Title
-
Equivalency Analysis of Sidestick Controller Modes During Manual Flight.
-
Creator
-
Rummel, Alex, Karwowski, Waldemar, Elshennawy, Ahmad, Hancock, Peter, University of Central Florida
-
Abstract / Description
-
Equivalency analysis is a statistical procedure that can enhance the findings of an analysis of variance when non-significant differences are identified. The demonstration of functional equivalence, or the absence of practical differences, is useful to designers introducing new technologies to the flight deck. Proving functional equivalence is an effective means to justify the implementation of new technologies that must be "the same or better" than previous technology. This study examines the functional equivalency of three operational modes of a new active control sidestick during normal operations while performing manual piloting tasks. Data from a between-subjects, repeated-measures simulator test were analyzed using analysis of variance and equivalency analysis. Ten pilots participated in the simulator test, which was conducted in a fixed-base, business jet simulator. Pilots performed maneuvers such as climbing and descending turns and ILS approaches using three sidestick modes: active, unlinked, and passive. RMS errors for airspeed, flight path angle, and bank angle were measured, in addition to touchdown points on the runway relative to the centerline and runway threshold. Results indicate that the three operational modes are functionally equivalent when performing climbing and descending turns. Active and unlinked modes were found to be functionally equivalent when flying an ILS approach, but the passive mode, by a small margin, was not found to be functionally equivalent.
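The abstract does not name the exact equivalence procedure; one common choice for demonstrating the absence of practical differences is the two one-sided tests (TOST) approach. The following minimal Python sketch, with hypothetical data and an assumed equivalence margin, illustrates the idea; it is not the study's analysis code.

import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin, alpha=0.05):
    # Two one-sided tests: declare equivalence when both one-sided
    # p-values fall below alpha, i.e. the mean difference is shown to
    # lie inside [-margin, +margin].
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    df = len(a) + len(b) - 2
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    p = max(p_lower, p_upper)
    return p, p < alpha

# Hypothetical RMS bank-angle errors (degrees) for two sidestick modes.
rng = np.random.default_rng(1)
active = rng.normal(2.0, 0.5, 10)
passive = rng.normal(2.1, 0.5, 10)
p, equivalent = tost_equivalence(active, passive, margin=0.5)
print(f"TOST p = {p:.3f}, functionally equivalent: {equivalent}")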
-
Date Issued
-
2018
-
Identifier
-
CFE0007242, ucf:52226
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007242
-
-
Title
-
Chemometric Applications to a Complex Classification Problem: Forensic Fire Debris Analysis.
-
Creator
-
Waddell, Erin, Sigman, Michael, Belfield, Kevin, Campiglia, Andres, Yestrebsky, Cherie, Ni, Liqiang, University of Central Florida
-
Abstract / Description
-
Fire debris analysis currently relies on visual pattern recognition of total ion chromatograms, extracted ion profiles, and target compound chromatograms to identify the presence of an ignitable liquid according to the ASTM International E1618-10 standard method. For large data sets, this methodology is time-consuming and subjective, with an accuracy that depends on the skill and experience of the analyst. This research aimed to develop an automated classification method for large data sets and investigated the use of the total ion spectrum (TIS). The TIS is calculated by taking an average mass spectrum across the entire chromatographic range and has been shown to contain sufficient information content for the identification of ignitable liquids. The TIS of ignitable liquids and substrates, defined as common building materials and household furnishings, were compiled into model data sets. Cross-validation (CV) and fire debris samples, obtained from laboratory-scale and large-scale burns, were used to test the models. An automated classification method was developed using computational software, written in-house, that applies a multi-step classification scheme to detect ignitable liquid residues in fire debris samples and assign them to the classes defined in ASTM E1618-10. Classifications were made using linear discriminant analysis, quadratic discriminant analysis (QDA), and soft independent modeling of class analogy (SIMCA). Overall, the highest correct classification rates were achieved using QDA for the first step of the scheme and SIMCA for the remaining steps. In the first step of the classification scheme, correct classification rates of 95.3% and 89.2% were obtained for the CV test set and fire debris samples, respectively. Correct classification rates of 100% were achieved for both data sets in the majority of the remaining steps, which used SIMCA for classification. In this research, the first statistically valid error rates for fire debris analysis have been developed through cross-validation of large data sets. The error rates reduce the subjectivity associated with current methods and provide a level of confidence in sample classification that does not currently exist in forensic fire debris analysis.
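As a rough illustration of the total ion spectrum described in the abstract, the sketch below averages a synthetic GC-MS intensity matrix over all scans and feeds the normalized TIS vectors to a quadratic discriminant classifier. The toy data, shapes, and the use of scikit-learn's QDA are illustrative assumptions, not the dissertation's in-house software.

import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def total_ion_spectrum(gcms):
    # gcms: (n_scans, n_mz) intensity matrix from one GC-MS run.
    tis = gcms.mean(axis=0)        # average mass spectrum over all scans
    return tis / tis.sum()         # normalize to unit total intensity

# Toy data set: normalized TIS vectors for two classes.
rng = np.random.default_rng(0)
runs = [rng.gamma(2.0, 1.0, (300, 10)) for _ in range(60)]
X = np.vstack([total_ion_spectrum(r) for r in runs])
y = np.repeat([0, 1], 30)          # 0 = substrate, 1 = ignitable liquid
X[y == 1, :3] *= 1.5               # crude synthetic class difference
X /= X.sum(axis=1, keepdims=True)  # renormalize after the perturbation

qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X, y)
print("resubstitution accuracy:", qda.score(X, y))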
-
Date Issued
-
2013
-
Identifier
-
CFE0004954, ucf:49586
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004954
-
-
Title
-
The Effect of Feedback Medium on Accuracy with English Articles.
-
Creator
-
Giltner, Elizabeth, Nutta, Joyce, Purmensky, Kerry, Clark, M. H., Kaplan, Jeffrey, University of Central Florida
-
Abstract / Description
-
Developing and demonstrating English proficiency is a critical skill for non-native English speakers (NNESs) who wish to study in American universities. Unlike their native English speaker (NES) counterparts, NNES students who apply for university admission are required to demonstrate their proficiency in English via tests, such as the Test of English as a Foreign Language (TOEFL), that measure an NNES's ability to understand, speak, read, and write English. Although many students attain the minimum scores required for admission, there is a large population of adult NNESs enrolled in intensive English programs (IEPs) designed to help them improve their proficiency in English and gain admission into mainstream university courses. Given that many university instructors require the submission of written work that demonstrates students' understanding of course content, perhaps the most important academic skill developed in IEPs is writing. Furthermore, the lack of attention given to addressing grammatical errors at the tertiary level highlights IEP instructors' need for effective and efficient methods of addressing grammatical errors in NNES writing.

The present quantitative study used two experimental designs, a pretest-posttest design and a posttest-only design with proxy pretest (Campbell & Stanley, 1963), to investigate the efficacy of two types of indirect corrective feedback (CF) for improving adult, IEP-enrolled, intermediate-level NNES writers' (participants) grammatical accuracy in academic papers. Grammatical accuracy was measured by counting the number of errors participants committed when using English definite and indefinite articles in academic papers. The independent variable was the type of CF participants were randomly selected to receive: either screencast corrective feedback (SCF) or written corrective feedback (WCF). The dependent variable, which measured the effect of the CF given, was the number of errors participants made with English definite and indefinite articles on three compositions completed to satisfy the requirements of their IEP writing class. The results demonstrated that participants made similar gains in grammatical accuracy when using CF to revise descriptive compositions. These results are in keeping with previous studies that showed the usefulness of CF for improving grammatical accuracy on revised compositions (Bitchener, 2008; Bitchener & Knoch, 2008, 2009a, 2009b, 2010a). However, the improvement observed on the revised descriptive compositions did not transfer to new classification essays, regardless of the type of CF participants received. Participants' lack of grammatical accuracy on new compositions of a different genre illustrates the difficulty English articles pose for NNESs when writing, and the need for multiple exposures to CF and continued writing practice to develop NNESs' ability to use English articles consistently and accurately.

The main implication of the present study lies in the recommendation that NNES students be provided with CF, along with systematic instruction in how to use the CF they receive, so that they become more self-sufficient learners and writers of English.
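A minimal sketch of the kind of gain-score contrast such a design supports, with hypothetical article-error counts for the SCF and WCF groups; this is not the study's actual analysis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Article errors per composition, pretest vs. revised draft (hypothetical).
scf_pre, scf_post = rng.poisson(8, 25), rng.poisson(5, 25)  # screencast CF
wcf_pre, wcf_post = rng.poisson(8, 25), rng.poisson(5, 25)  # written CF

scf_gain = scf_pre - scf_post  # positive = fewer article errors after CF
wcf_gain = wcf_pre - wcf_post
t, p = stats.ttest_ind(scf_gain, wcf_gain)
print(f"t = {t:.2f}, p = {p:.3f} (similar gains across feedback media)")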
-
Date Issued
-
2016
-
Identifier
-
CFE0006106, ucf:51187
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006106
-
-
Title
-
Ultra-wideband Spread Spectrum Communications using Software Defined Radio and Surface Acoustic Wave Correlators.
-
Creator
-
Gallagher, Daniel, Malocha, Donald, Delfyett, Peter, Richie, Samuel, Weeks, Arthur, Youngquist, Robert, University of Central Florida
-
Abstract / Description
-
Ultra-wideband (UWB) communication technology offers inherent advantages such as the ability to coexist with previously allocated Federal Communications Commission (FCC) frequencies, simple transceiver architecture, and high performance in noisy environments. Spread spectrum techniques offer additional improvements beyond conventional pulse-based UWB communications. This dissertation implements a multiple-access UWB communication system using a surface acoustic wave (SAW) correlator receiver with orthogonal frequency coding and a software defined radio (SDR) base station transmitter.

Orthogonal frequency coding (OFC) and pseudorandom noise (PN) coding provide a means for spreading the UWB data. The use of OFC increases the correlator processing gain (PG) beyond that of code division multiple access (CDMA), providing added code diversity, improved pulse ambiguity, and superior performance in noisy environments. Use of SAW correlators reduces the complexity and power requirements of the receiver architecture by eliminating many of the components otherwise needed and by reducing the signal processing and timing requirements necessary for digital matched filtering of the complex spreading signal.

The OFC receiver correlator code sequence is hard-coded in the device due to the physical SAW implementation. The use of modern SDR forms a dynamic base station architecture which is able to programmatically generate a digitally modulated transmit signal. An embedded Xilinx Zynq system on chip (SoC) was used to implement the SDR system, taking advantage of recent advances in digital-to-analog converter (DAC) sampling rates. SDR waveform samples are generated as baseband in-phase and quadrature (I & Q) pairs and upconverted to a 491.52 MHz operational frequency.

The development of the OFC SAW correlator ultimately used in the receiver is presented along with a variety of advanced SAW correlator device embodiments. Each SAW correlator device was fabricated on lithium niobate (LiNbO3) with fractional bandwidths in excess of 20%. The SAW correlator device used in the system was implemented with a center frequency of 491.52 MHz, matching the SDR transmit frequency. Parasitic electromagnetic feedthrough becomes problematic once the SAW correlator is packaged and fixtured, due to the wide bandwidths and high operational frequency. Techniques for reducing parasitic feedthrough are discussed, with before-and-after results showing approximately 10:1 improvement.

Correlation and demodulation results are presented using the SAW correlator receiver operating in a UWB communication system. Bipolar phase shift keying (BPSK) techniques demonstrate OFC modulation and demodulation for a test binary bit sequence. Matched OFC code reception is compared to a mismatched, or cross-correlated, sequence after correlation and demodulation. Finally, signal-to-noise power ratio (SNR) performance results for the SAW correlator under corruption by a wideband noise source are presented.
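A purely digital stand-in for the matched versus mismatched correlation behavior described in the abstract: a BPSK bit stream is spread with a PN code (rather than the SAW-implemented orthogonal frequency coding) and the receiver correlates against the matched code and a mismatched one. All parameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
chips = 64
code_a = rng.choice([-1.0, 1.0], chips)   # matched spreading code
code_b = rng.choice([-1.0, 1.0], chips)   # mismatched (cross-correlated) code
bits = rng.choice([-1.0, 1.0], 8)         # BPSK test bit sequence

tx = np.concatenate([b * code_a for b in bits])  # spread BPSK waveform
rx = tx + rng.normal(0.0, 2.0, tx.size)          # wideband noise corruption

# Digital matched filtering, sampled at the bit-aligned correlation peaks.
peaks_a = np.correlate(rx, code_a, mode="valid")[::chips]
peaks_b = np.correlate(rx, code_b, mode="valid")[::chips]
detected = np.sign(peaks_a)
print("bit errors with matched code:", int(np.sum(detected != bits)))
print("mean |peak|, matched vs. mismatched: "
      f"{np.abs(peaks_a).mean():.1f} vs. {np.abs(peaks_b).mean():.1f}")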
-
Date Issued
-
2015
-
Identifier
-
CFE0005794, ucf:50054
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005794
-
-
Title
-
On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
-
Creator
-
Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
-
Abstract / Description
-
We study the Distributed Estimation (DES) problem, where several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of the observation noise at the sensors, quantization errors at the sensors, the available power and bandwidth of the network, the quality of the communication channels between the sensors and the FC, and the choice of fusion rule at the FC. Taking all of these contributing factors into account and implementing a DES system which minimizes the MSE and satisfies all constraints is a challenging task. In order to probe different aspects of this challenging task, we identify, formulate, and address the following three problems:

1. Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise. The communication channels between the sensors and the FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels. The unknown to be estimated is a Gaussian vector. Sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: what is the best fusion rule at the FC, and what are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? To answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE.

2. Consider an inhomogeneous WSN with an FC that is tasked with estimating a scalar Gaussian unknown. The sensors are equipped with uniform multi-bit quantizers, and the communication channels are modeled as Binary Symmetric Channels (BSCs). In contrast to the former problem, the sensors experience independent multiplicative noise (in addition to additive noise). The natural questions in this scenario are: how does multiplicative noise affect the DES system performance, and how does it affect the resource allocation for sensors with respect to the case where there is no multiplicative noise? We propose a linear fusion rule for the FC and derive the associated MSE in closed form. We propose several rate allocation schemes, with different levels of complexity, which minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on DES system performance and its dynamics. We also derive the Bayesian Cramer-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: what is the minimum required bandwidth of the network to satisfy a predetermined target MSE?

3. Assuming the framework of Bayesian DES of a Gaussian unknown with additive and multiplicative Gaussian noises involved, we answer the following question: can multiplicative noise improve the DES performance in any scenario? The answer is yes, and we call this phenomenon the 'enhancement mode' of multiplicative noise. By deriving different lower bounds on the MSE, such as the BCRLB, the Weiss-Weinstein Bound (WWB), the Hybrid CRLB (HCRLB), the Nayak Bound (NB), and the Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and where it is unknown. We also compare the performance of well-known estimators against the derived bounds, to ensure the practicability of the mentioned enhancement modes.
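A toy simulation of the setup in the first problem, reduced to a scalar unknown: Gaussian observations are quantized with uniform multi-bit quantizers, each bit crosses a binary symmetric channel, and the FC applies equal-weight linear fusion. All parameters are assumptions for illustration; the dissertation's optimized allocation schemes are not reproduced here.

import numpy as np

rng = np.random.default_rng(4)
K, trials, nbits, W = 10, 20000, 3, 3.0  # sensors, runs, bits/sensor, clip range
p_flip = 0.05                            # BSC crossover probability
levels = 2 ** nbits

def quantize(x):                         # uniform quantizer on [-W, W]
    cells = np.floor((x + W) / (2 * W) * levels)
    return np.clip(cells, 0, levels - 1).astype(int)

def dequantize(idx):                     # reconstruct at cell midpoints
    return -W + (idx + 0.5) * (2 * W) / levels

theta = rng.normal(0.0, 1.0, trials)                      # scalar Gaussian unknown
obs = theta[:, None] + rng.normal(0.0, 0.5, (trials, K))  # additive sensing noise
idx = quantize(obs)
for b in range(nbits):                   # each bit flips independently on its BSC
    flips = rng.random((trials, K)) < p_flip
    idx ^= flips.astype(int) << b
est = dequantize(idx).mean(axis=1)       # equal-weight linear fusion at the FC
print("empirical MSE:", np.mean((est - theta) ** 2))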
-
Date Issued
-
2017
-
Identifier
-
CFE0006913, ucf:51698
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006913
-
-
Title
-
Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
-
Creator
-
Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
-
Abstract / Description
-
Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW utilizes genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW, as compared with traditional approaches, has been limited to relatively compact circuits, because the complexity of the genetic algorithm increases with circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of relatively large benchmark circuits as compared to related works. Additionally, Design Diversity (DD) techniques, which are used to aid such evolutionary refurbishment, are proposed, and the efficacy of various DD techniques is quantified and evaluated.

Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs to ensure operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations, owing to decreased feature sizes for electronic devices. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. At design time, vulnerable portions of the circuit, identified using conventional Electronic Design Automation tools, are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions that optimize performance and energy consumption. Primarily, SREL is able to mitigate the timing degradation caused by transistor-aging effects in sub-micron devices by reducing the stress induced on active elements through power-gating. As a result, fewer guardbands need to be included to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime.

The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, this comes at the cost of increased reliability concerns, such as greater performance variations and soft errors, which raise the resiliency requirements for HPC applications: functionality must be ensured within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems. A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault propagation models are derived for each HPC application that can be used to estimate the number of corrupted memory locations at runtime. Additionally, the tradeoffs between performance and vulnerability, and the causal relations between compiler optimization and application vulnerability, are investigated.
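A highly simplified sketch of the fault-injection idea in the final paragraph: flip one random bit of a toy application's floating-point state mid-run and count how many output locations end up corrupted. The dissertation instruments real HPC applications at the compiler level; everything below is an illustrative stand-in.

import numpy as np

rng = np.random.default_rng(5)

def run_stencil(initial, steps, inject_at=None):
    # A 1-D diffusion stencil stands in for the "application"; the real
    # study uses compiler-level instrumentation of full HPC codes.
    state = initial.copy()
    for t in range(steps):
        if t == inject_at:
            # Radiation-induced transient fault: flip one random
            # mantissa bit of one random float64 state element.
            raw = state.view(np.uint64)
            raw[rng.integers(raw.size)] ^= np.uint64(1) << np.uint64(rng.integers(52))
        state = 0.5 * state + 0.25 * (np.roll(state, 1) + np.roll(state, -1))
    return state

init = rng.normal(0.0, 1.0, 4096)
clean = run_stencil(init, steps=50)
faulty = run_stencil(init, steps=50, inject_at=10)
corrupted = faulty != clean              # corruption spreads via the stencil
print(f"corrupted output locations: {corrupted.sum()} / {corrupted.size}")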
-
Date Issued
-
2015
-
Identifier
-
CFE0006206, ucf:52889
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006206