Current Search: errors
-
-
Title
-
STUDIES OF A QUANTUM SCHEDULING ALGORITHM AND ON QUANTUM ERROR CORRECTION.
-
Creator
-
Lu, Feng, Marinescu, Dan, University of Central Florida
-
Abstract / Description
-
Quantum computation has been a rich field of study for decades because it promises spectacular advances, some of which may run counter to our classically rooted intuitions. At the same time, quantum computation is still in its infancy in both theoretical and practical areas. Efficient quantum algorithms are very limited in number and scope; no real breakthrough has yet been achieved in physical implementations. Grover's search algorithm can be applied to a wide range of problems; even problems not generally regarded as searching problems can be reformulated to take advantage of quantum parallelism and entanglement, leading to algorithms which show a square-root speedup over their classical counterparts. This dissertation discusses a systematic way to formulate such problems and gives as an example a quantum scheduling algorithm for an R||C_max problem. It shows that a quantum solution to such problems is not only feasible but in some cases advantageous. The complexity of the error correction circuitry forces us to design quantum error correction codes capable of correcting only a single error per error correction cycle. Yet, time-correlated errors are common in physical implementations of quantum systems; an error corrected during a certain cycle may recur in a later cycle due to physical processes specific to each physical implementation of the qubits. This dissertation discusses quantum error correction for a restricted class of time-correlated errors in a spin-boson model. The proposed algorithm allows the correction of two errors per error correction cycle, provided that one of them is time-correlated. The algorithm can be applied to any stabilizer code, perfect or non-perfect, and simplifies the circuit complexity significantly compared to classic quantum error correction codes.
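As a rough illustration of the square-root speedup mentioned in this abstract, the sketch below simulates Grover amplitude amplification over an unstructured list with NumPy. The list size, the marked index, and the (pi/4)*sqrt(N) iteration count are standard textbook choices, not details taken from the dissertation.

```python
import numpy as np

# Minimal Grover amplitude-amplification sketch over N unstructured items.
# N and the marked index are arbitrary choices for illustration.
N = 64
marked = 42

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
oracle = np.ones(N); oracle[marked] = -1     # phase flip on the marked item

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state = oracle * state                   # oracle: flip sign of marked amplitude
    state = 2 * state.mean() - state         # diffusion: inversion about the mean

print(f"{iterations} Grover iterations (~(pi/4)*sqrt(N)) vs ~{N // 2} classical probes on average")
print("probability of measuring the marked item:", round(state[marked] ** 2, 3))
```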
-
Date Issued
-
2007
-
Identifier
-
CFE0001873, ucf:47391
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001873
-
-
Title
-
INVESTIGATING THE RELIABILITY AND VALIDITY OF KNOWLEDGE STRUCTURE EVALUATIONS: THE INFLUENCE OF RATER ERROR AND RATER LIMITATIONS.
-
Creator
-
Harper-Sciarini, Michelle, Jentsch, Florian, University of Central Florida
-
Abstract / Description
-
The likelihood of conducting safe operations increases when operators have effectively integrated their knowledge of the operation into meaningful relationships, referred to as knowledge structures (KSs). Unlike knowing isolated facts about an operation, well integrated KSs reflect a deeper understanding. It is, however, only the isolated facts that are often evaluated in training environments. To know whether an operator has formed well integrated KSs, KS evaluation methods must be employed. Many of these methods, however, require subjective, human-rated evaluations. These ratings are often prone to the negative influence of a rater's limitations such as rater biases and cognitive limitations; therefore, the extent to which KS evaluations are beneficial is dependent on the degree to which the rater's limitations can be mitigated. The main objective of this study was to identify factors that will mitigate rater limitations and test their influence on the reliability and validity of KS evaluations. These factors were identified through the delineation of a framework that represents how a rater's limitations will influence the cognitive processes that occur during the evaluation process. From this framework, one factor (i.e., operation knowledge) and three mitigation techniques (i.e., frame-of-reference training, reducing the complexity of the KSs, and providing referent material) were identified. Ninety-two participants rated the accuracy of eight KSs over a period of two days. Results indicated that reliability was higher after training. Furthermore, several interactions indicated that the benefits of domain knowledge, referent material, and reduced complexity existed within subsets of the participants. For example, reduced complexity only increased reliability among evaluators with less knowledge of the operation. Also, referent material increased reliability only for those who scored less complex KSs. Both the practical and theoretical implications of these results are provided.
-
Date Issued
-
2010
-
Identifier
-
CFE0002973, ucf:47950
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002973
-
-
Title
-
THE IMPACT OF INTRAORGANIZATIONAL TRUST AND LEARNING ORIENTED CLIMATE ON ERROR REPORTING.
-
Creator
-
Sims, Dana Elizabeth, Salas, Eduardo, University of Central Florida
-
Abstract / Description
-
Insight into opportunities for process improvement provides a competitive advantage through increases in organizational effectiveness and innovation. As a result, it is important to understand the conditions under which employees are willing to communicate this information. This study examined the effects of trust and psychological safety on the willingness to report errors in a medical setting. Trust and psychological safety were measured at the team and leader level. In addition, the moderating effect of a learning-oriented climate at three levels of the organization (i.e., team members, team leaders, organizational) was examined on the relationship between trust and psychological safety and the willingness to report errors. Traditional surveys and social network analysis were employed to test the research hypotheses. Findings indicate that team trust, when examined using traditional surveys, is not significantly associated with informally reporting errors. However, when the social networks within the team were examined, evidence that team trust is associated with informally discussing errors was found. Results also indicate that trust in leadership is associated with informally discussing errors, especially severe errors. These findings were supported and expanded to include a willingness to report errors of all severities when the social network data were explored. Psychological safety, whether within the team or fostered by leadership, was not found to be associated with a willingness to informally report errors. Finally, learning orientation was not found to moderate the relationship between trust and psychological safety and a willingness to report errors. Instead, organizational learning orientation was found to have a main effect on formally reporting errors to risk management and documenting errors in patient charts. Theoretical and practical implications of the study are offered.
-
Date Issued
-
2009
-
Identifier
-
CFE0002818, ucf:48050
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002818
-
-
Title
-
Soft-Error Resilience Framework For Reliable and Energy-Efficient CMOS Logic and Spintronic Memory Architectures.
-
Creator
-
Alghareb, Faris, DeMara, Ronald, Lin, Mingjie, Zou, Changchun, Jha, Sumit Kumar, Song, Zixia, University of Central Florida
-
Abstract / Description
-
The revolution in chip manufacturing processes spanning five decades has proliferated high performance and energy-efficient nano-electronic devices across all aspects of daily life. In recent years, CMOS technology scaling has realized billions of transistors within large-scale VLSI chips to elevate performance. However, these advancements have also continually augmented the impact of Single-Event Transient (SET) and Single-Event Upset (SEU) occurrences, which precipitate a range of Soft-Error (SE) dependability issues. Consequently, soft-error mitigation techniques have become essential to improve systems' reliability. Herein, first, we proposed optimized soft-error resilience designs to improve the robustness of sub-micron computing systems. The proposed approaches were developed to deliver energy efficiency and tolerate double/multiple errors simultaneously while incurring acceptable speed performance degradation compared to prior work. Secondly, the impact of Process Variation (PV) at the Near-Threshold Voltage (NTV) region on redundancy-based SE-mitigation approaches for High-Performance Computing (HPC) systems was investigated to highlight the approach that can realize favorable attributes, such as reduced critical datapath delay variation and low speed degradation. Finally, spin-based devices have recently been widely used to design Non-Volatile (NV) elements such as NV latches and flip-flops, which can be leveraged in normally-off computing architectures for Internet-of-Things (IoT) and energy-harvesting-powered applications. Thus, in the last portion of this dissertation, we design and evaluate soft-error-resilient NV-latching circuits that can achieve intriguing features, such as low energy consumption, high computing performance, and superior soft-error tolerance, i.e., the ability to concurrently tolerate Multiple Node Upsets (MNUs), to potentially become a mainstream solution for aerospace and avionic nanoelectronics. Together, these objectives cooperate to increase the energy efficiency and soft-error resilience of larger-scale emerging NV latching circuits within iso-energy constraints. In summary, addressing these reliability concerns is paramount to the successful deployment of future reliable and energy-efficient CMOS logic and spintronic memory architectures with deeply-scaled devices operating at low voltages.
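The redundancy-based SE-mitigation approaches mentioned above commonly rest on majority voting among replicated logic. The sketch below is a generic triple-modular-redundancy (TMR) illustration, not a circuit from the dissertation; the upset probability and trial count are arbitrary assumptions.

```python
import random

# Minimal triple-modular-redundancy (TMR) sketch: three copies of a bit are
# voted on, so a single upset in one copy is masked. The upset probability is
# an illustrative assumption, not a figure from the dissertation.
def majority(a: int, b: int, c: int) -> int:
    return (a & b) | (b & c) | (a & c)

def run_trials(p_upset: float, trials: int = 100_000) -> tuple[float, float]:
    raw_errors = voted_errors = 0
    for _ in range(trials):
        copies = [1 ^ (random.random() < p_upset) for _ in range(3)]  # true value is 1
        raw_errors += copies[0] != 1                 # unprotected single copy
        voted_errors += majority(*copies) != 1       # TMR-protected value
    return raw_errors / trials, voted_errors / trials

raw, voted = run_trials(p_upset=0.01)
print(f"unprotected error rate ~{raw:.4f}, TMR error rate ~{voted:.6f}")
```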
-
Date Issued
-
2019
-
Identifier
-
CFE0007884, ucf:52765
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007884
-
-
Title
-
A Systems Approach to Assessing, Interpreting and Applying Human Error Mishap Data to Mitigate Risk of Future Incidents in a Space Exploration Ground Processing Operations Environment.
-
Creator
-
Alexander, Tiffaney, McCauley, Pamela, Rabelo, Luis, Karwowski, Waldemar, Nunez, Jose, University of Central Florida
-
Abstract / Description
-
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. Although many existing incident report systems have been beneficial for identifying engineering failures, most of them are not designed around a theoretical framework of human error, thus failing to address core issues and causes of the mishaps. Therefore, it is imperative to develop a human error assessment framework to identify these causes. This research focused on identifying causes of human error and leading contributors to historical Launch Vehicle Ground Processing Operations mishaps based on past mishaps, near mishaps, and close calls. Three hypotheses were discussed. The first hypothesis addressed the impact Human Factors Analysis and Classification System (HFACS) contributing factors (unsafe acts of operators, preconditions for unsafe acts, unsafe supervision, and/or organizational influences) have on human error events (i.e., mishaps, close calls, incidents or accidents) in NASA Ground Processing Operations. The second hypothesis focused on determining whether the HFACS framework conceptual model could be proven to be a viable analysis and classification system to help classify both latent and active underlying contributors and causes of human error in ground processing operations. Lastly, the third hypothesis focused on determining whether a model developed using the Human Error Assessment and Reduction Technique (HEART) could be used as a tool to help determine the probability of human error occurrence in ground processing operations. A model to analyze and classify contributing factors to mishaps or incidents, and to generate predicted Human Error Probabilities (HEPs) of future occurrence, was developed using the HEART and HFACS tools. The research methodology was applied (retrospectively) to six Ground Processing Operations (GPO) scenarios and 30 years of launch-vehicle-related mishap data. Surveys were used to provide Subject Matter Experts' (SMEs') subjective assessments of the impact Error Producing Conditions (EPCs) had on specific tasks. In this research, a binary logistic regression model was generated that identified the four most significant contributing HFACS human error factors. This model provided predicted probabilities of future occurrence of mishaps when these contributing factors are present. The results showed that the HEART and HFACS methods, when modified, can be used as an analysis tool to identify contributing factors, their impact on human error events, and the potential probability of future human error occurrence. This methodology and framework were validated through consistency and comparison to other related research. This research also provides a methodology for other space operations and similar complex operations to follow. Future research should involve broadening the scope to explore and identify other existing models of human error management systems to integrate into complex space systems beyond what was conducted in this research.
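To make the logistic-regression step concrete, the sketch below turns hypothetical HFACS contributing-factor indicators into a predicted probability of a human-error event. The four factor names follow the HFACS tiers named in the abstract, but the intercept and coefficients are invented placeholders, not the study's fitted values.

```python
import numpy as np

# Binary logistic regression sketch: presence/absence of HFACS factors -> HEP.
# Coefficients below are hypothetical illustrations only.
factors = ["unsafe_acts", "preconditions", "unsafe_supervision", "org_influences"]
beta_0 = -3.0                                   # intercept (hypothetical)
beta = np.array([1.8, 1.1, 0.9, 0.6])           # one coefficient per factor (hypothetical)

def predicted_hep(present: dict[str, int]) -> float:
    x = np.array([present.get(f, 0) for f in factors])
    logit = beta_0 + beta @ x
    return 1.0 / (1.0 + np.exp(-logit))         # logistic link

print(predicted_hep({"unsafe_acts": 1, "preconditions": 1}))   # two factors present
print(predicted_hep({}))                                        # baseline, no factors present
```

In an actual analysis the coefficients would be fitted to coded mishap records rather than assumed, as the abstract describes.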
-
Date Issued
-
2016
-
Identifier
-
CFE0006829, ucf:51795
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006829
-
-
Title
-
The conceptual field of proportional reasoning researched through the lived experiences of nurses.
-
Creator
-
Deichert, Deana, Dixon, Juli, Haciomeroglu, Erhan, Andreasen, Janet, Hunt, Debra, University of Central Florida
-
Abstract / Description
-
Proportional reasoning instruction is prevalent in elementary, secondary, and post-secondary schooling. The concept of proportional reasoning is used in a variety of contexts for solving real-world problems. One of these contexts is the solving of dosage calculation proportion problems in the healthcare field. On the job, nurses perform drug dosage calculations, errors in which can carry fatal consequences. As a result, nursing students are required to meet minimum competencies in solving proportion problems. The goal of this research is to describe the lived experiences of nurses in connection to their use of proportional reasoning in order to impact instruction of the procedures used to solve these problems. The research begins by clarifying and defining the conceptual field of proportional reasoning. Utilizing Vergnaud's theory of conceptual fields and synthesizing the differing organizational frameworks used in the literature on proportional reasoning, the concept is organized and explicated into three components: concepts, procedures, and situations. Through the lens of this organizational structure, data from 44 registered nurses who completed a dosage calculation proportion survey were analyzed and connected to the framework of the conceptual field of proportional reasoning. Four nurses were chosen as a focus of in-depth study based upon their procedural strategies and ability to vividly describe their experiences. These qualitative results are synthesized to describe the lived experiences of nurses related to their education and use of proportional reasoning. Procedural strategies that are supported by textbooks, instruction, and practice are developed and defined. Descriptive statistics show the distribution of procedures used by nurses on a five-question dosage calculation survey. The most common procedures used are the nursing formula, cross products, and dimensional analysis. These procedures correspond to the predominant procedures found in nursing dosage calculation texts. Instructional implications focus on the transition between elementary and secondary multiplicative structures, the confusion between equality and proportionality, and the difficulty that like quantities present in dealing with proportions.
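For concreteness, the sketch below works one hypothetical dosage problem with the three procedures named in the abstract (nursing formula, cross products, dimensional analysis). The drug order and supplied strength are invented for illustration; applied correctly, all three procedures yield the same volume.

```python
from fractions import Fraction

# One illustrative order: give 750 mg of a drug supplied as 250 mg per 5 mL.
desired, have, quantity = Fraction(750), Fraction(250), Fraction(5)   # mg, mg, mL

# 1) Nursing formula: (desired / have) x quantity
nursing_formula = desired / have * quantity

# 2) Cross products: 250 mg / 5 mL = 750 mg / x mL  ->  250 * x = 750 * 5
cross_products = desired * quantity / have

# 3) Dimensional analysis: 750 mg * (5 mL / 250 mg); the mg units cancel
dimensional_analysis = desired * (quantity / have)

print(nursing_formula, cross_products, dimensional_analysis)   # all print 15 (mL)
```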
-
Date Issued
-
2014
-
Identifier
-
CFE0005781, ucf:50058
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005781
-
-
Title
-
Impact of Interruption Frequency on Nurses' Performance, Satisfaction, and Cognition During Patient-Controlled Analgesia Use in the Simulated Setting.
-
Creator
-
Campoe, Kristi, Talbert, Steven, Sole, Mary Lou, Andrews, Diane, Jentsch, Florian, University of Central Florida
-
Abstract / Description
-
Problem: Interruption during medication administration is a significant patient safety concern within health care, especially during the administration of high-risk medications in nursing. Patient-controlled analgesia (PCA) devices are frequently associated with adverse events and have a four-fold increased risk of patient injury compared to non-PCA related adverse events. While the nature and frequency of interruptions have been established for nurses' medication processes, the impact of interruption frequency on nurses' PCA interaction has not been fully measured or described. Purpose: The purposes of this study were to quantify the impact of interruption frequency on registered nurses' (RNs') performance, satisfaction, and cognitive workload during PCA interaction, and to determine nurses' perceptions of the impact of interruption frequency. Methods: This study employed a mixed-method design. First, an experimental repeated measures design was used to quantify the impact of interruption frequency on a purposive sample of nine medical-surgical RNs. The RNs completed PCA programming tasks in a simulated laboratory nursing environment for each of four conditions where interruption frequency was pre-determined. Four established human factors usability measures were completed for each of the four test conditions. The research questions were answered using repeated measures analysis of variance (RM-ANOVA), McNemar's test, and Friedman's test. After each experiment, semi-structured interviews were used to collect data that were analyzed using inductive qualitative content analysis to determine RNs' perceptions of the impact of interruption frequency. Results: Results of the RM-ANOVA were significant for the main effect of interruption frequency on efficiency, F(3,24) = 9.592, p < .001. McNemar's test did not show significance for the impact of interruption frequency on effectiveness (accuracy). The Friedman test showed that participant satisfaction was significantly impacted by interruption frequency (χ2 = 9.47, df = 3, p = .024). The Friedman test showed no significance for the main effect of interruption frequency on cognitive workload scores by condition type (χ2 = 1.88, df = 3, p = .599). Results of the qualitative content analysis revealed two main categories that describe nurses' perception of interruption frequency: the nature of interruptions and nurses' reaction to the interrupted work environment. Discussion/Implications: The results suggested that interruption frequency significantly affected task completion time and satisfaction for participants but not participant accuracy or cognitive workload. A high error rate during PCA programming tasks indicated the need to evaluate the conditions in which RNs complete PCA programming, as each error presents a potential risk of patient harm. RNs described the impact of interruption frequency as having a negative effect on the work environment and subsequently implemented compensating strategies to counterbalance interruptions. RNs perceived that patient safety was negatively impacted by frequent interruption. RNs experienced negative intrapersonal consequences as a result of frequent interruption. Additional study is needed to better understand the impact of interruption frequency on RNs' performance accuracy and cognitive workload.
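The Friedman test reported above compares the same group of nurses across the four interruption-frequency conditions. The call below shows the shape of that analysis with SciPy on fabricated ratings (nine rows for nine RNs, one column per condition); the numbers are placeholders, not study data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Fabricated within-subjects satisfaction ratings: one row per nurse,
# one column per interruption-frequency condition.
rng = np.random.default_rng(0)
baseline = rng.integers(4, 7, size=9)
ratings = np.column_stack([
    baseline,        # no interruptions
    baseline - 1,    # low frequency
    baseline - 2,    # medium frequency
    baseline - 3,    # high frequency
])

stat, p = friedmanchisquare(*(ratings[:, j] for j in range(ratings.shape[1])))
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```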
-
Date Issued
-
2015
-
Identifier
-
CFE0005770, ucf:50099
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005770
-
-
Title
-
FACTORS CONTRIBUTING TO THE COMMISSION OF ERRORS AND OMISSION OF STANDARD NURSING PRACTICE AMONG NEW NURSES.
-
Creator
-
Knowles, Rachel, Gibson-Young, Linda, University of Central Florida
-
Abstract / Description
-
Every year, millions of medical errors are committed, costing not only patient health and satisfaction, but thousands of lives and billions of dollars. Errors occur in many areas of the healthcare environment, including the profession of nursing. Nurses provide and delegate patient care and consequently, standard nursing responsibilities such as medication administration, charting, patient education, and basic life support protocol may be incorrect, inadequate, or omitted. Although there is much literature about errors among the general nurse population and there is indication that new nurses commit more errors than experienced nurses, not much literature asks the following question: What are the factors contributing to the commission of errors, including the omission of standard nursing care, among new nurses? Ten studies (quantitative, qualitative, and mixed-mode) were examined to identify these factors. From the 10 studies, the researcher identified the three themes of lack of experience, stressful working conditions, and interpersonal and intrapersonal factors. New nurses may not have had enough clinical time, may develop poor habits, may not turn to more experienced nurses and other professionals, may be fatigued from working too many hours with not enough staffing, may not be able to concentrate at work, and may not give or receive adequate communication. Based on these findings and discussion, suggested implications for nursing practice include extended clinical experience, skills practice, adherence to the nursing process, adherence to medication standards such as the five rights and independent double verification, shorter working hours, adequate staffing, no-interruption and no-phone zones, creating a culture of support, electronically entered orders, translation phones, read-backs, and standardized handoff reports.
-
Date Issued
-
2013
-
Identifier
-
CFH0004439, ucf:45103
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004439
-
-
Title
-
STATISTICAL ANALYSIS OF VISIBLE ABSORPTION SPECTRA AND MASS SPECTRA OBTAINED FROM DYED TEXTILE FIBERS.
-
Creator
-
White, Katie, Sigman, Michael, University of Central Florida
-
Abstract / Description
-
The National Academy of Sciences recently published a report which calls for improvements to the field of forensic science. Their report criticized many forensic disciplines for failure to establish rigorously-tested methods of comparison, and encouraged more research in these areas to establish limitations and assess error rates. This study applies chemometric and statistical methods to current and developing analytical techniques in fiber analysis. In addition to analysis of commercially available dyed textile fibers, two pairs of dyes are selected for custom fabric dyeing based on the similarities of their absorbance spectra and dye molecular structures. Visible absorption spectra for all fiber samples are collected using microspectrophotometry (MSP) and mass spectra are collected using electrospray ionization (ESI) mass spectrometry. Statistical calculations are performed using commercial software packages and software written in-house. Levels of Type I and Type II error are examined for fiber discrimination based on hypothesis testing of visible absorbance spectra profiles using a nonparametric permutation method. This work also explores evaluation of known and questioned fiber populations based on a comparison of the statistical p-value distributions from questioned-known fiber comparisons with those of known-fiber self-comparisons. Results from the hypothesis testing are compared with principal components analysis (PCA) and discriminant analysis (DA) of visible absorption spectra, as well as PCA and DA of ESI mass spectra. The sensitivity of a statistical approach will also be discussed in terms of how instrumental parameters and sampling methods may influence error rates.
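A minimal sketch of a nonparametric permutation test for discriminating two fiber samples from replicate absorbance spectra follows. The spectra are synthetic arrays and the test statistic (distance between group mean spectra) is one simple choice made for illustration, not necessarily the statistic used in the dissertation.

```python
import numpy as np

# Permutation test sketch: is the "questioned" fiber distinguishable from the "known" fiber?
rng = np.random.default_rng(1)
wavelengths = 200
known = rng.normal(1.00, 0.02, size=(10, wavelengths))       # 10 replicate spectra
questioned = rng.normal(1.01, 0.02, size=(10, wavelengths))  # slightly different dye load

def statistic(a, b):
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))   # distance between mean spectra

observed = statistic(known, questioned)
pooled = np.vstack([known, questioned])
n = len(known)

perm_stats = []
for _ in range(5000):
    idx = rng.permutation(len(pooled))                        # shuffle group labels
    perm_stats.append(statistic(pooled[idx[:n]], pooled[idx[n:]]))

p_value = np.mean(np.array(perm_stats) >= observed)
print(f"observed statistic {observed:.4f}, permutation p-value {p_value:.4f}")
```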
-
Date Issued
-
2010
-
Identifier
-
CFE0003454, ucf:48396
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003454
-
-
Title
-
THE EFFECTIVENESS OF DATA CODES AND HARDWARE SELECTION TO MITIGATE SCINTILLATION EFFECTS ON FREE SPACE OPTICAL DATA TRANSMISSION.
-
Creator
-
Stein, Keith, Phillips, Ronald, University of Central Florida
-
Abstract / Description
-
The design of an optical communication link must plan for the random effects of atmospheric turbulence. This study analyzes data from an experiment in which a laser located 8 meters above ground transmitted over a 13 km range to coherent detection devices approximately 162 meters above ground. The effects of a fading and surging beam wave were considered with regard to coding techniques for error correction, amplitude modulation, and hardware architecture schemes. This study simulated the use of arrays and large apertures for the receiving devices, and compared the resultant scintillation index with the theoretical calculations.
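The scintillation index referred to above is the normalized irradiance variance, SI = <I^2>/<I>^2 - 1. The sketch below computes it for synthetic log-normal irradiance samples and shows the aperture-averaging effect of combining several receive elements; the log-normal model, spread, and the assumption of independent elements are illustrative simplifications, not the experiment's conditions.

```python
import numpy as np

# Scintillation index and a crude aperture-averaging illustration.
rng = np.random.default_rng(2)

def scintillation_index(irradiance):
    return irradiance.var() / irradiance.mean() ** 2   # equals <I^2>/<I>^2 - 1

sigma_ln = 0.5                                         # assumed log-irradiance spread
samples = rng.lognormal(mean=0.0, sigma=sigma_ln, size=(100_000, 4))

print("single aperture SI :", round(scintillation_index(samples[:, 0]), 3))
print("4-element array SI :", round(scintillation_index(samples.mean(axis=1)), 3))
```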
-
Date Issued
-
2006
-
Identifier
-
CFE0001204, ucf:46945
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001204
-
-
Title
-
AN ANALYSIS OF THE RELATIONSHIP BETWEEN ECONOMIC DEVELOPMENT AND DEMOGRAPHIC CHARACTERISTICS IN THE UNITED STATES.
-
Creator
-
Heyne, Chad, Ni, Liqiang, University of Central Florida
-
Abstract / Description
-
Over the past several decades there has been extensive research attempting to determine which demographic characteristics affect economic growth, measured in GDP per capita. Understanding what influences the growth of a country will greatly help policy makers enact policies to lead the country in a positive direction. This research focuses on isolating a new variable: women in the work force. In addition to isolating a new variable, this research modifies a preexisting variable that was shown to be significant in order to make the variable more robust and sensitive to recessions. The intent of this thesis is to explore the relationship between several demographic characteristics and the growth rate of GDP per capita. The first step is to reproduce the work done by Barlow (1994) to ensure that the United States follows rules similar to those of the countries in his research. Afterwards, we introduce new variables into the model, comparing goodness of fit through R-squared, AIC, and BIC. Several models were developed to answer each of the research questions independently.
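A minimal sketch of the model-comparison step (R-squared, AIC, BIC) follows. The data are simulated and the predictor named `women_labor_force` is only a stand-in for the thesis's new variable; the information criteria are computed from the residual sum of squares under a simple Gaussian-likelihood assumption.

```python
import numpy as np

# Compare a baseline regression of GDP-per-capita growth with a model that
# adds the new predictor, using R-squared, AIC, and BIC.
rng = np.random.default_rng(3)
n = 120
women_labor_force = rng.normal(size=n)
other_demographics = rng.normal(size=n)
growth = 0.5 * women_labor_force + 0.3 * other_demographics + rng.normal(scale=0.5, size=n)

def fit_metrics(X, y):
    X = np.column_stack([np.ones(len(y)), X])           # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss, k = resid @ resid, X.shape[1]
    r2 = 1 - rss / ((y - y.mean()) @ (y - y.mean()))
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * rss / len(y)) + 1)
    return round(r2, 3), round(2 * k - 2 * loglik, 1), round(k * np.log(len(y)) - 2 * loglik, 1)

print("baseline model        (R2, AIC, BIC):", fit_metrics(other_demographics[:, None], growth))
print("with women-in-labor   (R2, AIC, BIC):", fit_metrics(np.column_stack([other_demographics, women_labor_force]), growth))
```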
-
Date Issued
-
2011
-
Identifier
-
CFH0003837, ucf:44712
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0003837
-
-
Title
-
TARGET ELEMENT SIZES FOR FINITE ELEMENT TIDAL MODELS FROM A DOMAIN-WIDE, LOCALIZED TRUNCATION ERROR ANALYSIS INCORPORATING BOTTOM STRESS AND CORIOLIS FORCE.
-
Creator
-
Parrish, Denwood, Hagen, Scott C., University of Central Florida
-
Abstract / Description
-
A new methodology for the determination of target element sizes for the construction of finite element meshes applicable to the simulation of tidal flow in coastal and oceanic domains is developed and tested. The methodology is consistent with the discrete physics of tidal flow, and includes the effects of bottom stress. The method enables the estimation of the localized truncation error of the nonconservative momentum equations throughout a triangulated data set of water surface elevation and flow velocity. The method's domain-wide applicability is due in part to the formulation of a new localized truncation error estimator in terms of complex derivatives. More conventional criteria that are often used to determine target element sizes are limited to certain bathymetric conditions. The methodology developed herein is applicable over a broad range of bathymetric conditions, and can be implemented efficiently. Since the methodology permits the determination of target element size at points up to and including the coastal boundary, it is amenable to coastal domain applications including estuaries, embayments, and riverine systems. These applications require consideration of spatially varying bottom stress and advective terms, addressed herein. The new method, called LTEA-CD (localized truncation error analysis with complex derivatives), is applied to model solutions over the Western North Atlantic Tidal model domain (the bodies of water lying west of the 60° W meridian). The convergence properties of LTEA-CD are also analyzed. It is found that LTEA-CD may be used to build a series of meshes that produce converging solutions of the shallow water equations. An enhanced version of the new methodology, LTEA+CD (which accounts for locally variable bottom stress and Coriolis terms), is used to generate a mesh of the WNAT model domain having 25% fewer nodes and elements than an existing mesh upon which it is based; the performance of the two meshes, in an average sense, is indistinguishable when considering elevation tidal signals. Finally, LTEA+CD is applied to the development of a mesh for the Loxahatchee River estuary; it is found that application of LTEA+CD provides a target element size distribution that, when implemented, outperforms a high-resolution semi-uniform mesh as well as a manually constructed, existing, documented mesh.
-
Date Issued
-
2007
-
Identifier
-
CFE0001738, ucf:52860
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001738
-
-
Title
-
CONFORMAL TRACKING FOR VIRTUAL ENVIRONMENTS.
-
Creator
-
Davis, Jr., Larry Dennis, Rolland, Jannick P., University of Central Florida
-
Abstract / Description
-
A virtual environment is a set of surroundings that appears to exist to a user through sensory stimuli provided by a computer. By virtual environment, we mean to include environments supporting the full range from VR to pure reality. A necessity for virtual environments is knowledge of the location of objects in the environment. This is referred to as the tracking problem, which points to the need for accurate and precise tracking in virtual environments. Marker-based tracking is a technique which employs fiduciary marks to determine the pose of a tracked object. A collection of markers arranged in a rigid configuration is called a tracking probe. The performance of marker-based tracking systems depends upon the fidelity of the pose estimates provided by tracking probes. The realization that tracking performance is linked to probe performance necessitates investigation into the design of tracking probes for proponents of marker-based tracking. The challenges involved with probe design include prediction of the accuracy and precision of a tracking probe, the creation of arbitrarily-shaped tracking probes, and the assessment of the newly created probes. To address these issues, we present a pioneering framework for designing conformal tracking probes. Conformal in this work means to adapt to the shape of the tracked objects and to the environmental constraints. As part of the framework, the accuracy in position and orientation of a given probe may be predicted given the system noise. The framework is a methodology for designing tracking probes based upon performance goals and environmental constraints. After presenting the conformal tracking framework, the elements used for completing the steps of the framework are discussed. We start with the application of optimization methods for determining the probe geometry. Two overall methods for mapping markers on tracking probes are presented, the Intermediary Algorithm and the Viewpoints Algorithm. Next, we examine the method used for pose estimation and present a mathematical model of error propagation used for predicting probe performance in pose estimation. The model uses a first-order error propagation, perturbing the simulated marker locations with Gaussian noise. The marker locations with error are then traced through the pose estimation process and the effects of the noise are analyzed. Moreover, the effects of changing the probe size or the number of markers are discussed. Finally, the conformal tracking framework is validated experimentally. The assessment methods are divided into simulation and post-fabrication methods. Under simulation, we discuss testing of the performance of each probe design. Then, post-fabrication assessment is performed, including accuracy measurements in orientation and position. The framework is validated with four tracking probes. The first probe is a six-marker planar probe. The predicted accuracy of the probe was 0.06 deg and the measured accuracy was 0.083 ± 0.015 deg. The second probe was a pair of concentric, planar tracking probes mounted together. The smaller probe had a predicted accuracy of 0.206 deg and a measured accuracy of 0.282 ± 0.03 deg. The larger probe had a predicted accuracy of 0.039 deg and a measured accuracy of 0.017 ± 0.02 deg. The third tracking probe was a semi-spherical head tracking probe. The predicted accuracy in orientation and position was 0.54 ± 0.24 deg and 0.24 ± 0.1 mm, respectively. The experimental accuracy in orientation and position was 0.60 ± 0.03 deg and 0.225 ± 0.05 mm, respectively. The last probe was an integrated, head-mounted display probe, created using the conformal design process. The predicted accuracy of this probe was 0.032 ± 0.02 degrees in orientation and 0.14 ± 0.08 mm in position. The measured accuracy of the probe was 0.028 ± 0.01 degrees in orientation and 0.11 ± 0.01 mm in position.
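The perturbation idea described above (jitter the marker locations with Gaussian noise, re-estimate the pose, and examine the spread of the orientation error) can be sketched as a small Monte Carlo study. The probe geometry, noise level, and the SVD-based (Kabsch) pose fit below are illustrative assumptions, not the dissertation's probe designs or estimator.

```python
import numpy as np

# Monte Carlo error propagation for a rigid marker probe (units: mm).
rng = np.random.default_rng(4)
probe = np.array([[0, 0, 0], [60, 0, 0], [0, 60, 0], [0, 0, 60], [40, 40, 0], [0, 40, 40]], float)

def estimate_rotation(ref, obs):
    # Kabsch/SVD fit of the rotation taking ref onto obs (centroids removed).
    a, b = ref - ref.mean(0), obs - obs.mean(0)
    u, _, vt = np.linalg.svd(b.T @ a)
    d = np.sign(np.linalg.det(u @ vt))
    return u @ np.diag([1, 1, d]) @ vt

angles = []
for _ in range(2000):
    noisy = probe + rng.normal(scale=0.1, size=probe.shape)      # 0.1 mm marker noise
    R = estimate_rotation(probe, noisy)                           # true rotation is identity
    angles.append(np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))))

print(f"mean orientation error {np.mean(angles):.3f} deg, std {np.std(angles):.3f} deg")
```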
-
Date Issued
-
2004
-
Identifier
-
CFE0000058, ucf:52856
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000058
-
-
Title
-
Influence of Topographic Elevation Error On Modeled Storm Surge.
-
Creator
-
Bilskie, Matthew, Hagen, Scott, Wang, Dingbao, Chopra, Manoj, University of Central Florida
-
Abstract / Description
-
The following presents a method for determining topographic elevation error for overland unstructured finite element meshes derived from bare earth LiDAR for use in a shallow water equations model. This thesis investigates the development of an optimal interpolation method to produce minimal error for a given element size. In hydrodynamic studies, it is vital to represent the floodplain as accurately as possible since terrain is a critical factor that influences water flow. An essential step in the development of a coastal inundation model is processing and resampling dense bare earth LiDAR to a DEM and ultimately to the mesh nodes; however, it is crucial that the correct DEM grid size and interpolation method be employed for an accurate representation of the terrain. The following research serves two purposes: 1) to assess the resolution and interpolation scheme of bare earth LiDAR data points in terms of their ability to describe the bare earth topography, and 2) to assess their subsequent performance during relevant tide and storm surge simulations.
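The resampling step described above (scattered bare-earth points onto a DEM grid or mesh nodes, with the elevation error depending on the interpolation scheme) can be sketched as follows. The synthetic terrain surface, point density, and the use of SciPy's `griddata` are assumptions for illustration, not the thesis's data or workflow.

```python
import numpy as np
from scipy.interpolate import griddata

# Interpolate scattered "bare-earth" points and check elevation error at held-out points.
rng = np.random.default_rng(5)
xy = rng.uniform(0, 1000, size=(5000, 2))                        # scattered ground points (m)
z = 5 * np.sin(xy[:, 0] / 150) + 0.01 * xy[:, 1]                 # synthetic bare-earth elevation

(train_xy, train_z), (check_xy, check_z) = (xy[:4500], z[:4500]), (xy[4500:], z[4500:])

for method in ("nearest", "linear", "cubic"):
    z_hat = griddata(train_xy, train_z, check_xy, method=method)
    rmse = np.sqrt(np.nanmean((z_hat - check_z) ** 2))           # NaNs: points outside the hull
    print(f"{method:7s} interpolation RMSE: {rmse:.3f} m")
```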
-
Date Issued
-
2012
-
Identifier
-
CFE0004520, ucf:49265
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004520
-
-
Title
-
A Simulation-Based Task Analysis using Agent-Based, Discrete Event and System Dynamics simulation.
-
Creator
-
Angelopoulou, Anastasia, Karwowski, Waldemar, Kincaid, John, Xanthopoulos, Petros, Hancock, Peter, University of Central Florida
-
Abstract / Description
-
Recent advances in technology have increased the need for using simulation models to analyze tasks and obtain human performance data. A variety of task analysis approaches and tools have been proposed and developed over the years; over 100 task analysis methods have been reported in the literature. However, most of the developed methods and tools allow for representation of the static aspects of the tasks performed by expert system-driven human operators, neglecting aspects of the work environment (i.e., physical layout) and dynamic aspects of the task. The use of simulation can help face the new challenges in the field of task analysis, as it allows for simulation of the dynamic aspects of the tasks, the humans performing them, and their locations in the environment. Modeling and/or simulation task analysis tools and techniques have been proven to be effective in task analysis, workload, and human reliability assessment. However, most of the existing task analysis simulation models and tools lack features that allow for consideration of errors, workload, level of operator expertise and skills, among others. In addition, the current task analysis simulation tools require basic training on the tool to allow for modeling the flow of the task analysis process and/or error and workload assessment. The modeling process is usually achieved using drag-and-drop functionality and, in some cases, programming skills. This research focuses on automating the modeling process and simulating individuals (or groups of individuals) performing tasks in a dynamic work environment in any domain. The main objective of this research is to develop a universal tool that allows for modeling and simulation of task analysis models in a short amount of time with limited need for training or knowledge of modeling and simulation theory. A Universal Task Analysis Simulation Modeling (UTASiMo) tool can be used for automatically generating simulation models that analyze the tasks performed by human operators. UTASiMo is a multi-method modeling and simulation tool developed as a combination of agent-based, discrete event, and system dynamics simulation models. A generic multi-method modeling and simulation framework, named the 3M&S Framework, as well as the Unified Modeling Language, have been used for the design of the conceptual model and the implementation of the simulation tool. UTASiMo-generated models are dynamically created during run-time based on user inputs. The simulation results include estimations of operator workload, task completion time, and probability of human errors based on human operator variability and task structure.
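To give a flavor of the kind of output such a simulation produces (task completion time and an error estimate under stochastic task durations and interruptions), here is a minimal stand-alone sketch. All durations, rates, and the resumption-error model are invented for illustration and are unrelated to UTASiMo's internals.

```python
import random

# Toy discrete-event-style simulation of one operator performing a multi-step task
# with random interruptions; reports mean completion time and mean error count.
random.seed(6)

def simulate_task(steps=5, step_time=60.0, interrupt_rate=1 / 120, interrupt_length=30.0):
    t, errors = 0.0, 0
    for _ in range(steps):
        remaining = random.expovariate(1 / step_time)             # stochastic step duration (s)
        while remaining > 0:
            to_interrupt = random.expovariate(interrupt_rate)     # time until next interruption
            if to_interrupt >= remaining:
                t += remaining
                remaining = 0
            else:
                t += to_interrupt + interrupt_length              # pause for the interruption
                remaining -= to_interrupt
                errors += random.random() < 0.02                  # assumed resumption-error chance
    return t, errors

runs = [simulate_task() for _ in range(10_000)]
print("mean completion time (s):", round(sum(r[0] for r in runs) / len(runs), 1))
print("mean errors per task    :", round(sum(r[1] for r in runs) / len(runs), 3))
```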
-
Date Issued
-
2015
-
Identifier
-
CFE0006252, ucf:51040
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006252
-
-
Title
-
THEORETICAL AND NUMERICAL STUDIES OF PHASE TRANSITIONS AND ERROR THRESHOLDS IN TOPOLOGICAL QUANTUM MEMORIES.
-
Creator
-
Jouzdani, Pejman, Mucciolo, Eduardo, Chang, Zenghu, Leuenberger, Michael, Abouraddy, Ayman, University of Central Florida
-
Abstract / Description
-
This dissertation is a collection of progressive research on the topic of topological quantum computation and information with a focus on the error threshold of well-known models such as the unpaired Majorana, the toric code, and the planar code. We study the basics of quantum computation and quantum information, and in particular quantum error correction. Quantum error correction provides a tool for enhancing the quantum computation fidelity in the noisy environment of a real world. We begin with a brief introduction to stabilizer codes. The stabilizer formalism of the theory of quantum error correction gives a well-defined description of quantum codes that is used throughout this dissertation. Then, we turn our attention to a quite new subject, namely, topological quantum codes. Topological quantum codes take advantage of the topological characteristics of a physical many-body system. The physical many-body systems studied in the context of topological quantum codes are of two essential natures: they either have intrinsic interaction that self-corrects errors, or are actively corrected to be maintained in a desired quantum state. Examples of the former are the toric code and the unpaired Majorana, while an example of the latter is the surface code. A brief introduction and history of topological phenomena in condensed matter is provided. The unpaired Majorana and the Kitaev toy model are briefly explained. Later we introduce a spin model that maps onto the Kitaev toy model through a sequence of transformations. We show how this model is robust and tolerates local perturbations. The research on this topic, at the time of writing this dissertation, is still incomplete and only preliminary results are presented. As another example of passive error correcting codes with an intrinsic Hamiltonian, the toric code is introduced. We also analyze the dynamics of the errors in the toric code, known as anyons. We show numerically how the addition of disorder to the physical system underlying the toric code slows down the dynamics of the anyons. We go further and numerically analyze the presence of time-dependent noise and the consequent delocalization of localized errors. The main portion of this dissertation is dedicated to the surface code. We study the surface code coupled to a non-interacting bosonic bath. We show how the interaction between the code and the bosonic bath can effectively induce correlated errors. These correlated errors may be corrected up to some extent. The extent beyond which quantum error correction seems impossible is the error threshold of the code. This threshold is analyzed by mapping the effective correlated error model onto a statistical model. We then study the phase transition in the statistical model. The analysis is in two parts. First, we carry out the derivation of the effective correlated model, its mapping onto a statistical model, and perform an exact numerical analysis. Second, we employ a Monte Carlo method to extend the numerical analysis to large system sizes. We also tackle the problem of the surface code with correlated and single-qubit errors by an exact mapping onto a two-dimensional Ising model with boundary fields. We show how the phase transition point in one model, the Ising model, coincides with the intrinsic error threshold of the other model, the surface code.
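Since the error threshold is analyzed above through the phase transition of a two-dimensional Ising model studied by Monte Carlo, a minimal Metropolis sweep of a small 2D Ising lattice gives a feel for that ingredient. The lattice size, temperatures, and sweep counts are toy choices and the model has no boundary fields, so this is only a generic illustration of the statistical-mechanics side of the mapping, not the study's simulation.

```python
import numpy as np

# Metropolis Monte Carlo for a small 2D Ising model with periodic boundaries.
rng = np.random.default_rng(7)
L = 16

def sweep(spins, beta):
    for _ in range(spins.size):
        i, j = rng.integers(L), rng.integers(L)
        nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for T in (1.5, 2.27, 3.5):                      # below, near, and above the critical temperature
    s = np.ones((L, L), dtype=int)              # start fully ordered
    for _ in range(400):
        sweep(s, 1.0 / T)
    print(f"T = {T}: |magnetization per spin| = {abs(s.mean()):.2f}")
```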
-
Date Issued
-
2014
-
Identifier
-
CFE0005512, ucf:50314
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005512
-
-
Title
-
OPTIMIZATION OF ZONAL WAVEFRONT ESTIMATION AND CURVATURE MEASUREMENTS.
-
Creator
-
Zou, Weiyao, Rolland, Jannick, University of Central Florida
-
Abstract / Description
-
Optical testing in adverse environments, ophthalmology, and applications where characterization by curvature is leveraged all have a common goal: accurately estimate wavefront shape. This dissertation investigates wavefront sensing techniques as applied to optical testing based on gradient and curvature measurements. Wavefront sensing involves the ability to accurately estimate shape over any aperture geometry, which requires establishing a sampling grid and estimation scheme, quantifying estimation errors caused by measurement noise propagation, and designing an instrument with sufficient accuracy and sensitivity for the application. Starting with gradient-based wavefront sensing, a zonal least-squares wavefront estimation algorithm for any irregular pupil shape and size is presented, for which the normal matrix equation sets share a pre-defined matrix. A Gerchberg-Saxton iterative method is employed to reduce the deviation errors in the estimated wavefront caused by the pre-defined matrix across the discontinuous boundary. The results show that the RMS deviation error of the estimated wavefront from the original wavefront can be less than λ/130 to λ/150 (for λ = 632.8 nm) after about twelve iterations and less than λ/100 after as few as four iterations. The presented approach to handling irregular pupil shapes applies equally well to wavefront estimation from curvature data. A defining characteristic for a wavefront estimation algorithm is its error propagation behavior. The error propagation coefficient can be formulated as a function of the eigenvalues of the wavefront estimation-related matrices, and such functions are established for each of the basic estimation geometries (i.e., Fried, Hudgin and Southwell) with a serial numbering scheme, where a square sampling grid array is sequentially indexed row by row. The results show that with the wavefront piston value fixed, odd-numbered grid sizes yield lower error propagation than even-numbered grid sizes for all geometries. The Fried geometry either allows sub-sized wavefront estimations within the testing domain or yields a two-rank-deficient estimation matrix over the full aperture; the latter usually suffers from high error propagation and the waffle mode problem. The Hudgin geometry offers an error propagator between those of the Southwell and the Fried geometries. For both wavefront gradient-based and wavefront difference-based estimations, the Southwell geometry is shown to offer the lowest error propagation with the minimum-norm least-squares solution. Noll's theoretical result, which was extensively used as a reference in the previous literature for error propagation estimates, corresponds to the Southwell geometry with an odd-numbered grid size. For curvature-based wavefront sensing, a concept for a differential Shack-Hartmann (DSH) curvature sensor is proposed. This curvature sensor is derived from the basic Shack-Hartmann sensor with the collimated beam split into three output channels, along each of which a lenslet array is located. Three Hartmann grid arrays are generated by the three lenslet arrays. Two of the lenslet arrays shear in two perpendicular directions relative to the third one. By quantitatively comparing the Shack-Hartmann grid coordinates of the three channels, the differentials of the wavefront slope at each Shack-Hartmann grid point can be obtained, so the Laplacian curvatures and twist terms will be available. The acquisition of the twist terms using a Hartmann-based sensor allows us to uniquely determine the principal curvatures and directions more accurately than prior methods. Measurement of local curvatures as opposed to slopes is unique because curvature is intrinsic to the wavefront under test, and it is an absolute as opposed to a relative measurement. A zonal least-squares-based wavefront estimation algorithm was developed to estimate the wavefront shape from the Laplacian curvature data, and validated. An implementation of the DSH curvature sensor is proposed and an experimental system for this implementation was initiated. The DSH curvature sensor shares the important features of both the Shack-Hartmann slope sensor and Roddier's curvature sensor. It is a two-dimensional parallel curvature sensor. Because it is a curvature sensor, it provides absolute measurements which are thus insensitive to vibrations, tip/tilts, and whole body movements. Because it is a two-dimensional sensor, it does not suffer from other sources of errors, such as scanning noise. Combined with sufficient sampling and a zonal wavefront estimation algorithm, both low and mid frequencies of the wavefront may be recovered. Notice that the DSH curvature sensor operates at the pupil of the system under test, therefore the difficulty associated with operation close to the caustic zone is avoided. Finally, the DSH-curvature-sensor-based wavefront estimation does not suffer from the 2π-ambiguity problem, so potentially both small and large aberrations may be measured.
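A minimal sketch of zonal least-squares wavefront estimation from slope data on a Southwell-style grid follows: each equation relates the phase difference between two neighboring grid points to the average of their measured slopes, and the resulting (piston-deficient) system is solved in a minimum-norm least-squares sense. The grid size, test wavefront, and noise level are illustrative assumptions, not the dissertation's configurations.

```python
import numpy as np

# Zonal least-squares reconstruction from noisy slope measurements (Southwell-style coupling).
rng = np.random.default_rng(8)
n, h = 11, 1.0                                       # grid size and spacing
y, x = np.mgrid[0:n, 0:n] * h
phi_true = 0.05 * (x - 5) ** 2 - 0.03 * (y - 5) ** 2 + 0.02 * x * y

# Analytic slopes of phi_true plus Gaussian measurement noise.
sx = 0.1 * (x - 5) + 0.02 * y + rng.normal(scale=1e-3, size=(n, n))
sy = -0.06 * (y - 5) + 0.02 * x + rng.normal(scale=1e-3, size=(n, n))

rows, rhs = [], []
idx = lambda i, j: i * n + j
for i in range(n):
    for j in range(n - 1):                           # x-direction neighbor pairs
        r = np.zeros(n * n); r[idx(i, j + 1)], r[idx(i, j)] = 1, -1
        rows.append(r); rhs.append(h * (sx[i, j] + sx[i, j + 1]) / 2)
for i in range(n - 1):
    for j in range(n):                               # y-direction neighbor pairs
        r = np.zeros(n * n); r[idx(i + 1, j)], r[idx(i, j)] = 1, -1
        rows.append(r); rhs.append(h * (sy[i, j] + sy[i + 1, j]) / 2)

phi_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)  # minimum-norm solution
phi_hat = phi_hat.reshape(n, n)
residual = (phi_hat - phi_hat.mean()) - (phi_true - phi_true.mean())      # remove piston
print("RMS reconstruction error:", round(float(np.sqrt((residual ** 2).mean())), 5))
```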
-
Date Issued
-
2007
-
Identifier
-
CFE0001566, ucf:47145
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001566
-
-
Title
-
ANALYSIS OF TIME SYNCHRONIZATION ERRORS IN HIGH DATA RATE ULTRAWIDEBAND ORTHOGONAL FREQUENCY DIVISION MULTIPLEXING DATA LINKS.
-
Creator
-
Bates, Lakesha, Jones, W. Linwood, University of Central Florida
-
Abstract / Description
-
Emerging Ultra Wideband (UWB) Orthogonal Frequency Division Multiplexing (OFDM) systems hold the promise of delivering wireless data at high speeds, exceeding hundreds of megabits per second over typical distances of 10 meters or less. The purpose of this thesis is to estimate the timing accuracies required with such systems in order to achieve Bit Error Rates (BER) on the order of 10^-12, and thereby confine the irreducible errors due to misaligned timing to a small absolute number of bits in error in real time, relative to a data rate of hundreds of megabits per second, so that error correction is not overloaded. Our research approach involves managing bit error rates through identifying maximum timing synchronization errors. Thus, it became our research goal to determine the timing accuracies required to avoid operation of communication systems within the asymptotic region of BER flaring at low BERs in the resultant BER curves. We propose pushing physical-layer bit error rates to below 10^-12 before using forward error correction (FEC) codes. This way, the maximum reserve is maintained for the FEC hardware to correct burst as well as recurring bit errors due to corrupt bits caused by factors other than timing synchronization errors.
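For orientation on what a 10^-12 target demands, the sketch below evaluates the textbook BER of coherent BPSK/QPSK in AWGN, BER = 0.5*erfc(sqrt(Eb/N0)), with perfect timing. This is a generic reference curve, not the thesis's UWB-OFDM link model, which additionally accounts for timing misalignment.

```python
import numpy as np
from scipy.special import erfc

def ber_bpsk(ebn0_db):
    # Theoretical uncoded BER of coherent BPSK/QPSK in AWGN.
    return 0.5 * erfc(np.sqrt(10 ** (np.asarray(ebn0_db) / 10)))

for ebn0_db in (6, 10, 14):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> BER ~ {ber_bpsk(ebn0_db):.2e}")

# Eb/N0 needed (ideal, uncoded) to reach the 1e-12 floor discussed above.
grid = np.linspace(0, 20, 2001)
needed = grid[np.argmax(ber_bpsk(grid) < 1e-12)]
print(f"BER < 1e-12 requires roughly Eb/N0 >= {needed:.1f} dB for uncoded BPSK/QPSK")
```

Timing synchronization error effectively shifts this curve to the right, which is why the thesis bounds the tolerable misalignment rather than relying on FEC to absorb it.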
-
Date Issued
-
2004
-
Identifier
-
CFE0000197, ucf:46173
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000197
-
-
Title
-
INTERNATIONAL SPACE STATION REMOTE SENSING POINTING ANALYSIS.
-
Creator
-
Jacobson, Craig, Leonessa, Alexander, University of Central Florida
-
Abstract / Description
-
This paper analyzes the geometric and disturbance aspects of utilizing the International Space Station for remote sensing of Earth targets. The proposed instrument is SHORE (Station High-Sensitivity Ocean Research Experiment), a multi-band optical spectrometer with 15 m pixel resolution. The analysis investigates the contribution of the error effects to the quality of data collected by the instrument. The analysis begins with a discussion of the coordinate systems involved and then the conversion from the target coordinate system to the instrument coordinate system. Next, the geometry of remote observations from the Space Station is investigated, including the effects of the instrument location on the Space Station and of the line of sight to the target. The disturbance and error environment on the Space Station is discussed, covering factors contributing to drift and jitter, the accuracy of pointing data, and target and instrument accuracies. Finally, there is a brief discussion of image processing to address error correction options after data collection.
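The coordinate-system conversion mentioned above boils down to rotating vectors between reference frames. The sketch below rotates a nadir-pointing line-of-sight vector through a 3-2-1 (yaw-pitch-roll) sequence; the angles and frame names are hypothetical placeholders, not the SHORE/ISS geometry or its actual frame definitions.

```python
import numpy as np

# Generic frame conversion: rotate a line-of-sight vector into an instrument frame.
def rot_x(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def rot_y(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rot_z(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

yaw, pitch, roll = np.radians([5.0, -2.0, 0.5])        # assumed mounting/attitude offsets
R_body_to_instrument = rot_x(roll) @ rot_y(pitch) @ rot_z(yaw)

los_body = np.array([0.0, 0.0, -1.0])                  # nadir-pointing line of sight in body frame
los_instrument = R_body_to_instrument @ los_body
off_nadir = np.degrees(np.arccos(np.clip(-los_instrument[2], -1, 1)))
print("line of sight in instrument frame:", np.round(los_instrument, 4))
print(f"apparent off-nadir angle: {off_nadir:.2f} deg")
```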
-
Date Issued
-
2005
-
Identifier
-
CFE0000855, ucf:46661
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000855
-
-
Title
-
Secondary and Postsecondary Calculus Instructors' Expectations of Student Knowledge of Functions: A Multiple-case Study.
-
Creator
-
Avila, Cheryl, Ortiz, Enrique, Dixon, Juli, Hynes, Michael, Andreasen, Janet, Mohapatra, Ram, University of Central Florida
-
Abstract / Description
-
This multiple-case study examines the explicit and implicit assumptions of six veteran calculus instructors from three types of educational institutions, comparing and contrasting their views on the iteration of conceptual understanding and procedural fluency of pre-calculus topics. There were three components to the research data recording process. The first component was a written survey, the second component was a "think-aloud" activity in which the instructors analyzed the results of a function diagnostic instrument administered to a calculus class, and for the third component, the instructors responded to two quotations. As a result of this activity, themes were found between and among instructors at the three types of educational institutions related to their expectations of their incoming students' prior knowledge of pre-calculus topics related to functions. Differences between instructors of the three types of educational institutions included two identifiable areas: (1) the teachers' expectations of their incoming students and (2) the methods for planning instruction. In spite of these differences, the veteran instructors were in agreement with other studies' findings that an iterative approach to conceptual understanding and procedural fluency is necessary for student understanding of pre-calculus concepts.
-
Date Issued
-
2013
-
Identifier
-
CFE0004809, ucf:49758
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004809