- Title
- Predictive Modeling of Functional Materials for Catalytic and Sensor Applications.
- Creator
-
Rawal, Takat, Rahman, Talat, Chang, Zenghu, Leuenberger, Michael, Zou, Shengli, University of Central Florida
- Abstract / Description
-
The research conducted in my dissertation focuses on theoretical and computational studies of the electronic and geometrical structures, and the catalytic and optical properties, of functional materials in the form of nanostructures, extended surfaces, two-dimensional systems, and hybrid structures. The fundamental aim of my research is to predict nanomaterial properties through ab-initio calculations, using methods such as quantum mechanical density functional theory (DFT) and kinetic Monte Carlo (kMC) simulation, which help rationalize experimental observations and ultimately lead to the rational design of materials for electronic and energy-related applications. Focusing on the popular single-layer MoS2, I first show how its hybrid structure with 29-atom transition metal nanoparticles (M29, where M = Cu, Ag, and Au) can lead to composite catalysts suitable for oxidation reactions. Interestingly, the effect is found to be most pronounced for Au29 when the MoS2 is defect-laden (S vacancy row). Second, I show that defect-laden MoS2 can be functionalized either by deposited Au nanoparticles or when supported on Cu(111) to serve as a cost-effective catalyst for methanol synthesis via CO hydrogenation reactions. The charge transfer and electronic structural changes in these subsystems lead to the presence of 'frontier' states near the Fermi level, making the systems catalytically active. Next, in the emerging area of single-metal-atom catalysis, I provide a rationale for the viability of single Pd sites stabilized on ZnO(10-10) as the active sites for methanol partial oxidation, an important reaction for the production of H2. We trace its excellent activity to the modified electronic structure of the single Pd site as well as of the neighboring Zn cationic sites. With the DFT-calculated activation energy barriers for a large set of reactions, we perform ab-initio kMC simulations to determine the selectivity of the products (CO2 and H2).
These findings offer an opportunity for maximizing the efficiency of precious metal atoms and optimizing their activity and selectivity (for desired products). In related work on extended surfaces, while trying to explain the Scanning Tunneling Microscopy images observed by our experimental collaborators, I discovered a new mechanism involved in the process of Ag vacancy formation on Ag(110) in the presence of O atoms, which leads to the reconstruction and eventual oxidation of the Ag surface. In a similar vein, I was able to propose a mechanism for the orange photoluminescence (PL), observed by our experimental collaborators, of a coupled system of a benzylpiperazine (BZP) molecule and iodine on a copper surface. Our results show that the adsorbed BZP and iodine play complementary roles in producing the PL in the visible range. Upon photo-excitation of the BZP-I/CuI(111) system, excited electrons are transferred into the conduction band (CB) of CuI, and holes are trapped by the adatoms. The relaxation of holes into the BZP HOMO is facilitated by its realignment. The relaxed holes subsequently recombine with the excited electrons in the CB of the CuI film, producing a luminescence peak at ~2.1 eV. These results can be useful for forensic applications in detecting illicit substances.
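The DFT-barriers-into-kMC workflow summarized in this abstract can be illustrated with a minimal sketch. The Gillespie-style event selection below chooses between two competing reaction channels from Arrhenius rates; the prefactor, temperature, and barrier values are illustrative placeholders, not values from the dissertation.

```python
import math
import random

KB = 8.617e-5          # Boltzmann constant, eV/K
T = 500.0              # temperature, K (assumed for illustration)
PREFACTOR = 1e13       # attempt frequency, 1/s (typical order of magnitude)

def rate(ea_ev):
    """Arrhenius rate from a DFT-style activation energy barrier (eV)."""
    return PREFACTOR * math.exp(-ea_ev / (KB * T))

def kmc_selectivity(ea_a, ea_b, n_events=100000, seed=1):
    """Fraction of stochastic events taking channel A when A and B compete."""
    random.seed(seed)
    ka, kb = rate(ea_a), rate(ea_b)
    total = ka + kb
    hits_a = sum(1 for _ in range(n_events) if random.random() < ka / total)
    return hits_a / n_events

# A channel with a 0.1 eV lower barrier dominates the product selectivity at 500 K.
sel = kmc_selectivity(0.8, 0.9)
print(round(sel, 2))
```

In a full kMC simulation many such events (adsorption, diffusion, reaction steps) compete simultaneously, but the rate-weighted choice above is the core of the method.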
- Date Issued
- 2017
- Identifier
- CFE0006783, ucf:51813
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006783
- Title
- FUNDAMENTAL UNDERSTANDING OF INTERACTIONS AMONG FLOW, TURBULENCE, AND HEAT TRANSFER IN JET IMPINGEMENT COOLING.
- Creator
-
Hossain, Md. Jahed, Kapat, Jayanta, Ahmed, Kareem, Gordon, Ali, Wiegand, Rudolf, University of Central Florida
- Abstract / Description
-
The flow physics of impinging jets is complex and not yet fully understood. The flow field in an impingement problem comprises three distinct regions: a free jet with a potential core, a stagnation region where the velocity goes to zero as the jet impinges onto the wall, and a wall-jet region where the boundary layer grows radially outward after impingement. Since impingement itself is a broad topic, the current study narrows its focus to three particular geometric configurations (a narrow wall, an array impingement configuration, and a curved-surface impingement configuration) that show up in a typical gas turbine impingement problem in relation to heat transfer. Impingement problems are difficult to simulate numerically using conventional RANS models. It is worth noting that a typical RANS model contains a number of calibrated constants, formulated with respect to relatively simple shear flows. As a result, these isotropic eddy-viscosity models typically fail to predict the correct heat transfer value and trend in impingement problems, where the flow is highly anisotropic. Common RANS-based models overpredict stagnation heat transfer coefficients by as much as 300% when compared to measured values; even the best of the models, the v^2-f model, can be inaccurate by up to 30%. Although a myriad of experimental and numerical studies of single-jet impingement have been published, the knowledge gathered from them cannot be applied directly to real engineering impingement cooling applications, as the dynamics of the flow change completely. This study underlines the lack of experimental flow physics data in the published literature on multiple-jet impingement and emphasizes how important experimental data are for validating CFD tools and for determining the suitability of Large Eddy Simulation (LES) in industrial applications.
The open literature contains few studies in which experimental heat transfer and flow physics data are combined to explain the behavior of gas turbine impingement cooling applications. It is often hard to understand the heat transfer behavior due to the lack of time-accurate flow physics data; hence, much conjecture has been offered to explain the phenomena. The problem is further exacerbated for arrays of impingement jets, where the flow is much more complex than for a single round jet. The experimental flow field obtained from Particle Image Velocimetry (PIV) and the heat transfer data obtained from Temperature Sensitive Paint (TSP) in this work are analyzed to understand the relationship between flow characteristics and heat transfer for the three types of novel geometry mentioned above. No effort has yet been made in the published literature to apply the LES technique to the array impingement problem. With growing computational power and resources, CFD is widely used as a design tool. To support the data gathered from the experiments, LES is carried out for the narrow-wall impingement cooling configuration. The results provide more accurate information on impingement flow physics phenomena where experimental techniques are limited and the typical RANS models yield erroneous results. The objective of the current study is to provide a better understanding of impingement heat transfer in relation to the flow physics associated with it. As heat transfer is essentially a manifestation of the flow, and most flows in real engineering applications are turbulent, it is very important to understand the dynamics of the flow physics in an impingement problem. The work emphasizes the importance of understanding mean velocities, turbulence, and jet shear layer instability, and their importance in heat transfer applications. The present work shows detailed flow phenomena, measured using Particle Image Velocimetry (PIV), in a single-row narrow impingement channel.
Results from the RANS and LES simulations are compared with the PIV data. The accuracy of LES in predicting the flow field and heat transfer of an impingement problem is also presented in the current work, as it is validated against the experimental flow field measured through PIV. Results obtained from the PIV and LES show excellent agreement for both heat transfer and flow physics data. Some of the key findings from the study highlight the shortcomings of the typical RANS models used for the impingement heat transfer problem. It was found that the stagnation point heat transfer was overpredicted by as much as 48% in RANS simulations when compared to the experimental data. Much conjecture has been offered in the past about the ability of RANS to predict the stagnation point heat transfer correctly. The length of the potential core for the first jet was found to be ~2D in the RANS simulations, as opposed to 1D in the PIV and LES, confirming a possible underlying reason for this discrepancy. The jet shear layer thickness was underpredicted by ~40% in the RANS simulations, showing that the model is not diffusive enough for a flow like jet impingement. Very close to the target wall, turbulence production due to shear stress was overpredicted by ~130% and turbulence production due to normal stresses was underpredicted by ~40% in the RANS simulations, showing that RANS models fail where both strain rate and shear stress play a pivotal role in the dynamics of the flow. In closing, turbulence is still one of the most difficult problems to solve accurately, as has been the case for about a century. A quote from the famous mathematician Horace Lamb (1849-1934) expresses the level of difficulty and frustration associated with understanding turbulence in fluid mechanics: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids.
And about the former I am rather optimistic." (Source: http://scienceworld.wolfram.com/biography/Lamb.html) This dissertation is expected to shed some light on one specific example of turbulent flows.
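The isotropic eddy-viscosity closure whose shortcomings are quantified above can be made concrete with a minimal sketch. The standard k-epsilon model computes one scalar turbulent viscosity from the turbulent kinetic energy and its dissipation rate with a calibrated constant, which is exactly why it cannot represent the anisotropy of an impinging jet; the flow values below are illustrative, not from the dissertation's cases.

```python
# Standard k-epsilon eddy-viscosity relation: nu_t = C_mu * k^2 / epsilon.
# C_mu = 0.09 is one of the calibrated constants mentioned in the abstract,
# tuned against simple shear flows. The k and epsilon values are placeholders.
C_MU = 0.09

def eddy_viscosity(k, eps):
    """Kinematic eddy viscosity (m^2/s) from TKE k (m^2/s^2) and dissipation eps (m^2/s^3)."""
    return C_MU * k * k / eps

nu_t = eddy_viscosity(k=1.5, eps=50.0)
print(nu_t)
```

Because this single `nu_t` multiplies every component of the mean strain rate, the modeled Reynolds stresses are forced to be isotropic, which is the root of the stagnation-region overprediction discussed above.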
- Date Issued
- 2016
- Identifier
- CFE0006463, ucf:51424
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006463
- Title
- Design and modeling of a heat exchanger for porous combustor powered steam generators in automotive industry.
- Creator
-
Dasgupta, Apratim, Orlovskaya, Nina, Gou, Jihua, Vasu Sumathi, Subith, University of Central Florida
- Abstract / Description
-
A major challenge faced by automobile manufacturers is to reduce particulate emissions to acceptable standards as emission regulations become increasingly stringent. One ecologically friendly option for reducing emissions is to develop external combustion in a steam engine as a replacement for the internal combustion (IC) engine. Multiple factors other than pollution need to be considered in developing a substitute for the IC engine, such as specific power, throttle response, torque-speed curve, fuel consumption, and refueling infrastructure. External combustion in a steam engine is a promising route to a cleaner and more environmentally friendly alternative to the IC engine that can satisfy the technology requirements mentioned. One way of performing external heterogeneous combustion is to use porous ceramic media, a modern and innovative technique used in many practical applications. Heterogeneous combustion inside ceramic porous media provides numerous advantages: the ceramic acts as a regenerator that distributes heat from the flue gases to the upstream reactants, extending the flammability limits of the reactants. The heat exchanger design is the major challenge in developing an external combustion engine because of the space such systems consume in an automobile. The goal of the research is to develop a compact and efficient heat exchanger for this application. The proposed system uses natural gas as a fuel, which is mixed with air for combustion; the generated flue gases are fed to a heat exchanger to generate superheated steam for performing engine work for the vehicle. The performed research is focused on the design and modeling of the boiler heat exchanger section.
The justification for the selection of the working fluid and power plant technology is presented as part of the research; the proposed system consists of an air and flue gas path and a water and steam path. Models are developed for coupled thermal and fluid analysis of a heat exchanger consisting of three sections. The first section heats water to a saturated liquid. The second section converts water to saturated steam. The third section is the superheater, where saturated steam is converted to superheated steam. The finite element model is appropriately meshed, and boundary conditions are set up to solve the mass, momentum, and energy conservation equations. The k-epsilon model is implemented to account for turbulence. Analytical calculations following the established codes and standards are also performed to develop the design.
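The three-section split described above (preheat, evaporation, superheat) follows a standard energy balance, sketched below with textbook water properties at roughly atmospheric pressure. The mass flow rate and temperatures are illustrative assumptions, not the dissertation's design values.

```python
# Energy balance across the three boiler sections: sensible heat to saturation,
# latent heat of evaporation, then sensible heat of the superheater.
CP_WATER = 4.19e3      # J/(kg K), liquid water
H_FG = 2.257e6         # J/kg, latent heat of vaporization near 100 C
CP_STEAM = 2.0e3       # J/(kg K), superheated steam (rough average)

def boiler_duty(m_dot, t_in, t_sat, t_super):
    """Heat duty (W) of the preheat, evaporation, and superheat sections."""
    q_preheat = m_dot * CP_WATER * (t_sat - t_in)
    q_evap = m_dot * H_FG
    q_super = m_dot * CP_STEAM * (t_super - t_sat)
    return q_preheat, q_evap, q_super

q1, q2, q3 = boiler_duty(m_dot=0.05, t_in=25.0, t_sat=100.0, t_super=300.0)
print(q1, q2, q3)
```

Note that the evaporation section dominates the total duty, which is typical and is one reason boiler heat exchangers are sized section by section.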
- Date Issued
- 2017
- Identifier
- CFE0006579, ucf:51308
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006579
- Title
- Modeling social norms in real-world agent-based simulations.
- Creator
-
Beheshti, Rahmatollah, Sukthankar, Gita, Boloni, Ladislau, Wu, Annie, Swarup, Samarth, University of Central Florida
- Abstract / Description
-
Studying and simulating social systems, including human groups and societies, can be a complex problem. In order to build a model that simulates human actions, it is necessary to consider the major factors that affect human behavior. Norms are one of these factors: social norms are the customary rules that govern behavior in groups and societies. Norms are everywhere around us, from the way people shake hands or bow to the clothes they wear, and they play a large role in determining our behaviors. The study of norms is much older than computer science: normative studies have been a classic topic in sociology, psychology, philosophy, and law, and various theories have been put forth about the functioning of social norms. Although an extensive amount of research on norms has been performed in recent years, there remains a significant gap between current models and models that can explain real-world normative behaviors. Most of the existing work on norms focuses on abstract applications, and very few realistic normative simulations of human societies can be found. The contributions of this dissertation include the following: 1) a new hybrid technique based on agent-based modeling and Markov Chain Monte Carlo is introduced; this method is used to prepare a smoking case study for applying normative models. 2) This hybrid technique is described using category theory, a mathematical theory focusing on relations rather than objects. 3) The relationship between norm emergence in social networks and the theory of tipping points is studied. 4) A new lightweight normative architecture for studying smoking cessation trends is introduced. This architecture is then extended to a more general normative framework that can be used to model real-world normative behaviors. The final normative architecture considers both cognitive and social aspects of norm formation in human societies.
Normative architectures based on only one of these two aspects exist in the literature, but a normative architecture that effectively includes both has been missing.
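The norm-emergence and tipping-point relationship studied in contribution 3 can be sketched with a minimal Granovetter-style threshold model: each agent adopts a norm once the adopting fraction of the population meets its private threshold. The threshold distribution and seed fractions below are illustrative assumptions, not the dissertation's model.

```python
import random

def simulate_norm(n, seed_fraction, lo=0.2, hi=0.7, seed=3):
    """Final adopting fraction after the adoption cascade settles."""
    rng = random.Random(seed)
    # Heterogeneous private thresholds, uniform on [lo, hi).
    thresholds = [rng.uniform(lo, hi) for _ in range(n)]
    # Seed a small group of initial adopters.
    adopted = [i < int(n * seed_fraction) for i in range(n)]
    changed = True
    while changed:
        changed = False
        frac = sum(adopted) / n
        for i in range(n):
            if not adopted[i] and thresholds[i] <= frac:
                adopted[i] = True
                changed = True
    return sum(adopted) / n

# Below the lowest threshold the norm stalls; just above it, it cascades to all.
low = simulate_norm(2000, 0.15)
high = simulate_norm(2000, 0.25)
print(low, high)
```

The sharp jump between the two runs is the tipping point: a small change in the seeding fraction flips the outcome from no spread to universal adoption.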
- Date Issued
- 2015
- Identifier
- CFE0005577, ucf:50244
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005577
- Title
- Effects of a Mathematics Graphic Organizer and Virtual Video Modeling on the Word Problem Solving Abilities of Students with Disabilities.
- Creator
-
Delisio, Lauren, Dieker, Lisa, Vasquez, Eleazar, Hines, Rebecca, Dixon, Juli, University of Central Florida
- Abstract / Description
-
Over the last decade, the inclusion of students with disabilities (SWD) in the general education classroom has increased. Currently, 60% of SWD spend 80% or more of their school day in the general education classroom (U.S. Department of Education, 2013). This includes students with autism spectrum disorders (ASD), a developmental disability characterized by impairments in behavior, language, and social skills (American Psychological Association, 2013). Many of these SWD struggle with mathematics in the elementary grades; fewer than 20% of SWD are proficient in mathematics when they begin middle school, compared to 45% of their peers without disabilities. Furthermore, 83% of SWD perform at the basic or below-basic level in mathematics in the fourth grade (U.S. Department of Education, 2013). As the rate of ASD continues to increase (Centers for Disease Control, 2013), the number of students with this disability who are included in the general education classroom also continues to rise. These SWD and students with ASD are expected to meet the same rigorous mathematics standards as their peers without disabilities. This study was an attempt to address the unique needs of SWD and students with ASD by combining practices rooted in the literature: strategy instruction and video modeling. The purpose of this study was to determine the effects of an intervention on the ability of students with and without disabilities in inclusive fourth- and fifth-grade classrooms to solve word problems in mathematics. The intervention package comprised a graphic organizer (the K-N-W-S), video models of the researcher teaching the strategy to a student avatar in TeachLivE, a virtual simulated classroom, and daily word problems for students to practice the strategy. The researcher used a quasi-experimental group design with a treatment and a control group to determine the impact of the intervention.
Students were assessed on their performance via a pretest and posttest. Analyses were conducted on individual test items to assess patterns in performance by mathematical word problem type. The effects of the intervention on SWD, students with ASD, and students without disabilities varied widely between groups as well as among individual students, indicating a need for further studies on the effects of mathematics strategy instruction on students with varying needs and abilities.
- Date Issued
- 2015
- Identifier
- CFE0005782, ucf:50065
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005782
- Title
- Modeling and Simulation of All-electric Aircraft Power Generation and Actuation.
- Creator
-
Woodburn, David, Wu, Xinzhang, Batarseh, Issa, Georgiopoulos, Michael, Haralambous, Michael, Chow, Louis, University of Central Florida
- Abstract / Description
-
Modern aircraft, military and commercial, rely extensively on hydraulic systems. However, there is great interest in the avionics community in replacing hydraulic systems with electric systems. There are physical challenges to replacing hydraulic actuators with electromechanical actuators (EMAs), especially for flight control surface actuation. These include dynamic heat generation and power management. Simulation is seen as a powerful tool in making the transition to all-electric aircraft by predicting the dynamic heat generated and the power flow in the EMA. Chapter 2 of this dissertation describes the nonlinear, lumped-element, integrated modeling of a permanent magnet (PM) motor used in an EMA. This model is capable of representing the transient dynamics of an EMA mechanically, electrically, and thermally. Inductance is a primary parameter that links the electrical and mechanical domains and is therefore of critical importance to the modeling of the whole EMA. In the dynamic mode of operation of an EMA, the inductances are quite nonlinear. Chapter 3 details the careful analysis of the inductances obtained from finite element software and the mathematical modeling of these inductances for use in the overall EMA model. Chapter 4 covers the design and verification of a nonlinear, transient simulation model of a two-step synchronous generator with three-phase rectifiers, and simulation results are shown.
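The lumped-element electrical/mechanical coupling described above can be sketched for a single motor phase: the winding is an RL branch driven against a speed-proportional back-EMF. All parameter values are illustrative placeholders, and the inductance is held constant here, whereas the chapter stresses that it is quite nonlinear in dynamic operation.

```python
# One electrical phase of a PM motor as a lumped RL branch with back-EMF:
#     L * di/dt = v - R*i - KE*omega
# integrated with forward Euler. Parameters are illustrative, not PTM values.
R = 0.5        # winding resistance, ohm
L = 1.0e-3     # phase inductance, H (held constant in this sketch)
KE = 0.1       # back-EMF constant, V s/rad

def step_current(i, v, omega, dt=1.0e-5):
    """One forward-Euler step of the phase current."""
    didt = (v - R * i - KE * omega) / L
    return i + dt * didt

# Drive toward steady state at constant voltage and shaft speed:
i = 0.0
for _ in range(20000):
    i = step_current(i, v=12.0, omega=100.0)
print(round(i, 3))  # steady state is (v - KE*omega)/R = 4.0 A
```

Making `L` a function of current and rotor position is exactly the nonlinearity the dissertation extracts from finite element software.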
- Date Issued
- 2013
- Identifier
- CFE0005074, ucf:49975
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005074
- Title
- THE TLC METHOD FOR MODELING CREEP DEFORMATION AND RUPTURE.
- Creator
-
May, David, Gordon, Ali, University of Central Florida
- Abstract / Description
-
This thesis describes a novel method, termed the Tangent-Line-Chord (TLC) method, that can be used to model creep deformation dominated by the tertiary regime more efficiently. Creep deformation is a widespread mechanical mode of failure in high-stress, high-temperature mechanical systems. To accurately simulate creep and its effect on structures, researchers utilize finite element analysis (FEA). General-purpose FEA packages require extensive amounts of time and computational resources to simulate creep softening in components because of the large deformation rates that continuously evolve. The goal of this research is to employ multi-regime creep models, such as the Kachanov-Rabotnov model, to determine a set of equations that allow creep to be simulated in as few iterations as possible; the key outcome is the freeing up of computational resources and the saving of time. Because both the number of equations and the values of the material constants within the model change depending on the approach used, programming software is utilized to automate this analytical process. The materials considered in this research are mainly generic Ni-based superalloys, as they exhibit creep responses dominated by secondary and tertiary creep.
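The Kachanov-Rabotnov model named above couples a creep strain rate to an evolving damage variable, and the run-away of both quantities near rupture is what makes naive time stepping expensive. A minimal forward-Euler integration of a common form of the model is sketched below; the material constants are illustrative placeholders, not fitted Ni-based superalloy values.

```python
# Kachanov-Rabotnov coupled creep-damage equations (one common form):
#     d(eps)/dt = A * (sigma / (1 - w))**n        (creep strain rate)
#     d(w)/dt   = M * sigma**chi / (1 - w)**phi   (damage evolution)
# Rupture corresponds to damage w -> 1. Constants below are placeholders.
A, N = 1.0e-18, 5.0
M, CHI, PHI = 1.0e-9, 3.0, 2.0

def simulate_creep(sigma, dt=1.0, w_fail=0.9, t_max=1.0e4):
    """Integrate until damage reaches w_fail; return (time, accumulated strain)."""
    eps, w, t = 0.0, 0.0, 0.0
    while w < w_fail and t < t_max:
        eps += dt * A * (sigma / (1.0 - w)) ** N
        w += dt * M * sigma ** CHI / (1.0 - w) ** PHI
        t += dt
    return t, eps

t_rupture, strain = simulate_creep(sigma=100.0)
print(t_rupture, strain)
```

The fixed-step loop takes hundreds of iterations even for this toy case, and the strain rate blows up as `w` approaches 1; replacing the fine stepping with a few tangent-line and chord segments is the kind of saving the TLC method targets.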
- Date Issued
- 2014
- Identifier
- CFH0004560, ucf:45196
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004560
- Title
- A Psychophysical Approach to Standardizing Texture Compression for Virtual Environments.
- Creator
-
Flynn, Jeremy, Szalma, James, Fidopiastis, Cali, Jentsch, Florian, Shah, Mubarak, University of Central Florida
- Abstract / Description
-
Image compression is a technique to reduce overall data size, but its effects on human perception have not been clearly established. The purpose of this effort was to determine the most effective psychophysical method for subjective image quality assessment and to apply those findings to an objective algorithm. This algorithm was used to identify the minimum level of texture compression noticeable to a human observer, in order to determine whether compression-induced texture distortion impacted game-play outcomes. Four experiments tested several hypotheses. The first hypothesis evaluated which of three magnitude estimation (ME) methods for image quality assessment (absolute ME, absolute ME plus, or ME with a standard) was the most reliable. The just-noticeable-difference (JND) point for texture compression was determined against the Feature Similarity Index for color. The second hypothesis tested whether human participants perceived the same amount of distortion differently when textures were presented in three ways: displayed as flat images; wrapped around a model; and wrapped around models within a virtual environment. The last set of hypotheses examined whether compression affected both subjective (immersion, technology acceptance, usability) and objective (performance) gameplay outcomes. The results were as follows: the absolute magnitude estimation method was the most reliable; no difference was observed in the JND threshold between flat textures and textures placed on models, but textures embedded within the virtual environment were more noticeable than in the other two presentation formats; there were no differences in subjective gameplay outcomes when textures were compressed to below the JND thresholds; and those who played a game with uncompressed textures performed better on in-game tasks than those with compressed textures, but only on the first in-game day.
Practitioners and researchers can use these findings to guide their approaches to texture compression and experimental design.
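The JND-point extraction described above amounts to finding where a psychometric function crosses a detection criterion. A minimal sketch using linear interpolation at the 50% point is shown below; the compression levels and detection proportions are synthetic placeholders, not the study's data.

```python
# Estimate the just-noticeable-difference (JND) point as the compression level
# at which the proportion of observers noticing the distortion crosses 50%.
LEVELS = [10, 20, 30, 40, 50]              # compression level (arbitrary units)
P_DETECT = [0.05, 0.20, 0.45, 0.70, 0.95]  # proportion noticing (synthetic)

def jnd_50(levels, p):
    """Linearly interpolate the level where detection probability crosses 0.5."""
    pairs = list(zip(levels, p))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("50% threshold not bracketed by the data")

jnd = jnd_50(LEVELS, P_DETECT)
print(jnd)
```

In practice a fitted psychometric curve (e.g. a logistic) is preferred over piecewise interpolation, but the crossing-point idea is the same.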
- Date Issued
- 2018
- Identifier
- CFE0007178, ucf:52250
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007178
- Title
- Equivalency Analysis of Sidestick Controller Modes During Manual Flight.
- Creator
-
Rummel, Alex, Karwowski, Waldemar, Elshennawy, Ahmad, Hancock, Peter, University of Central Florida
- Abstract / Description
-
Equivalency analysis is a statistical procedure that can enhance the findings of an analysis of variance when non-significant differences are identified. Demonstrating functional equivalence, or the absence of practical differences, is useful to designers introducing new technologies to the flight deck. Proving functional equivalence is an effective means to justify the implementation of new technologies that must be "the same or better" than previous technology. This study examines the functional equivalency of three operational modes of a new active control sidestick during normal operations while performing manual piloting tasks. Data from a between-subjects, repeated-measures simulator test were analyzed using analysis of variance and equivalency analysis. Ten pilots participated in the simulator test, which was conducted in a fixed-base business-jet simulator. Pilots performed maneuvers such as climbing and descending turns and ILS approaches using three sidestick modes: active, unlinked, and passive. RMS error for airspeed, flight path angle, and bank angle was measured, in addition to touchdown points on the runway relative to the centerline and the runway threshold. Results indicate that the three operational modes are functionally equivalent when performing climbing and descending turns. The active and unlinked modes were found to be functionally equivalent when flying an ILS approach, but the passive mode, by a small margin, was not.
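The equivalency analysis described above is commonly operationalized as two one-sided tests (TOST): two groups are declared practically equivalent when the confidence interval for their mean difference lies entirely inside a practical margin. The sketch below uses a normal approximation; the sample statistics and the margin are hypothetical numbers, not the study's data.

```python
# Two-one-sided-tests (TOST) style equivalence check: equivalence holds when
# the 90% CI of the mean difference falls inside (-delta, +delta).
def tost_equivalent(mean1, mean2, se_diff, delta, z_crit=1.645):
    """True if the 90% CI of (mean1 - mean2) lies within (-delta, +delta)."""
    diff = mean1 - mean2
    lo, hi = diff - z_crit * se_diff, diff + z_crit * se_diff
    return -delta < lo and hi < delta

# Hypothetical RMS bank-angle errors (deg) for two sidestick modes:
tight = tost_equivalent(2.1, 2.3, se_diff=0.1, delta=0.5)
noisy = tost_equivalent(2.1, 2.3, se_diff=0.3, delta=0.5)
print(tight, noisy)
```

This illustrates the abstract's point: a non-significant ANOVA alone cannot show equivalence, because with a large standard error (the `noisy` case) the interval spills past the margin even though the mean difference is unchanged.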
- Date Issued
- 2018
- Identifier
- CFE0007242, ucf:52226
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007242
- Title
- INTERACTION BETWEEN SECONDARY FLOW AND FILM COOLING JETS OF A REALISTIC ANNULAR AIRFOIL CASCADE (HIGH MACH NUMBER).
- Creator
-
Nguyen, Cuong, Kapat, Jayanta, University of Central Florida
- Abstract / Description
-
Film cooling is investigated on a flat plate both numerically and experimentally. Conical-shaped film holes are investigated extensively, contributing data that are extremely rare in the open literature. Both configurations of the cylindrical film holes, with and without a trench, are investigated in detail. A design-of-experiments technique was used to find the optimum combination of geometrical and fluid parameters for the best film cooling performance. This part of the study shows that film cooling performance can be enhanced by up to 250% with trenched film cooling versus the non-trenched case, given the same amount of coolant. Since most of the relevant open literature concerns film cooling on flat-plate endwall cascades with linearly extruded airfoils, the purpose of the second part of this study is to examine the interaction of the secondary flow inside a 3D cascade with the injected film cooling jets. This is employed on the first stage of an aircraft gas turbine engine to protect the curvilinear (annular) endwall platform. The current study investigates the interaction between the injected film jets and the secondary flow both experimentally and numerically at high Mach number (M = 0.7). Validation shows good agreement between the obtained data and the open literature. In general, it can be concluded that with an appropriate film-coolant-to-mainstream blowing ratio, one can not only achieve the best film cooling effectiveness (FCE or η) on the downstream endwall but also maintain almost the same aerodynamic loss as in the un-cooled baseline case. Film performance varies nonlinearly with blowing ratio, as with film cooling on a flat plate; on the other hand, with the right blowing ratio, film cooling performance is not much affected by the secondary flow. In turn, the film cooling jets do not increase the pressure loss in the downstream wake area of the blades.
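The film cooling effectiveness η reported above has a standard adiabatic-wall definition, sketched below: η = 1 means the wall sees pure coolant, η = 0 means pure mainstream gas. The temperatures used are illustrative, not the study's measurements.

```python
# Adiabatic film cooling effectiveness:
#     eta = (T_mainstream - T_adiabatic_wall) / (T_mainstream - T_coolant)
def film_effectiveness(t_main, t_aw, t_coolant):
    """Dimensionless effectiveness from mainstream, adiabatic-wall, and coolant temperatures."""
    return (t_main - t_aw) / (t_main - t_coolant)

eta = film_effectiveness(t_main=1500.0, t_aw=1100.0, t_coolant=700.0)
print(eta)  # (1500 - 1100) / (1500 - 700) = 0.5
```

A "250% enhancement" from trenching, in these terms, means the trenched hole drives the adiabatic wall temperature much closer to the coolant temperature for the same coolant mass flow.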
- Date Issued
- 2010
- Identifier
- CFE0003546, ucf:48944
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003546
- Title
- CMOS RF CIRCUITS VARIABILITY AND RELIABILITY RESILIENT DESIGN, MODELING, AND SIMULATION.
- Creator
-
Liu, Yidong, Yuan, Jiann-Shiun, University of Central Florida
- Abstract / Description
-
The work presents a novel voltage biasing design that makes CMOS RF circuits resilient to variability and reliability degradation. The biasing scheme provides resilience through threshold voltage (VT) adjustment, and at the same time it does not degrade PA performance. Analytical equations are established for the sensitivity of the resilient biasing under various scenarios. Power Amplifier (PA) and Low Noise Amplifier (LNA) circuits are investigated case by case through modeling and experiment. PTM 65nm technology is adopted in modeling the transistors within these RF blocks. A traditional class-AB PA with the resilient design is compared with the same PA without such a design in PTM 65nm technology. The results show that the biasing design helps improve the robustness of the PA in terms of linear gain, P1dB, Psat, and power added efficiency (PAE). In addition to its post-fabrication calibration capability, the design reduces most PA performance sensitivities by 50% when subjected to threshold voltage (VT) shift and by 25% under electron mobility (μn) degradation. The impact of degradation mismatches is also investigated. It is observed that accelerated aging of the MOS transistor in the biasing circuit further reduces the sensitivity of the PA. In the study of the LNA, a 24 GHz narrow-band cascade LNA with an adaptive biasing scheme under various aging rates is compared to an LNA without such a biasing scheme. The modeling and simulation results show that the adaptive substrate biasing reduces the sensitivity of the noise figure and minimum noise figure subject to process variation and device aging such as threshold voltage shift and electron mobility degradation.
Simulation at different aging rates also shows that the sensitivity of the LNA is further reduced by the accelerated aging of the biasing circuit. Thus, for most RF transceiver circuits, the adaptive body biasing scheme provides overall performance resilience to device-reliability-induced degradation. The tuning ability designed into the RF PA and LNA also gives the circuits post-process calibration capability.
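The sensitivity reductions quoted above (50% under VT shift, 25% under mobility degradation) compare normalized sensitivities, i.e., the fractional change in a performance figure per fractional change in a device parameter. A hedged sketch of that metric (the numbers below are illustrative, not the dissertation's measurements):

```python
def normalized_sensitivity(perf_nominal, perf_shifted, param_nominal, param_shifted):
    """Fractional change in a performance figure (e.g., linear gain or PAE)
    per fractional change in a device parameter (e.g., threshold voltage VT)."""
    d_perf = (perf_shifted - perf_nominal) / perf_nominal
    d_param = (param_shifted - param_nominal) / param_nominal
    return d_perf / d_param

# Illustrative: PAE falls from 40% to 38% when VT shifts from 0.40 V to 0.44 V.
s = normalized_sensitivity(0.40, 0.38, 0.40, 0.44)  # -0.05 / 0.10 = -0.5
```

A resilient biasing scheme that halves this figure would report a 50% sensitivity reduction in the sense used by the abstract.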
- Date Issued
- 2011
- Identifier
- CFE0003595, ucf:48861
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003595
- Title
- THE EARLY MODERN SPACE: (CARTOGRAPHIC) LITERATURE AND THE AUTHOR IN PLACE.
- Creator
-
Myers, Michael, Gleyzon, Francois-Xavier, University of Central Florida
- Abstract / Description
-
In geography, maps are a tool of placement which locates both the cartographer and the territory made cartographic. In order to place objects in space, the cartographer inserts his own judgment into the scheme of his design. During the Early Modern period, maps were no longer the suspicious icons they had been in the Middle Ages, and not yet products of science, but subjects of discourse and works of art. The image of a cartographer's territory depended on his vision (both the nature and placement of his gaze), and the product reflected that author's judgment. This is not a study of maps as such but of Early Modern literature, cartographic by nature: the observations of the author were the motif of its design. However, rather than concretize observational judgment through art, the Early Modern literature discussed asserts a reverse relation: the generation of the material which may be observed, the reality, by the views of authors. Spatiality is now an emerging philosophical field of study, taking root in the philosophy of Deleuze & Guattari. Using the notion prevalent in both Postmodern and Early Modern spatiality, which makes of perception a collective delusion with its roots in the critique of Kant, this thesis draws a through-line across time, as texts such as Robert Burton's The Anatomy of Melancholy, Thomas More's Utopia, and selections from William Shakespeare display a tendency to remove value from the standard of representation, to replace meaning with cognition and prioritize a view of views over an observable world. Only John Milton approaches perception as possibly referential to objective reality, by re-inserting his ability to observe and exist in that reality, in a corpus which becomes less a generative simulation of material than a concrete signpost to his judgment in the world.
- Date Issued
- 2015
- Identifier
- CFH0004899, ucf:53148
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004899
- Title
- Investigation of infrared thermography for subsurface damage detection of concrete structures.
- Creator
-
Hiasa, Shuhei, Catbas, Necati, Tatari, Omer, Nam, Boo Hyun, Zaurin, Ricardo, Xanthopoulos, Petros, University of Central Florida
- Abstract / Description
-
Deterioration of road infrastructure arises from aging and various other factors. Consequently, inspection and maintenance have been a serious worldwide problem. In the United States, degradation of concrete bridge decks is a widespread problem among several bridge components. In order to prevent the impending degradation of bridges, periodic inspection and proper maintenance are indispensable. However, the transportation system faces unprecedented challenges because the number of aging bridges is increasing under limited resources, both in terms of budget and personnel. Therefore, innovative technologies and processes that enable bridge owners to inspect and evaluate bridge conditions more effectively and efficiently, with fewer human and monetary resources, are desired. Traditionally, qualified engineers and inspectors implemented hammer sounding and/or chain drag, and visual inspection, for concrete bridge deck evaluations, but these methods require substantial field labor, experience, and lane closures for bridge deck inspections. Under these circumstances, Non-Destructive Evaluation (NDE) techniques such as computer vision-based crack detection, impact echo (IE), ground-penetrating radar (GPR) and infrared thermography (IRT) have been developed to inspect and monitor aging and deteriorating structures rapidly and effectively. However, no single method can detect all kinds of defects in concrete structures as well as the traditional inspection combination of visual and sounding inspections; hence, there is still no international standard NDE method for concrete bridges, although significant progress has been made up to the present. This research presents the potential to reduce the burden of bridge inspections, especially for bridge decks, by using IRT in combination with computer vision-based technology in place of traditional chain drag and hammer sounding methods.
However, there were still several challenges and uncertainties in using IRT for bridge inspections. This study revealed those challenges and uncertainties, and explored their solutions, proper methods and ideal conditions for applying IRT in order to enhance the usability, reliability and accuracy of IRT for concrete bridge inspections. Throughout the study, detailed investigations of IRT are presented. Firstly, three different types of infrared (IR) cameras were compared under active IRT conditions in the laboratory to examine the effect of photography angle on IRT along with the specifications of the cameras. The results showed that when IR images are taken from an angle, each camera shows different temperature readings. However, since each IR camera can capture temperature differences between sound and delaminated areas, all have the potential to detect delaminated areas under a given condition, regardless of camera specifications, even when they are utilized from an angle. Furthermore, a more objective data analysis method than simply comparing IR images was explored to assess IR data. Secondly, coupled structural mechanics and heat transfer models of concrete blocks with artificial delaminations used for a field test were developed and analyzed to explore sensitive parameters for effective utilization of IRT. After these finite element (FE) models were validated, critical parameters and factors of delamination detectability, such as the size of the delamination (area, thickness and volume), ambient temperature and sun loading condition (different seasons), and the depth of the delamination from the surface, were explored. This study shows that the area of a delamination is much more influential in the detectability of IRT than its thickness or volume. It is also found that there is no significant difference depending on the season in which IRT is employed.
Then, FE model simulations were used to obtain the temperature differences between sound and delaminated areas in order to process IR data. By using this method, delaminated areas of concrete slabs could be detected more objectively than by judging the color contrast of IR images. However, it was also found that the boundary condition affects the accuracy of this method, and the effect varies depending on the data collection time. Even though there are some limitations, integrated use of FE model simulation with IRT showed that the combination can reduce the need for other pre-tests on bridges, reduce the need for access to the bridge, and help automate the IRT data analysis process for concrete bridge deck inspections. After that, the favorable time windows for concrete bridge deck inspections by IRT were explored through field experiments and FE model simulations. Based on the numerical simulations and experimental IRT results, higher temperature differences were observed in both sets of results around noontime and nighttime, although IRT is affected by sun loading during the daytime heating cycle, resulting in possible misdetections. Furthermore, the numerical simulations show that the maximum effect occurs at night during the nighttime cooling cycle, and the temperature difference decreases gradually from that time to a few hours after sunrise the next day. Thus, it can be concluded that nighttime application of IRT is the most suitable time window for bridge decks. Furthermore, three IR cameras with different specifications were compared to explore several factors affecting the utilization of IRT for subsurface damage detection in concrete structures, specifically when IRT is utilized for high-speed bridge deck inspections at normal driving speeds under field laboratory conditions. The results show that IRT can detect delaminations up to 2.54 cm below the concrete surface at any time period.
This study revealed two important camera-specification factors for high-speed inspection by IRT: shorter integration time and higher pixel resolution. Finally, a real bridge was scanned by three different types of IR cameras and the results were compared with other NDE technologies that were implemented by other researchers on the same bridge. When compared at fully documented locations with 8 concrete cores, a high-end IR camera with a cooled detector distinguished sound and delaminated areas accurately. Furthermore, the locations and shapes of delaminations indicated by the three IR cameras were compared to other NDE methods from past research, and the results revealed that the cooled camera showed almost identical shapes to other NDE methods including chain drag. It should be noted that the data were collected at normal driving speed without any lane closures, making it a more practical and faster method than other NDE technologies. Consistent with the conclusion of the field laboratory test, the factor most likely to affect high-speed application is the integration time of the IR camera. The notable contribution of this study for the improvement of IRT is that it revealed the preferable conditions for IRT, specifically for high-speed scanning of concrete bridge decks. This study shows that IRT implemented at normal driving speeds has high potential to evaluate concrete bridge decks accurately, without any lane closures, and much more quickly than other NDE methods, if a cooled camera with higher pixel resolution is used during nighttime. Despite some limitations of IRT, the data collection speed is a great advantage for periodic bridge inspections compared to other NDE methods. Moreover, there is a high possibility of drastically reducing inspection time, labor and budget if high-speed bridge deck scanning by the combination of IRT and computer vision-based technology becomes a standard bridge deck inspection method.
Therefore, the author recommends combining this high-speed scanning approach with other NDE methods to optimize bridge deck inspections.
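The objective IR data processing the abstract describes, detecting delaminations from temperature differences rather than from the color contrast of IR images, can be sketched as a simple deviation-from-baseline threshold (the threshold value and sample readings here are illustrative, not the study's calibrated values):

```python
import statistics

def detect_delaminations(surface_temps, threshold_c):
    """Return indices of readings whose deviation from the scene median exceeds
    threshold_c (deg C). Delaminated areas read warmer than sound concrete
    during the daytime heating cycle and cooler during the nighttime cooling
    cycle, so the absolute deviation is used."""
    baseline = statistics.median(surface_temps)
    return [i for i, t in enumerate(surface_temps) if abs(t - baseline) > threshold_c]

# Illustrative nighttime scan line: two readings deviate by more than 0.5 deg C.
flagged = detect_delaminations([20.0, 20.1, 19.9, 18.9, 20.0, 21.3], 0.5)  # [3, 5]
```

In the study itself, the thresholds come from the validated FE simulations rather than a fixed constant, which is what makes the analysis objective across collection times.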
- Date Issued
- 2016
- Identifier
- CFE0006323, ucf:51575
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006323
- Title
- An Examination of Novice and Expert Teachers' Pedagogy in a Mixed-Reality Simulated Inclusive Secondary Classroom Including a Student Avatar With Autism Spectrum Disorders.
- Creator
-
Bousfield, Taylor, Dieker, Lisa, Marino, Matthew, Hines, Rebecca, Hynes, Mike, University of Central Florida
- Abstract / Description
-
Teachers, special and general educators alike, are required to teach a variety of students, including students with ASD. With a rise in the prevalence of autism of 119.4% since 2000 (Centers for Disease Control and Prevention [CDC], 2016) and 39% of students with ASD being served in general education classrooms for over 80% of the school day (U.S. Department of Education, 2015), teachers need to be prepared to effectively teach this population. To better prepare teachers, the researcher conducted a two-phase study, situated in the framework of the Skill Acquisition Model (Dreyfus & Dreyfus, 1986), to explore the behaviors of novice and expert teachers in a simulated secondary inclusive environment. This classroom included a virtual student with autism. In phase one, the researcher conducted a Delphi study to determine the best practices, as perceived by experts in the field, for teachers who serve students with ASD in inclusive secondary environments. During phase two, the researcher used the list of identified skills as a framework to observe and interview 10 teachers, five novices and five experts, in a simulated secondary inclusive environment. The researcher identified 11 high leverage simulation practices (HLSP) that expert teachers should use while teaching in a simulated secondary inclusive environment. Observations and reflections of expert and novice teachers were analyzed, finding only 4 HLSP among experts and 5 HLSP among novice teachers. Additional HLSP were seen through the teachers' reflections. Data were analyzed and discussed in detail. Implications for practice and recommendations for future research in teacher preparation are provided.
- Date Issued
- 2017
- Identifier
- CFE0006722, ucf:51877
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006722
- Title
- MEASURING THE EFFECT OF ERRATIC DEMAND ON SIMULATED MULTI-CHANNEL MANUFACTURING SYSTEM PERFORMANCE.
- Creator
-
Kohan, Nancy, Kulonda, Dennis, University of Central Florida
- Abstract / Description
-
To handle uncertainties and variabilities in production demands, many manufacturing companies have adopted different strategies, such as varying quoted lead times, rejecting orders, increasing stock or inventory levels, and implementing volume flexibility. Make-to-stock (MTS) systems are designed to offer zero lead time by providing an inventory buffer for the organization, but they are costly and involve risks such as obsolescence and wasted expenditures. The main concern of make-to-order (MTO) systems is eliminating inventories and reducing non-value-added processes and waste; however, these systems are based on the assumption that the manufacturing environment and customers' demand are deterministic. Research shows that in MTO systems, variability and uncertainty in demand levels cause instability in the production flow, resulting in congestion, long lead times, and low throughput. Neither strategy is wholly satisfactory. A new alternative approach, multi-channel manufacturing (MCM), is designed to manage uncertainties and variabilities in demand by first focusing on customers' response time. The products are divided into different product families, each with its own manufacturing stream or sub-factory. MCM also allocates the production capacity needed in each sub-factory to produce each product family. In this research, the performance of an MCM system is studied by implementing MCM in a real case scenario from the textile industry, modeled via discrete event simulation. MTS and MTO systems are implemented for the same case scenario and the results are studied and compared. The variables of interest for this research are the throughput of products, the level of on-time deliveries, and the inventory level. The results of the simulation experiments favor the simulated MCM system on all of these criteria.
Further research activities, such as applying MCM to different manufacturing contexts, are highly recommended.
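The on-time-delivery contrast between a single shared production line and MCM's dedicated per-family channels can be illustrated with a toy discrete-event sketch (the order data, service time, and due window below are invented for illustration; the dissertation's model is a full discrete event simulation of a textile case):

```python
def on_time_fraction(orders, num_channels, service_time, due):
    """Fraction of orders finished within `due` time units of arrival.

    orders: list of (arrival_time, family) tuples. Each family is routed to
    its own dedicated FIFO channel (the MCM idea); with num_channels=1 all
    orders share one line, as on a single MTO stream.
    """
    free = [0.0] * num_channels  # time each channel next becomes free
    on_time = 0
    for arrival, family in sorted(orders):
        ch = family % num_channels
        start = max(arrival, free[ch])       # wait if the channel is busy
        finish = start + service_time
        free[ch] = finish
        if finish - arrival <= due:
            on_time += 1
    return on_time / len(orders)

# Two product families, two near-simultaneous orders each.
orders = [(0.0, 0), (0.0, 1), (1.0, 0), (1.0, 1)]
single_line = on_time_fraction(orders, 1, 2.0, 3.0)   # 0.25 on time
two_channels = on_time_fraction(orders, 2, 2.0, 3.0)  # 1.0 on time
```

Even this toy case shows the mechanism the study measures: dedicating capacity per product family absorbs demand bursts that congest a single shared stream.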
- Date Issued
- 2004
- Identifier
- CFE0000240, ucf:46275
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000240
- Title
- On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
- Creator
-
Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
- Abstract / Description
-
We study the Distributed Estimation (DES) problem, where several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of observation noises at the sensors, quantization errors at the sensors, the available power and bandwidth of the network, the quality of communication channels between sensors and the FC, and the choice of fusion rule at the FC. Taking into account all of these contributing factors and implementing a DES system which minimizes the MSE and satisfies all constraints is a challenging task. In order to probe different aspects of this challenging task, we identify and formulate the following three problems and address them accordingly: 1- Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise. The communication channels between sensors and FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels. The unknown to be estimated is a Gaussian vector. Sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: what is the best fusion rule at the FC?
What are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? In order to answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE. 2- Consider an inhomogeneous WSN with an FC which is tasked with estimating a scalar Gaussian unknown. The sensors are equipped with uniform multi-bit quantizers and the communication channels are modeled as Binary Symmetric Channels (BSC). In contrast to the former problem, the sensors experience independent multiplicative noises (in addition to additive noise). The natural questions in this scenario are: how does multiplicative noise affect the DES system performance? How does it affect the resource allocation for sensors, with respect to the case where there is no multiplicative noise? We propose a linear fusion rule at the FC and derive the associated MSE in closed form. We propose several rate allocation schemes with different levels of complexity which minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on DES system performance and its dynamics. We also derive the Bayesian Cramer-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: what is the minimum required bandwidth of the network to satisfy a predetermined target MSE? 3- Assuming the framework of Bayesian DES of a Gaussian unknown with both additive and multiplicative Gaussian noises involved, we answer the following question: can multiplicative noise improve the DES performance in any case/scenario? The answer is yes, and we call this phenomenon the 'enhancement mode' of multiplicative noise.
By deriving different lower bounds on the MSE, such as the BCRLB, Weiss-Weinstein Bound (WWB), Hybrid CRLB (HCRLB), Nayak Bound (NB), and Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and unknown. We also compare the performance of well-known estimators with the derived bounds, to ensure the practicability of the mentioned enhancement modes.
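The interplay the abstract describes between quantization rate, observation noise, and fusion MSE can be illustrated with a small Monte Carlo sketch: sensors uniformly quantize noisy observations of a scalar Gaussian unknown, and the FC applies a simple average (linear) fusion rule. All parameter values are illustrative, and the dissertation's setting is richer than this (fading channels, BSC errors, and multiplicative noise are omitted here):

```python
import random

def quantize(x, rate_bits, xmax=4.0):
    """Uniform mid-rise quantizer over [-xmax, xmax] with 2**rate_bits levels."""
    levels = 2 ** rate_bits
    step = 2 * xmax / levels
    x = max(-xmax, min(xmax, x))                 # clip to quantizer range
    idx = min(levels - 1, int((x + xmax) / step))
    return -xmax + (idx + 0.5) * step            # level midpoint

def mc_mse(num_sensors=10, rate_bits=3, noise_std=0.5, trials=20000, seed=1):
    """Monte Carlo estimate of fusion MSE for theta ~ N(0,1), observations
    x_k = theta + n_k, and simple average (linear) fusion at the FC."""
    rng = random.Random(seed)
    err2 = 0.0
    for _ in range(trials):
        theta = rng.gauss(0.0, 1.0)
        msgs = [quantize(theta + rng.gauss(0.0, noise_std), rate_bits)
                for _ in range(num_sensors)]
        theta_hat = sum(msgs) / num_sensors
        err2 += (theta_hat - theta) ** 2
    return err2 / trials
```

Raising the per-sensor rate shrinks the quantization contribution to the MSE until the observation noise floor dominates, which is the trade-off the rate allocation schemes in the dissertation optimize.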
- Date Issued
- 2017
- Identifier
- CFE0006913, ucf:51698
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006913
- Title
- PAUL VERHOEVEN, MEDIA MANIPULATION, AND HYPER-REALITY.
- Creator
-
Malchiodi, Emmanuel, Janz, Bruce, University of Central Florida
- Abstract / Description
-
Dutch director Paul Verhoeven is a polarizing figure. Although many of his American-made films have received considerable praise and financial success, he has been lambasted on countless occasions for his gratuitous use of sex, violence, and contentious symbolism: 1995's Showgirls was overwhelmingly dubbed the worst film of all time, and 1997's Starship Troopers earned him a reputation as a fascist. Regardless of the controversy surrounding him, his science fiction films move beyond the conventions of the big blockbuster science fiction films of the 1980s (E.T. and the Star Wars trilogy are prime examples), revealing a deeper exploration of both sociopolitical issues and the human condition. Much like the novels of Philip K. Dick (and Verhoeven's 1990 film Total Recall, an adaptation of a Dick short story), Verhoeven's science fiction work explores worlds where paranoia is a constant and determining whether an individual maintains any liberty is regularly questionable. In this thesis I am essentially exploring issues regarding power. Although I rarely bring up the term power itself, I feel it is central. Power is an ambiguous term; are we discussing physical power, state power, objective power, subjective power, or any of the other possible manifestations of the word? The original Anglo-French sense of power means to be able, asking whether it is possible for one to do something. In relation to Verhoeven's science fiction work, each film demonstrates the limitations placed upon an individual's autonomy, asking whether the protagonists are capable of independent agency or rather just environmental constructs reflecting the myriad influences surrounding them. Does the individual really matter in the post-modern world, brimming with countless signs and signifiers?
My main objective in this writing is to demonstrate how this happens in Verhoeven's films, exploring his central themes and subtext and doing what science fiction does: hold a mirror up to the contemporary world and critique it, asking whether our species' current trajectory is beneficial or hazardous.
- Date Issued
- 2011
- Identifier
- CFH0003844, ucf:44697
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0003844
- Title
- Secondary World: The Limits of Ludonarrative.
- Creator
-
Dannelly, David, Adams, JoAnne, Price, Mark, Poindexter, Carla, Kovach, Keith, University of Central Florida
- Abstract / Description
-
Secondary World: The Limits of Ludonarrative is a series of short narrative animations forming a theoretical treatise on the limitations of western storytelling in video games. The series covers specific topics relating to film theory, game design and art theory, specifically those associated with Gilles Deleuze, Jean Baudrillard, Jay Bolter, Richard Grusin and Andy Clark. The use of imagery, editing and presentation is intended to physically represent an extension of myself and my thinking process, united through the common thread of my personal feelings, thoughts and experiences in the digital age.
- Date Issued
- 2014
- Identifier
- CFE0005155, ucf:50704
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005155
- Title
- Evaluating Improvisation as a Technique for Training Pre-Service Teachers for Inclusive Classrooms.
- Creator
-
Becker, Theresa, Hines, Rebecca, Beverly, Monifa, Hopp, Carolyn, Hamed, Kastro, University of Central Florida
- Abstract / Description
-
Improvisation is a construct that uses a set of minimal heuristic guidelines to create a highly flexible scaffold that fosters extemporaneous communication. Scholars from diverse domains, such as psychology, business, negotiation, and education, have suggested its use as a method for preparing professionals to manage complexity and think on their feet. A review of the literature revealed that while there is substantial theoretical scholarship on using improvisation in diverse domains, little research has verified these assertions. This dissertation evaluated whether improvisation, a specific type of dramatic technique, was effective for training pre-service teachers in specific characteristics of teacher-child classroom interaction, and in communication and affective skills development. It measured the strength and direction of any changes such training might effect on pre-service teachers' self-efficacy for teaching and for implementing the communication skills common to improvisation and teaching while interacting with students in an inclusive classroom setting. A review of the literature on teacher self-efficacy and improvisation clarified and defined key terms and illustrated relevant studies. This study utilized a mixed-method research design based on instructional design and development research. Matched-pairs t-tests were used to analyze the self-efficacy and training skills survey data, and pre-service teacher reflections and interview transcripts were used to triangulate the qualitative data. Results of the t-tests showed a significant difference in participants' self-efficacy for teaching measured before and after the improvisation training. A significant difference in means was also measured in participants' aptitude for improvisation strategies and in their self-efficacy for implementing them pre-/post-training.
Qualitative results from pre-service teacher class artifacts and interviews showed that participants reported beneficial personal outcomes and confirmed using skills from the training while interacting with students. Many of the qualitative themes parallel individual question items on the TSES teacher self-efficacy scale as well as the CSAI improvisation self-efficacy scale. The self-reported changes in affective behavior, such as increased self-confidence and the ability to foster positive interaction with students, are illustrative of changes in teacher agency. Self-reports of being better able to understand student perspectives demonstrate a change in participants' ability to empathize with students. Participants who worked with both typically developing students and students with disabilities reported utilizing improvisation strategies such as "Yes, and...", mirroring emotions and body language, vocal prosody, and establishing a narrative relationship to put the students at ease, establish a positive learning environment, encourage student contributions and foster teachable moments. The improvisation strategies showed specific benefit for participants working with nonverbal students or students with communication difficulties, by providing the pre-service teachers with strategies for using body language, emotional mirroring, vocal prosody and acceptance to foster interaction and communication with the student. Results from this investigation appear to substantiate the benefit of using improvisation training as part of a pre-service teacher methods course for preparing teachers for inclusive elementary classrooms. Replication of the study is encouraged with teachers of differing populations to confirm and extend these results.
- Date Issued
- 2012
- Identifier
- CFE0004516, ucf:49273
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004516