Title
-
Experimental Study of Sinkhole Failure Related to Groundwater Level Drops.
-
Creator
-
Alrowaimi, Mohamed, Chopra, Manoj, Nam, Boo Hyun, Yun, Hae-Bum, Sallam, Amr, University of Central Florida
-
Abstract / Description
-
Sinkholes are natural geohazard phenomena that cause damage to property and may lead to loss of life. They can also pollute the aquifer by draining unfiltered water from streams, wetlands, and lakes into it. Sinkholes occur in distinctive karst geology, where carbonate rocks such as limestone, dolomite, or gypsum form the bedrock and are naturally dissolved by groundwater circulating through them. Sinkholes can occur gradually or suddenly, with catastrophic impact, depending on the geology and hydrology of the area. Predicting the formation and collapse of a sinkhole with current ground investigation technologies is limited by the high uncertainty in soil properties and behavior. Progressing sinkholes can be missed by geotechnical site investigations, especially during the development of a very wide area. In this study, a laboratory-scale sinkhole model was constructed to physically simulate the sinkhole phenomenon. The physical model was designed to monitor a network of groundwater wells over time around a predetermined sinkhole location, in order to establish a correlation between groundwater table drops and sinkhole development. The experimental small-scale model showed that a groundwater cone of depression forms prior to the surface collapse of the sinkhole. The cone of depression can be used, in a reverse manner, to identify the potential location of the sinkhole at an early stage of cavity formation in the overburden. In addition, monitoring of a single groundwater well showed that the groundwater level signal exhibits sudden drops (progressive drops) occurring at different times (time lags) during sinkhole development. Time-frequency analysis was also used in this study to detect the pattern of these progressive drops in the groundwater table readings.
It is observed, based on the model, that the development and growth of a sinkhole can be correlated with progressive drops of the groundwater table, since the drops start at the monitoring wells radially closest to the center of the sinkhole. Subsequently, with time, these drops transfer to more distant monitoring wells. Time-frequency analysis is used to decompose and detect the progressive drops using a pattern detection algorithm, the Auto Modulating Detection Pattern Algorithm (AMD), developed by Yun (2013). The results of this analysis showed that the peaks of these progressive drops in the raw groundwater readings are a good indicator of the potential location of sinkholes at an early stage, when there is no visible depression of the ground surface. Finally, the effect of several soil parameters on the cone of depression during sinkhole formation is studied. The parametric study showed that both the overburden soil thickness and the initial (encountered) groundwater table level have a clear impact on the time of sinkhole collapse. While this model used a predetermined crack location to study the groundwater level response around it, the concept of groundwater drops as an indicator of sinkhole progression and collapse may be used to determine the ultimate location of the sinkhole. By monitoring changes in natural groundwater levels in the field, from either an existing network of groundwater monitoring wells or additional installations, the methodology discussed in this dissertation may make it possible to foresee the surface collapse of sinkholes.
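The progressive-drop idea described in this abstract can be illustrated with a minimal sketch. The dissertation itself uses Yun's (2013) AMD time-frequency algorithm, which is not reproduced here; the simple first-difference detector, threshold value, and water-level series below are hypothetical stand-ins for the concept of flagging sudden drops in a well record.

```python
# Illustrative sketch only: a thresholded first-difference detector
# standing in for the AMD time-frequency analysis used in the study.
def detect_progressive_drops(levels, threshold=0.5):
    """Return indices where the water level suddenly drops by more
    than `threshold` between consecutive readings."""
    drops = []
    for i in range(1, len(levels)):
        if levels[i - 1] - levels[i] > threshold:
            drops.append(i)
    return drops

# Synthetic well record: steady level interrupted by two sudden drops.
well = [10.0, 10.0, 9.9, 9.2, 9.1, 8.3, 8.3]
print(detect_progressive_drops(well))  # drops flagged at readings 3 and 5
```

In the field, the timing of such flagged drops across several wells (closer wells first, distant wells later) is what the abstract proposes as an early indicator of the sinkhole's location.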
-
Date Issued
-
2016
-
Identifier
-
CFE0006249, ucf:51060
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006249
-
-
Title
-
A Hybrid Simulation Framework of Consumer-to-Consumer Ecommerce Space.
-
Creator
-
Joledo, Oloruntomi, Rabelo, Luis, Lee, Gene, Elshennawy, Ahmad, Ajayi, Richard, University of Central Florida
-
Abstract / Description
-
In the past decade, ecommerce transformed the business models of many organizations. Information technology leveled the playing field for new participants, who were capable of causing disruptive changes in every industry. "Web 2.0" or the "Social Web" further redefined the ways users enlist for services. It is now easy to be influenced to choose services based on recommendations of friends and popularity among peers. This research proposes a simulation framework to investigate how actions of stakeholders at this level of complexity affect system performance, as well as the dynamics that exist between different models, using concepts from the fields of operations engineering, engineering management, and multi-model simulation. Viewing this complex model from a systems perspective calls for the integration of different levels of behavior. Complex interactions exist among stakeholders, the environment, and available technology. The presence of continuous and discrete behaviors, coupled with stochastic and deterministic behaviors, presents challenges for using standalone simulation tools to simulate the business model. We propose a framework that takes into account dynamic system complexity and risk from a hybrid paradigm. The SCOR model is employed to map the business processes, and it is implemented using agent-based simulation and system dynamics. By combining system dynamics at the strategy level with agent-based models of consumer behavior, an accurate yet efficient representation of the business model, one that makes a sound basis for decision making, can be achieved to maximize stakeholders' utility.
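The hybrid pairing this abstract describes, a system-dynamics stock coupled to agent-based consumers, can be sketched in miniature. Everything below (the `Consumer` class, the adoption stock, the influence and base-rate parameters) is invented for illustration and is not drawn from the dissertation's actual SCOR-based model.

```python
import random

# Toy hybrid sketch: an aggregate "adoption" stock (system dynamics)
# feeds back into individual agent decisions (agent-based model).
random.seed(1)

class Consumer:
    def __init__(self):
        self.adopted = False

    def step(self, adoption_level, influence=0.3, base=0.02):
        # Peer-recommendation effect grows with overall adoption.
        if not self.adopted and random.random() < base + influence * adoption_level:
            self.adopted = True

agents = [Consumer() for _ in range(200)]
adoption = 0.0  # the system-dynamics stock, updated from agent states
for week in range(52):
    for a in agents:
        a.step(adoption)
    adoption = sum(a.adopted for a in agents) / len(agents)
```

The design point is the two-way coupling: the aggregate level shapes individual choices, and individual choices in turn update the aggregate, which is the kind of interaction a standalone tool of either paradigm handles poorly.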
-
Date Issued
-
2016
-
Identifier
-
CFE0006122, ucf:51171
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006122
-
-
Title
-
White Light Continuum for Broadband Nonlinear Spectroscopy.
-
Creator
-
Ensley, Trenton, Hagan, David, Vanstryland, Eric, Zeldovich, Boris, Christodoulides, Demetrios, Schulte, Alfons, University of Central Florida
-
Abstract / Description
-
Supercontinuum (SC) generation, often referred to as white-light continuum (WLC), has been a subject of interest for more than 40 years. From the first observation of WLC in condensed media in the early 1970s to the first observation of WLC in gases in the mid-1980s, much work has been devoted to developing a framework for understanding the complex nature of this phenomenon, as well as to discovering its utility in various applications. The main effort of this dissertation is to develop a WLC for broadband nonlinear spectroscopy and to use it in spectroscopic measurements. The ability to generate a high-quality, high-spectral-irradiance source of radiation confined in a single beam that spans the visible and near-infrared spectral regimes has great utility for nonlinear measurement methods such as the Z-scan technique. Using a broadband WLC instead of conventional tunable sources of radiation, such as optical parametric generators/amplifiers, has been shown to increase the efficiency of such measurements by nearly an order of magnitude. Although WLC generation involves many complex processes, and complete models require highly complex numerical modeling, simple models can still guide the optimization of systems for WLC generation. In this dissertation, the effects of two key mechanisms behind WLC generation in gaseous media are explored: self-phase modulation (SPM) and ionization leading to plasma production. The effects of SPM depend largely on the third-order nonlinear refractive index, n2, of the gaseous medium, whereas the effects of plasma production depend on many parameters, including the initial number density, the ionization potential/energy, and the rate of ionization production. It is found that, in order to generate a stable WLC suitable for nonlinear spectroscopy, the phase contributions from SPM and plasma production should be nearly equal.
This guided our experiments in inert gases using mJ-level, 150 fs FWHM (full width at half maximum) pulses at 780 nm, as well as 40 fs FWHM pulses primarily at 1800 nm, to create a stable, high-spectral-irradiance WLC. The generated WLC is shown to have spectral energy and spatial quality suitable for nonlinear spectroscopic measurements. In addition to extending the WLC bandwidth by using a long-wavelength (1800 nm) pump source, it is found that using a secondary weak seed pulse, with a peak irradiance three orders of magnitude less than the main pulse, enhances the spectral energy density by more than a factor of 3 in krypton gas for a WLC spectrum that spans more than two octaves. Numerical simulations are presented that qualitatively describe the experimental results. The spectral enhancement of the WLC by seeding is also demonstrated for other inert gases and for condensed media. Other efforts described in this dissertation include the development of the Dual-Arm Z-scan technique and its extension to measuring thin-film nonlinearities in the presence of large substrate signals, as well as predicting the n2 spectra of organic molecules (where we can approximate their behavior as if they were centrosymmetric) from knowledge of the one-photon and two-photon absorption spectra, using a simplified sum-over-states quantum perturbative model with quasi three-level and quasi four-level systems.
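The SPM phase contribution mentioned in this abstract follows the standard nonlinear-phase (B-integral) estimate, phi = (2*pi/lambda) * n2 * I * L, for a medium of length L. The numerical values below (wavelength, an n2 of noble-gas scale, intensity, and gas length) are illustrative assumptions, not measured values from the dissertation.

```python
import math

# Standard estimate of the nonlinear phase accumulated through
# self-phase modulation: phi = (2*pi / wavelength) * n2 * I * L.
def spm_phase(wavelength_m, n2_m2_per_W, intensity_W_per_m2, length_m):
    return 2 * math.pi / wavelength_m * n2_m2_per_W * intensity_W_per_m2 * length_m

# Illustrative numbers: a 780 nm pulse, a noble-gas-scale n2,
# a focused intensity, and 0.5 m of gas.
phi = spm_phase(780e-9, 1e-23, 5e17, 0.5)  # nonlinear phase in radians
```

Balancing a phase of this kind against the (negative) plasma phase contribution is the design condition the abstract states for a stable WLC.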
-
Date Issued
-
2015
-
Identifier
-
CFE0005608, ucf:50264
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005608
-
-
Title
-
Theoretical Studies of Nanostructure Formation and Transport on Surfaces.
-
Creator
-
Aminpour, Maral, Rahman, Talat, Stolbov, Sergey, Roldan Cuenya, Beatriz, Blair, Richard, University of Central Florida
-
Abstract / Description
-
This dissertation undertakes theoretical and computational research to characterize and understand in detail the atomic configurations and electronic structural properties of surfaces and interfaces at the nanoscale, with particular emphasis on identifying the factors that control atomic-scale diffusion and transport. The overarching goal is to outline, with examples, a predictive modeling procedure for stable structures of novel materials that, on the one hand, facilitates a better understanding of experimental results and, on the other hand, provides guidelines for future experimental work. The results of this dissertation are useful for the future miniaturization of electronic devices and for predicting and engineering novel functional nanostructures. A variety of theoretical and computational tools with different degrees of accuracy are used to study problems on different time and length scales. Interactions between the atoms are derived using both ab initio methods based on Density Functional Theory (DFT) and semi-empirical approaches such as those embodied in the Embedded Atom Method (EAM), depending on the scale of the problem at hand. The energetics of a variety of surface phenomena (adsorption, desorption, diffusion, and reactions) are calculated using either DFT or EAM, as feasible. For simulating dynamic processes, such as the diffusion of adatoms on surfaces with dislocations, the Molecular Dynamics (MD) method is applied. To calculate vibrational mode frequencies, the infinitesimal displacement method is employed. The combination of the non-equilibrium Green's function (NEGF) formalism and DFT is used to calculate the electronic transport properties of molecular devices as well as interfaces and junctions.
-
Date Issued
-
2013
-
Identifier
-
CFE0005298, ucf:50504
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005298
-
-
Title
-
An Investigation of Boaters' Attitudes toward and Usage of Targeted Mobile Apps.
-
Creator
-
Bowerman, Kamra, Delorme, Denise, Brown, Timothy, Neuberger, Lindsay, University of Central Florida
-
Abstract / Description
-
The purpose of this study was to understand boaters' adoption and usage of smartphones and mobile apps, as well as to obtain their opinions on potential features of a targeted mobile app being developed as part of a broader interdisciplinary Florida Sea Grant outreach project. Data were gathered from an online survey of a sample of 164 boaters from the surrounding Central Florida area. In contrast with previous empirical mobile app studies, many respondents reported using mobile apps for information-seeking rather than escape gratifications. Further, more than half of the respondents aged sixty-five and over indicated using smartphones and mobile apps. These findings reflect recent national trend data showing shifting gratifications and an increase in technology use among older American adults. With regard to the planned mobile app, the study's respondents had favorable reactions to its potential features and indicated an above-average intent to download the app.
-
Date Issued
-
2013
-
Identifier
-
CFE0004655, ucf:49902
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004655
-
-
Title
-
Development of Treatment Train Techniques for the Evaluation of Low Impact Development in Urban Regions.
-
Creator
-
Hardin, Mike, Wanielista, Martin, Cooper, David, Randall, Andrew, University of Central Florida
-
Abstract / Description
-
Stormwater runoff from urban areas is a major source of pollution to surface water bodies. The discharge of nutrients such as nitrogen and phosphorus is particularly damaging, as it results in harmful algal blooms that can limit the beneficial use of a water body. Stormwater best management practices (BMPs) have been developed over the years to help address this issue. Although BMPs have been investigated for years, their use has been somewhat limited because much of the data collected is for specific applications in specific regions, and it is unknown how these systems will perform in other regions and for other applications. Additionally, the research was spread across the literature, and performance data was not easily accessible or organized in a convenient way. Recently, local governments and the USEPA have begun to collect these data in BMP manuals to help designers implement the technology. That said, a single BMP is often insufficient to meet water quality and flood control needs in urban areas; a treatment train approach is required in these regions. In this dissertation, methodologies are developed to evaluate the performance of two BMPs, namely green roofs and pervious pavements. Additionally, based on an extensive review of the literature, a model was developed to assist in the evaluation of site stormwater plans using a treatment train approach for the removal of nutrients by BMPs. This model is called the Best Management Practices Treatment for Removal on an Annual basis Involving Nutrients in Stormwater (BMPTRAINS) model. The first part of this research examined a previously developed method for designing green roofs for hydrologic efficiency. The model had not been tested for different designs and assumed that evapotranspiration data was readily available for all regions. This work tested the methodology against different designs, both lab scale and full scale.
Additionally, the Blaney-Criddle equation was examined as a simple way to determine evapotranspiration (ET) for regions where data was not readily available. The methods developed for determining green roof efficiency showed good agreement with collected data, as did the Blaney-Criddle estimates of ET. The next part of this research examined a method to design pervious pavements, whose water storage potential is essential to successful design. This work examined the total and effective porosities under clean, sediment-clogged, and rejuvenated conditions. Additionally, a new type of porosity, called operating porosity, was defined as the average of the clean effective porosity and the sediment-clogged effective porosity. This term was created because these systems exist in the exposed environment and are subject to sediment loading from site erosion, vehicle tracking, and spills; using the clean effective porosity for design would therefore result in system failure for design-type storm events toward the end of a system's service life. While rejuvenation techniques were found to be somewhat effective, it was also observed that sediment would often travel deep into the pavement system, past the effective reach of vacuum sweeping. This was highly dependent on the pore structure of the pavement surface layer. Based on this examination, suggested values for operating porosity were presented, which can be used to calculate the storage potential of these systems and the subsequent curve number for design purposes. The final part of this work was the development of a site evaluation model using treatment train techniques. The BMPTRAINS model relied on an extensive literature review to gather performance data on 15 different BMPs, including the two examined as part of this work.
The model has 29 different land uses programmed into it, plus a user-defined option, allowing for wide applicability. Additionally, the model allows a watershed to be split into up to four catchments, each with its own distinct pre- and post-development conditions. Based on the pre- and post-development conditions specified by the user, event mean concentrations (EMCs) are assigned; these EMCs can also be overridden by the user. Each catchment can contain up to three BMPs in series; BMPs in parallel must be placed in separate catchments. The catchments can be arranged in up to 15 different configurations, including series, parallel, and mixed, which again allows for wide applicability of site designs. Cost evaluation is also available in the model, in terms of either capital cost or net present worth. The model allows up to 25 different scenarios to be run comparing cost, presenting results as overall capital cost, overall net present worth, or cost per kg of nitrogen and phosphorus. The wide array of BMPs provided and the flexibility afforded to the user make this model a powerful tool for designers and regulators to help protect surface waters.
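The operating-porosity definition given in this abstract (the average of the clean and sediment-clogged effective porosities) is simple enough to state directly. The porosity and thickness values below are hypothetical illustrations, not measurements from the study.

```python
# Operating porosity as defined in the abstract: the average of the
# clean effective porosity and the sediment-clogged effective porosity.
def operating_porosity(clean_effective, clogged_effective):
    return (clean_effective + clogged_effective) / 2.0

def storage_depth(porosity, layer_thickness_mm):
    """Water storage depth (mm) a pavement layer of the given
    thickness can hold at the given porosity."""
    return porosity * layer_thickness_mm

# Hypothetical pavement: clean porosity 0.30, clogged porosity 0.18.
n_op = operating_porosity(0.30, 0.18)   # averages to roughly 0.24
depth = storage_depth(n_op, 150.0)      # roughly 36 mm for a 150 mm layer
```

Designing against the operating porosity rather than the clean value is what keeps the storage estimate valid late in the pavement's service life, as the abstract argues.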
-
Date Issued
-
2014
-
Identifier
-
CFE0005503, ucf:50338
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005503
-
-
Title
-
THEORETICAL AND NUMERICAL STUDIES OF PHASE TRANSITIONS AND ERROR THRESHOLDS IN TOPOLOGICAL QUANTUM MEMORIES.
-
Creator
-
Jouzdani, Pejman, Mucciolo, Eduardo, Chang, Zenghu, Leuenberger, Michael, Abouraddy, Ayman, University of Central Florida
-
Abstract / Description
-
This dissertation collects progressive research on topological quantum computation and information, with a focus on the error thresholds of well-known models such as the unpaired Majorana, the toric code, and the planar code. We study the basics of quantum computation and quantum information, and in particular quantum error correction, which provides a tool for enhancing quantum computation fidelity in the noisy environment of the real world. We begin with a brief introduction to stabilizer codes. The stabilizer formalism of the theory of quantum error correction gives a well-defined description of quantum codes that is used throughout this dissertation. Then we turn our attention to a relatively new subject, topological quantum codes, which take advantage of the topological characteristics of a physical many-body system. The physical many-body systems studied in the context of topological quantum codes are of two essential natures: they either have intrinsic interactions that self-correct errors, or they are actively corrected to be maintained in a desired quantum state. Examples of the former are the toric code and the unpaired Majorana, while an example of the latter is the surface code. A brief introduction to and history of topological phenomena in condensed matter is provided. The unpaired Majorana and the Kitaev toy model are briefly explained. Later, we introduce a spin model that maps onto the Kitaev toy model through a sequence of transformations, and we show how this model is robust and tolerates local perturbations. The research on this topic, at the time of writing, is still incomplete, and only preliminary results are presented. As another example of a passive error-correcting code with an intrinsic Hamiltonian, the toric code is introduced, and we analyze the dynamics of its errors, known as anyons.
We show numerically how the addition of disorder to the physical system underlying the toric code slows down the dynamics of the anyons. We go further and numerically analyze the presence of time-dependent noise and the consequent delocalization of localized errors. The main portion of this dissertation is dedicated to the surface code. We study the surface code coupled to a non-interacting bosonic bath and show how the interaction between the code and the bath can effectively induce correlated errors. These correlated errors may be corrected up to some extent; the point beyond which quantum error correction seems impossible is the error threshold of the code. This threshold is analyzed by mapping the effective correlated error model onto a statistical model and studying the phase transition in that model. The analysis is in two parts. First, we derive the effective correlated model, map it onto a statistical model, and perform an exact numerical analysis. Second, we employ a Monte Carlo method to extend the numerical analysis to large system sizes. We also tackle the problem of the surface code with correlated and single-qubit errors by an exact mapping onto a two-dimensional Ising model with boundary fields, and we show how the phase transition point of the Ising model coincides with the intrinsic error threshold of the surface code.
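The Monte Carlo study of the statistical model mentioned in this abstract typically rests on Metropolis-style updates of an Ising system. The sketch below is a generic single-spin-flip Metropolis sweep for a small 2D Ising model with periodic boundaries; it omits the boundary fields, disorder, and correlated-error structure of the actual mapping, and all parameters are illustrative.

```python
import math
import random

# Generic Metropolis sweep for a 2D Ising model on an n x n grid with
# periodic boundaries (coupling J = 1, no external or boundary fields).
def metropolis_sweep(spins, beta):
    n = len(spins)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        # Sum of the four nearest neighbours.
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins

random.seed(0)
spins = [[1] * 8 for _ in range(8)]           # start fully ordered
spins = metropolis_sweep(spins, beta=1.0)     # deep in the ordered phase
m = sum(map(sum, spins)) / 64                 # magnetisation per spin
```

Locating where the magnetisation (or a suitable order parameter of the disordered model with boundary fields) changes behaviour as the temperature-like parameter varies is what identifies the phase transition point, and hence, via the mapping, the code's error threshold.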
-
Date Issued
-
2014
-
Identifier
-
CFE0005512, ucf:50314
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005512
-
-
Title
-
Virtual Interactions with Real-Agents for Sustainable Natural Resource Management.
-
Creator
-
Pierce, Tyler, Madani Larijani, Kaveh, Wang, Dingbao, Jacques, Peter, University of Central Florida
-
Abstract / Description
-
Common pool resource management systems are difficult to manage because the effects of users' behavioral characteristics are not clearly understood. Non-cooperative decision making based on individual rationality (as opposed to group rationality) and a tendency to free-ride, due to a lack of trust and of information about other users' behavior, create externalities and can lead to a tragedy of the commons without intervention by a regulator. Nevertheless, even regulatory institutions often fail to sustain natural common pool resources absent a clear understanding of how multiple heterogeneous decision makers respond to different regulation schemes. While modeling can help our understanding of complex coupled human-natural systems, past research has been unable to realistically simulate these systems because of two major limitations: 1) a lack of computational capacity and of proper mathematical models for solving distributed systems with self-optimizing agents; and 2) a lack of information about users' characteristics in common pool resource systems, due to the absence of reliable monitoring information. Recently, different studies have tried to address the first limitation by developing agent-based models, which can be handled with today's computational capacity. While these models are more realistic than the social planner's models traditionally used in the field, they normally rely on various heuristics for characterizing users' behavior and incorporating heterogeneity. This work is a step forward in addressing the second limitation, suggesting an efficient method for collecting information on the diverse behavioral characteristics of real agents for incorporation in distributed agent-based models.
Gaming in interactive virtual environments is suggested as a reliable method for understanding the variables that promote sustainable resource use, through observation of the decision making and behavior of the resource system's beneficiaries under various institutional frameworks and policies. A review of educational, or "serious," games for environmental management was undertaken to identify an appropriate game for collecting information on real agents and to investigate the state of environmental management games and their potential as educational tools. A web-based groundwater sharing simulation game, Irrigania, was selected to analyze the behavior of real agents under different common pool resource management institutions. Participants included graduate and undergraduate students from the University of Central Florida and Lund University. Information was collected on participants' resource use, behavior, and mindset under different institutional settings through observation and discussion with participants. Preliminary use of water resources gaming suggests that communication, cooperation, information disclosure, trust, credibility, and social learning between beneficiaries promote a shift toward sustainable resource use. Additionally, Irrigania was determined to be an effective tool for complementing traditional lecture-based teaching of complex concepts related to sustainable natural resource management. The different behavioral groups identified in the study can be used for improved simulation of multi-agent groundwater management systems.
-
Date Issued
-
2013
-
Identifier
-
CFE0005045, ucf:49953
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005045
-
-
Title
-
Design Optimization of LLC Topology and Phase Skipping Control of Three Phase Inverter for PV Applications.
-
Creator
-
Somani, Utsav, Batarseh, Issa, Wu, Xinzhang, Seyedi-Esfahani, Seyed-alireza, University of Central Florida
-
Abstract / Description
-
The world is heading toward an energy crisis, and desperate efforts are being made to find alternative, reliable, and clean sources of energy. Solar energy is one of the cleanest and most reliable sources of renewable energy on earth. Conventionally, extraction of solar power for electricity generation was limited to PV farms; lately, however, distributed generation of solar power has emerged in the form of residential and commercial grid-tied micro-inverters. Grid-tied micro-inverters are costly compared with their string-type counterparts, because one inverter module is required for every one or two PV panels, whereas a string inverter uses a single inverter module for an entire string of PV panels. Since every panel in a micro-inverter system has a dedicated inverter module, more power per panel can be extracted by performing optimal maximum power tracking over a single panel rather than over an entire string. The power per panel extracted by string inverters may fall below its maximum value because some panels in the string may be shaded, forming the weaker links of the system. To justify the higher cost of micro-inverters, it is of utmost importance to convert the available power with the maximum possible efficiency. Typically, a micro-inverter consists of two important blocks: a front-end DC-DC converter and an output DC-AC inverter. This thesis proposes efficiency optimization techniques for both blocks. Efficiency optimization of the front-end DC-DC converter: this thesis aims to optimize the efficiency of the front-end stage by proposing an optimal design procedure for the resonant parameters of the LLC topology as a front-end DC-DC converter for PV applications.
It exploits the I-V characteristics of a solar panel to design the resonant parameters such that the resonant LLC topology operates near its resonant-frequency operating point, which is the highest-efficiency operating point of the LLC converter. Efficiency optimization of the output DC-AC inverter: because of continuously varying solar irradiance, the power available for extraction is constantly changing, which causes the PV inverter to operate at its peak load capacity for less than 15% of the daytime. Every typical power converter suffers from poor light-load efficiency because of the load-independent losses present in the converter. To improve the light-load efficiency of three-phase inverters, this thesis proposes a phase-skipping control technique for three-phase grid-tied micro-inverters. The proposed technique is generic and can be applied to any inverter topology; however, to establish the proof of concept, it has been implemented on a three-phase half-bridge PWM inverter and its analysis is provided. Improving light-load efficiency helps to improve the CEC efficiency of the inverter.
-
Date Issued
-
2013
-
Identifier
-
CFE0005265, ucf:50573
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005265
-
-
Title
-
Temperament, emotion regulation, and distress tolerance as related correlates of psychological symptoms.
-
Creator
-
Pearte, Catherine, Negy, Charles, Renk, Kimberly, Bedwell, Jeffrey, University of Central Florida
-
Abstract / Description
-
Researchers have postulated that those with difficult temperament are at risk for difficulties regulating emotions, are less tolerant of distressing stimuli, have characteristic difficulty coping with distress, and are (at some periods of development) more apt to experience clinically significant psychological symptoms. This study used exploratory factor analyses and structural equation modeling to compose and test a model explaining how emotion regulation, distress tolerance, and coping skills interact to translate certain temperament features into psychological symptoms. Because those with difficult temperament were thought to be at unique risk for psychological maladjustment, mean-based criteria were used to identify those with relatively difficult, typical, or easy temperament and then to test whether the between-group differences on study variables were statistically significant. Results of correlational and EFA analyses suggested that there were statistically significant differences between constructs that were correlated highly (i.e., distress tolerance, emotion regulation, and emotion dysregulation). Results of SEM analyses indicated that the relationship between difficult temperament and psychological maladjustment was explained partially by the way in which emotion regulation, emotion dysregulation, distress tolerance, and coping skills interact, with the strength of each mediating variable differing considerably. There were also differences in the strength of the relationships between variables when correlations were considered alone rather than in the context of the larger measurement and structural models. Future directions and implications are discussed.
-
Date Issued
-
2015
-
Identifier
-
CFE0005686, ucf:50120
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005686
-
-
Title
-
REFRACTIVE INDICES OF LIQUID CRYSTALS AND THEIR APPLICATIONS IN DISPLAY AND PHOTONIC DEVICES.
-
Creator
-
Li, Jun, Wu, Shin-Tson, University of Central Florida
-
Abstract / Description
-
Liquid crystals (LCs) are important materials for flat-panel display and photonic devices. Most LC devices use an electric field-, magnetic field-, or temperature-induced refractive index change to modulate incident light. Molecular constituents, wavelength, and temperature are the three primary factors determining the liquid crystal refractive indices ne and no for the extraordinary and ordinary rays, respectively. In this dissertation, we derive several physical models for describing the wavelength and temperature effects on liquid crystal refractive indices, average refractive index, and birefringence. Based on these models, we develop high-temperature-gradient refractive index LC mixtures for photonic applications, such as thermally tunable liquid crystal photonic crystal fibers and thermal solitons. Liquid crystal refractive indices decrease as the wavelength increases, and both ne and no saturate in the infrared region. The wavelength effect on LC refractive indices is important for the design of direct-view displays. In Chapter 2, we derive the extended Cauchy models for describing the wavelength effect on liquid crystal refractive indices in the visible and infrared spectral regions based on the three-band model. The three-coefficient Cauchy model can describe the refractive indices of liquid crystals with low, medium, and high birefringence, whereas the two-coefficient Cauchy model is more suitable for low-birefringence liquid crystals; the critical birefringence value is Δn ≈ 0.12. Temperature is another important factor affecting the LC refractive indices. The thermal effect originating from the lamp of a projection display affects the performance of the employed liquid crystal. In Chapter 3, we derive the four-parameter and three-parameter parabolic models for describing the temperature effect on the LC refractive indices based on the Vuks model and the Haller equation. We validate the empirical Haller equation quantitatively.
We also validate that the average refractive index of a liquid crystal decreases linearly as the temperature increases. Liquid crystals exhibit a large thermal nonlinearity, which is attractive for new photonic applications using photonic crystal fibers. We derive physical models for the temperature gradients of the LC refractive indices, dne/dT and dno/dT, based on the four-parameter model. We find that an LC exhibits a crossover temperature To at which dno/dT equals zero. The models indicate that the extraordinary index ne always decreases as the temperature increases, since dne/dT is always negative, whereas the ordinary index no decreases with increasing temperature below the crossover temperature (dno/dT < 0 for T < To) and increases with increasing temperature above it (dno/dT > 0 for T > To). Measurement of LC refractive indices plays an important role in validating the physical models and in device design. A liquid crystal is anisotropic, and incident linearly polarized light encounters two different refractive indices depending on whether its polarization is parallel or perpendicular to the optic axis, so the measurement is more complicated than for an isotropic medium. In Chapter 4, we use a multi-wavelength Abbe refractometer to measure the LC refractive indices in the visible region at six wavelengths, λ = 450, 486, 546, 589, 633 and 656 nm, by changing the filters. We use a circulating constant-temperature bath to control the temperature of the sample over the range 10 to 55 °C.
The refractive index data measured include five low-birefringence liquid crystals, MLC-9200-000, MLC-9200-100, MLC-6608 (Δε = -4.2), MLC-6241-000, and UCF-280 (Δε = -4); five medium-birefringence liquid crystals, 5CB, 5PCH, E7, E48 and BL003; four high-birefringence liquid crystals, BL006, BL038, E44 and UCF-35; and two liquid crystals with high dno/dT at room temperature, UCF-1 and UCF-2. The refractive indices of E7 at two infrared wavelengths, λ = 1.55 and 10.6 μm, are measured by the wedged-cell refractometer method. The UV absorption spectra of several liquid crystals, MLC-9200-000, MLC-9200-100, MLC-6608 and TL-216, are measured as well. In Section 6.5, we also measure the refractive indices of cured optical films of NOA65 and NOA81 using the multi-wavelength Abbe refractometer. In Chapter 5, we use the experimental data measured in Chapter 4 to validate the physical models we derived: the extended three-coefficient and two-coefficient Cauchy models, and the four-parameter and three-parameter parabolic models. For the first time, we validate the Vuks model directly using experimental liquid crystal data. We also validate the empirical Haller equation for the LC birefringence Δn and the linear equation for the LC average refractive index. The study of LC refractive indices opens several new photonic applications for liquid crystals, such as high-temperature-gradient liquid crystals, highly thermally tunable liquid crystal photonic crystal fibers, laser-induced (2+1)D thermal solitons in nematic liquid crystals, determination of the infrared refractive indices of liquid crystals, and a comparative refractive index study between liquid crystals and photopolymers for polymer-dispersed liquid crystal (PDLC) applications. In Chapter 6, we introduce these applications one by one. First, we formulate two novel liquid crystals, UCF-1 and UCF-2, with high dno/dT at room temperature.
The dno/dT of UCF-1 is about 4X higher than that of 5CB at room temperature. Second, we infiltrate UCF-1 into the micro holes around the silica core of a section of three-rod-core PCF to form a highly thermally tunable liquid crystal photonic crystal fiber. The guided mode has an effective area of 440 μm² with an insertion loss of less than 0.5 dB; the loss is mainly attributed to coupling between the index-guided section and the bandgap-guided section. The thermal tuning sensitivity of the spectral position of the bandgap was measured to be 27 nm/°C around room temperature, which is 4.6 times higher than that obtained with the commercial E7 LC mixture operated at temperatures above 50 °C. Third, the novel liquid crystals UCF-1 and UCF-2 are preferred for triggering laser-induced thermal solitons in a nematic liquid crystal confined in a capillary because of their high positive temperature gradient at room temperature. Fourth, we extrapolate the refractive index data measured in the visible region to the near- and far-infrared regions based on the extended Cauchy model and the four-parameter model; the extrapolation method is validated by experimental data measured in the visible and infrared regions. Knowing the LC refractive indices in the infrared region is important for photonic devices operated there. Finally, we conduct a comprehensive comparative study of refractive index between two photocurable polymers (NOA65 and NOA81) and two series of Merck liquid crystals, the E-series (E44, E48, and E7) and the BL-series (BL038, BL003 and BL006), in order to optimize the performance of polymer-dispersed liquid crystals (PDLC). Among the LC materials we studied, BL038 and E48 are good candidates for making a PDLC system incorporating NOA65. The BL038 PDLC cell shows a higher contrast ratio than the E48 cell because BL038 has a better-matched ordinary refractive index, higher birefringence, and similar miscibility compared to E48.
Liquid crystals having good miscibility with the polymer, a matched ordinary refractive index, and higher birefringence help to improve the PDLC contrast ratio for display applications. In Chapter 7, we give a general summary of the dissertation.
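The wavelength dependence this abstract describes can be sketched numerically. A minimal illustration of the extended Cauchy form n(λ) = A + B/λ² + C/λ⁴, where setting C = 0 recovers the two-coefficient form suited to low-birefringence materials; the coefficients below are hypothetical placeholders for illustration, not fitted values from the dissertation:

```python
def cauchy_index(wavelength_um, A, B, C=0.0):
    """Extended Cauchy model: n(λ) = A + B/λ² + C/λ⁴, with λ in micrometers.

    With C = 0 this reduces to the two-coefficient form, which suits
    low-birefringence liquid crystals (Δn below roughly 0.12).
    """
    lam2 = wavelength_um ** 2
    return A + B / lam2 + C / lam2 ** 2

# Placeholder coefficients chosen only to show the qualitative trend:
# the index falls monotonically with wavelength and flattens toward the IR.
for lam_um in (0.450, 0.546, 0.656, 1.55):
    print(lam_um, round(cauchy_index(lam_um, 1.70, 0.010, 0.0004), 4))
```

Fitting A, B, C to indices measured at a few visible wavelengths is what allows the extrapolation into the infrared mentioned in Chapter 6.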
-
Date Issued
-
2005
-
Identifier
-
CFE0000808, ucf:46677
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000808
-
-
Title
-
Development of Traffic Safety Zones and Integrating Macroscopic and Microscopic Safety Data Analytics for Novel Hot Zone Identification.
-
Creator
-
Lee, JaeYoung, Abdel-Aty, Mohamed, Radwan, Ahmed, Nam, Boo Hyun, Kuo, Pei-Fen, Choi, Keechoo, University of Central Florida
-
Abstract / Description
-
Traffic safety has been considered one of the most important issues in the transportation field. With the consistent efforts of transportation engineers and Federal, State and local government officials, both fatalities and fatality rates from road traffic crashes in the United States steadily declined from 2006 to 2011. Nevertheless, fatalities from traffic crashes slightly increased in 2012 (NHTSA, 2013). We lost 33,561 lives to road traffic crashes in 2012, and road traffic crashes remain one of the leading causes of death, according to the Centers for Disease Control and Prevention (CDC). In recent years, efforts have been made to incorporate traffic safety into transportation planning, termed transportation safety planning (TSP). The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU), codified in the United States Code, compels the United States Department of Transportation to consider traffic safety in the long-term transportation planning process. Although considerable macro-level studies have been conducted to facilitate the implementation of TSP, critical limitations in macroscopic safety studies still need to be investigated and remedied. First, the TAZ (Traffic Analysis Zone), which is most widely used in travel demand forecasting, has crucial shortcomings for macro-level safety modeling. Moreover, macro-level safety models suffer from low accuracy; the low predictive power may be caused by crashes that occur near the boundaries of zones, high-level aggregation, and neglect of spatial autocorrelation. In this dissertation, several methodologies are proposed to alleviate these limitations in macro-level safety research. The TSAZ (Traffic Safety Analysis Zone) is developed as a new zonal system for macroscopic safety analysis, and a nested structured modeling method is suggested to improve model performance.
Also, a multivariate statistical modeling method for multiple crash types is proposed in this dissertation. In addition, a novel screening methodology integrating the two levels is suggested to overcome the shortcomings of zonal-level screening, since zonal-level screening cannot take specific high-risk sites into consideration. It is expected that the integrated screening approach can provide a comprehensive perspective by balancing the macroscopic and microscopic approaches.
-
Date Issued
-
2014
-
Identifier
-
CFE0005195, ucf:50653
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005195
-
-
Title
-
Nonlinear dynamic modeling, simulation and characterization of the mesoscale neuron-electrode interface.
-
Creator
-
Thakore, Vaibhav, Hickman, James, Mucciolo, Eduardo, Rahman, Talat, Johnson, Michael, Behal, Aman, Molnar, Peter, University of Central Florida
-
Abstract / Description
-
Extracellular neuroelectronic interfacing has important applications in the fields of neural prosthetics, biological computation and whole-cell biosensing for drug screening and toxin detection. While the field holds great promise, the recording of high-fidelity signals from extracellular devices has long suffered from low signal-to-noise ratios and changes in signal shape due to the highly dispersive dielectric medium in the neuron-microelectrode cleft. This has made it difficult to correlate the extracellularly recorded signals with the intracellular signals recorded using conventional patch-clamp electrophysiology. To improve the signal-to-noise ratio of the signals recorded on extracellular microelectrodes and to explore strategies for engineering the neuron-electrode interface, the cell-sensor interface needs to be modeled, simulated and characterized so that the mechanism of signal transduction across the interface can be better understood. Efforts to date for modeling the neuron-electrode interface have primarily focused on point- or area-contact linear equivalent circuit models, which describe the interface under an assumption of passive linearity for the dynamics of the interfacial medium in the cell-electrode cleft. In this dissertation, results are presented from a nonlinear dynamic characterization of the neuroelectronic junction based on Volterra-Wiener modeling, which showed that the process of signal transduction at the interface may have nonlinear contributions from the interfacial medium. An optimization-based study of linear equivalent circuit models for representing signals recorded at the neuron-electrode interface subsequently proved conclusively that the process of signal transduction across the interface is indeed nonlinear.
Following this, a theoretical framework was developed for extracting the complex nonlinear material parameters of the interfacial medium, such as the dielectric permittivity, conductivity and diffusivity tensors, based on dynamic nonlinear Volterra-Wiener modeling. Within this framework, the use of Gaussian bandlimited white noise for nonlinear impedance spectroscopy was shown to offer considerable advantages over the sinusoidal inputs for nonlinear harmonic analysis currently employed in impedance characterization of nonlinear electrochemical systems. Signal transduction at the neuron-microelectrode interface is mediated by the interfacial medium confined to a thin cleft with a thickness on the scale of 20-110 nm, giving rise to Knudsen numbers (the ratio of mean free path to characteristic system length) in the range of 0.003 to 0.015 for ionic electrodiffusion. At these Knudsen numbers, the continuum assumptions made in using the Poisson-Nernst-Planck system of equations to model ionic electrodiffusion are not valid. Therefore, a lattice Boltzmann method (LBM) based multiphysics solver suitable for modeling ionic electrodiffusion at the mesoscale neuron-microelectrode interface was developed. Additionally, a molecular-speed-dependent relaxation time was proposed for use in the lattice Boltzmann equation. Such a relaxation time holds promise for enhancing the numerical stability of lattice Boltzmann algorithms, as it helped recover a physically correct description of microscopic phenomena related to particle collisions governed by their local density on the lattice. Next, using this multiphysics solver, simulations were carried out of the charge relaxation dynamics of an electrolytic nanocapacitor, with the intention of ultimately employing it to simulate the capacitive coupling between the neuron and the planar microelectrode on a microelectrode array (MEA).
Simulations of the charge relaxation dynamics for a step potential applied at t = 0 to the capacitor electrodes were carried out for varying conditions of electric double layer (EDL) overlap, solvent viscosity, electrode spacing and ratio of cation to anion diffusivity. For a large EDL overlap, an anomalous plasma-like collective behavior of oscillating ions at a frequency much lower than the plasma frequency of the electrolyte was observed and as such it appears to be purely an effect of nanoscale confinement. Results from these simulations are then discussed in the context of the dynamics of the interfacial medium in the neuron-microelectrode cleft. In conclusion, a synergistic approach to engineering the neuron-microelectrode interface is outlined through a use of the nonlinear dynamic modeling, simulation and characterization tools developed as part of this dissertation research.
-
Date Issued
-
2012
-
Identifier
-
CFE0004797, ucf:49718
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004797
-
-
Title
-
TOWARD A THEORY OF PRACTICAL DRIFT IN TEAMS.
-
Creator
-
Bisbey, Tiffany, Salas, Eduardo, University of Central Florida
-
Abstract / Description
-
Practical drift is defined as the unintentional adaptation of routine behaviors away from written procedure. The occurrence of practical drift can result in catastrophic disaster in high-reliability organizations (e.g., the military, emergency medicine, space exploration). Given the lack of empirical research on practical drift, this research sought to develop a better understanding by investigating ways to assess and stop the process in high-reliability organizations. An introductory literature review was conducted to investigate the variables that play a role in the occurrence of practical drift in teams. The research was guided by the input-throughput-output model of team adaptation posed by Burke, Stagl, Salas, Pierce, and Kendall (2006). It demonstrates relationships supported by the results of the literature review and the Burke and colleagues (2006) model, denoting potential indicators of practical drift in teams. The research centered on the core processes and emergent states of the adaptive cycle, namely shared mental models, team situation awareness, and coordination. The resulting model shows the relationship between procedure-practice coupling demands misfit and maladaptive violations of procedure being mediated by shared mental models, team situation awareness, and coordination. Shared mental models also lead to team situation awareness, and both have a mutual, positive relationship with coordination. The cycle restarts when an error caused by maladaptive violations of procedure creates a greater misfit between procedural demands and practical demands. This movement toward a theory of practical drift in teams provides a conceptual framework and testable propositions for future research to build from, offering practical avenues to predict and prevent accidents resulting from drift in high-reliability organizations. Suggestions for future research, including possible directions to explore, are also discussed.
By examining the relationships reflected in the new model, steps can be taken to counteract organizational failures in the process of practical drift in teams.
-
Date Issued
-
2014
-
Identifier
-
CFH0004636, ucf:45300
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004636
-
-
Title
-
MULTI-VIEW GEOMETRIC CONSTRAINTS FOR HUMAN ACTION RECOGNITION AND TRACKING.
-
Creator
-
GRITAI, ALEXEI, Shah, Mubarak, University of Central Florida
-
Abstract / Description
-
Human actions are the essence of a human life and a natural product of the human mind. Analysis of human activities by machine has attracted the attention of many researchers. This analysis is very important in a variety of domains, including surveillance, video retrieval, human-computer interaction, and athlete performance investigation. This dissertation makes three major contributions to the automatic analysis of human actions. First, we conjecture that the relationship between the body joints of two actors in the same posture can be described by a 3D rigid transformation. This transformation simultaneously captures different poses and various sizes and proportions. As a consequence of this conjecture, we show that there exists a fundamental matrix between the imaged positions of the body joints of two actors if they are in the same posture. Second, we propose a novel projection model for cameras moving at a constant velocity in 3D space, Galilean cameras, derive the corresponding Galilean fundamental matrix, and apply it to human action recognition. Third, we propose a novel use of the invariance of the ratio of areas under an affine transformation, together with the epipolar geometry between two cameras, for 2D model-based tracking of human body joints. In the first part of the thesis, we propose an approach to matching human actions using semantic correspondences between human bodies. These correspondences are used to provide geometric constraints between multiple anatomical landmarks (e.g., hands, shoulders, and feet) to match actions observed from different viewpoints and performed at different rates by actors of differing anthropometric proportions. The fact that the human body has approximately constant anthropometric proportions allows for an innovative use of the machinery of epipolar geometry to provide constraints for analyzing actions performed by people of different anthropometric sizes, while ensuring that changes in viewpoint do not affect matching.
A novel measure, in terms of the rank of a matrix constructed only from image measurements of the locations of anatomical landmarks, is proposed to ensure that similar actions are accurately recognized. Finally, we describe how dynamic time warping can be used in conjunction with the proposed measure to match actions in the presence of nonlinear time warps. We demonstrate the versatility of our algorithm in a number of challenging sequences and applications, including action synchronization, odd one out, following the leader, and analyzing periodicity. Next, we extend the conventional model of image projection to video captured by a camera moving at constant velocity; we term such a moving camera a Galilean camera. To that end, we derive the spacetime projection and develop the corresponding epipolar geometry between two Galilean cameras. Both perspective imaging and linear pushbroom imaging are specializations of the proposed model, and we show how six different "fundamental" matrices, including the classic fundamental matrix, the Linear Pushbroom (LP) fundamental matrix, and a fundamental matrix relating Epipolar Plane Images (EPIs), are related and can be directly recovered from a Galilean fundamental matrix. We provide linear algorithms for estimating the parameters of the mapping between videos in the case of planar scenes. For applying the fundamental matrix between Galilean cameras to human action recognition, we propose a measure with two important properties. The first makes it possible to recognize similar actions whose execution rates are linearly related; the second allows recognizing actions in video captured by Galilean cameras. Thus, the proposed algorithm guarantees that actions can be correctly matched despite changes in view, execution rate, and anthropometric proportions of the actor, even if the camera moves with constant velocity.
Finally, we propose a novel 2D model-based approach for tracking human body parts during articulated motion. The human body is modeled as a 2D stick figure of thirteen body joints, and an action is considered a sequence of these stick figures. Given the locations of these joints in every frame of a model video and in the first frame of a test video, the joint locations are automatically estimated throughout the test video using two geometric constraints. First, the invariance of the ratio of areas under an affine transformation is used for initial estimation of the joint locations in the test video. Second, the epipolar geometry between the two cameras is used to refine these estimates. Using these estimated joint locations, the tracking algorithm determines the exact location of each landmark in the test video using the foreground silhouettes. The novelty of the proposed approach lies in the geometric formulation of human action models, the combination of the two geometric constraints for body joint prediction, and the handling of deviations in the anthropometry of individuals, viewpoints, execution rate, and style of performing the action. The proposed approach does not require extensive training and can easily adapt to a wide variety of articulated actions.
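The dynamic time warping step mentioned in this abstract aligns two sequences by a dynamic program over cumulative match costs, absorbing nonlinear differences in execution rate. A minimal sketch on 1D feature sequences, for illustration only (the dissertation's matching uses its rank-based measure on landmark positions, not the absolute difference used here):

```python
import math

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Cumulative alignment cost between sequences a and b.

    D[i][j] holds the best cost of aligning a[:i] with b[:j]; each cell
    extends the cheapest of a deletion, an insertion, or a match, so a
    nonlinear time warp between the sequences is absorbed by the path.
    """
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # deletion in a
                D[i][j - 1],      # insertion from b
                D[i - 1][j - 1],  # match
            )
    return D[n][m]

# A sequence aligned against a time-warped copy of itself costs nothing.
print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0
```

Swapping in a frame-to-frame similarity measure as `dist` is all that is needed to apply the same alignment to action sequences.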
-
Date Issued
-
2007
-
Identifier
-
CFE0001692, ucf:47199
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001692
-
-
Title
-
VERIFICATION OF PILOT-SCALE IRON RELEASE MODELS.
-
Creator
-
Glatthorn, Stephen, Taylor, James, University of Central Florida
-
Abstract / Description
-
A model for the prediction of color release from a pilot distribution system was created in 2003 by Imran. This model allows prediction of the release of color from aged cast iron and galvanized steel pipes as a function of water quality and hydraulic residence time. Color was used as a surrogate measurement for iron, with which it exhibited a strong linear correlation. An anomaly of this model was the absence of a term to account for pH, because the influent water was well stabilized. A new study was completed to evaluate the effectiveness of corrosion inhibitors against traditional adjustment. Two control lines were supplied with nearly the same water quality, one at pH close to pHs and one at pH well above pHs. The resulting data showed that effluent iron values were typically greater in the line with lower pH. The non-linear color model by Imran shows good agreement when the LSI was largely positive, but underpredicted the color release from the lower-LSI line. A modification to the Larson Ratio proposed by Imran was able to give reasonable agreement with the data at lower LSI values. LSI showed no definite relation to iron release, although a visual trend of higher LSI mitigating iron release can be seen. An iron flux model was also developed on the same pilot system by Mutoti, based on a steady-state mass balance of iron in a pipe. The constants for the model were derived empirically from experiments at different hydraulic conditions with a constant water quality. Experiments were assumed to reach steady state at three pipe volumes because of the near-constant effluent turbidity achieved at that point. The model proposes that the iron flux under laminar flow conditions is constant, while under turbulent conditions it is linearly related to the Reynolds number. The model incorporates the color release models developed by Imran to calculate flux values for different water qualities.
A limited number of experiments were performed in the current study using desalinated and ground water sources at Reynolds Numbers ranging from 50 to 200. The results of these limited experiments showed that the iron flux for cast iron pipe was approximately one-half of the values predicted by Mutoti's model. This discrepancy may be caused by the more extensive flushing of the pipes in the current experiments, which allowed attainment of a true steady state. Model changes were proposed to distinguish between near-stagnant flow and the upper laminar region, with the upper laminar region showing a slight linear increase. Predictions using the galvanized flux model were not accurate because the underlying color release model developed for galvanized pipes was inferior. That model exhibits a high dependence on sulfate concentrations, but sulfate concentrations in the current experiments were low. This led to low predicted flux values when the actual data showed otherwise. A new galvanized model was developed from a combination of data from the original and current experiments. The predicted flux values using the new model showed great improvement over the old model, but the new model's database was limited and the resulting model could not be independently tested.
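The piecewise flow-regime behavior described above (constant iron flux under laminar flow, flux increasing linearly with Reynolds number under turbulent flow) can be sketched as follows. This is a minimal illustration of the functional form only: the constants `k_laminar`, `a`, and `re_transition` are hypothetical placeholders, not the empirically derived values from the pilot study.

```python
def iron_flux(reynolds, k_laminar=0.1, a=0.001, re_transition=4000.0):
    """Iron flux (mass per pipe area per time) as a function of flow regime.

    Sketch of the piecewise form described in the abstract: constant
    flux under laminar flow, linear in Reynolds number under turbulent
    flow. All constants are hypothetical placeholders.
    """
    if reynolds < re_transition:
        return k_laminar                      # laminar: flux independent of Re
    # turbulent: linear in Re, continuous at the transition
    return k_laminar + a * (reynolds - re_transition)
```

In the full model, the laminar-flux constant would itself be computed from water quality via the Imran color release correlations rather than fixed.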
-
Date Issued
-
2007
-
Identifier
-
CFE0001704, ucf:47332
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001704
-
-
Title
-
DESIGN AND MODELING OF RADIATION HARDENED LDMOSFET FOR SPACE CRAFT POWER SYSTEMS.
-
Creator
-
Shea, Patrick, Shen, John, University of Central Florida
-
Abstract / Description
-
NASA missions require innovative power electronics system and component solutions with long life capability, high radiation tolerance, low mass and volume, and high reliability in space environments. Presently, vertical double-diffused MOSFETs (VDMOS) are the most widely used power switching devices for space power systems. It is proposed that a new lateral double-diffused MOSFET (LDMOS) designed at UCF can offer improvements in total dose and single event radiation hardness, switching performance, development and manufacturing costs, and total mass of power electronics systems. Availability of a hardened fast-switching power MOSFET will allow space-borne power electronics to approach the current level of terrestrial technology, thereby facilitating the use of more modern digital electronic systems in space. It is believed that the use of a p+/p-epi starting material for the LDMOS will offer better hardness against single-event burnout (SEB) and single-event gate rupture (SEGR) when compared to vertical devices fabricated on an n+/n-epi material. By placing a source contact on the bottom side of the p+ substrate, much of the hole current generated by a heavy ion strike will flow away from the dielectric gate, thereby reducing electrical stress on the gate and decreasing the likelihood of SEGR. Similarly, the device is hardened against SEB by the redirection of hole current away from the base of the device's parasitic bipolar transistor. Total dose hardness is achieved by the use of a standard complementary metal-oxide semiconductor (CMOS) process that has shown proven hardness against total dose radiation effects.
-
Date Issued
-
2007
-
Identifier
-
CFE0001966, ucf:47468
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001966
-
-
Title
-
A GASOLINE DEMAND MODEL FOR THE UNITED STATES LIGHT VEHICLE FLEET.
-
Creator
-
Rey, Diana, Al-Deek, Haitham, University of Central Florida
-
Abstract / Description
-
The United States is the world's largest oil consumer, demanding about twenty-five percent of total world oil production. Whenever there are difficulties supplying the increasing quantities of oil demanded by the market, the price of oil escalates, leading to what are known as oil price spikes or oil price shocks. The last oil price shock, the longest sustained oil price run-up in history, began in 2004 and ended in 2008. This last oil price shock initiated recognizable changes in transportation dynamics: transit operators realized that commuters switched to transit as a way to save gasoline costs, consumers began to search the market for more efficient vehicles, leading car manufacturers to close "gas guzzler" plants, and the government enacted a new law, the Energy Independence Act of 2007, which called for the progressive improvement of the fuel efficiency of the light vehicle fleet up to 35 miles per gallon by 2020. The past trend of gasoline consumption will probably change, so in this context a gasoline consumption model was developed in this thesis to ascertain how some of these changes will impact future gasoline demand. Gasoline demand was expressed in oil-equivalent million barrels per day, in a two-step Ordinary Least Squares (OLS) explanatory-variable model. In the first step, vehicle miles traveled, expressed in trillion vehicle miles, was regressed on the independent variables: vehicles, expressed in million vehicles, and the price of oil, expressed in dollars per barrel. In the second step, fuel consumption in million barrels per day was regressed on vehicle miles traveled and on the fuel efficiency indicator, expressed in miles per gallon. The explanatory model was run in EViews, which allows checking for normality, heteroskedasticity, and serial correlation. Serial correlation was addressed by inclusion of autoregressive or moving average error correction terms.
Multicollinearity was addressed by first differencing. The 36-year sample series (1970-2006) was divided into a 30-year sub-period for calibration and a 6-year "hold-out" sub-period for validation. The Root Mean Square Error (RMSE) criterion was adopted to select the best model among the possible choices, although other criteria were also recorded. Three scenarios for the size of the light vehicle fleet over a forecasting period extending to 2020 were created. These scenarios were equivalent to growth rates of 2.1, 1.28, and about 1 percent per year. The last, and from the gasoline consumption perspective most optimistic, vehicle growth scenario appeared consistent with the theory of vehicle saturation. One scenario for the average miles-per-gallon indicator was created for each fleet-size scenario by redistributing the fleet every year assuming a 7 percent replacement rate. Three scenarios for the price of oil were also created: the first used the average price of oil in the sample since 1970, the second was obtained by extending the price trend by exponential smoothing, and the third used a long-term forecast supplied by the Energy Information Administration. The three price-of-oil scenarios covered a range between a low of about 42 dollars per barrel and highs in the low 100s. The 1970-2006 gasoline consumption trend was extended to 2020 by ARIMA Box-Jenkins time series analysis, leading to a gasoline consumption of about 10 million barrels per day in 2020. This trend line was taken as the reference, or baseline, of gasoline consumption. The savings that resulted from application of the explanatory-variable OLS model were measured against this baseline.
Even in the most pessimistic scenario, the savings obtained by the progressive improvement of the fuel efficiency indicator seem enough to offset the increase in consumption that would otherwise have occurred by extension of the trend, leaving consumption at 2006 levels, or about 9 million barrels per day. The most optimistic scenario led to savings of up to about 2 million barrels per day below the 2006 level, or about 3 million barrels per day below the baseline in 2020. The "expected" or average consumption in 2020 is about 8 million barrels per day, 2 million barrels below the baseline and 1 million below the 2006 consumption level. More savings are possible if technologies such as plug-in hybrids, which have already been implemented in other countries, take over soon, are efficiently promoted, or are given incentives or subsidies such as tax credits. The savings in gasoline consumption may in the future contribute to stabilizing the price of oil as worldwide demand is tamed by oil-saving policy changes implemented in the United States.
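The two-step OLS structure described above (vehicle miles traveled regressed on fleet size and oil price, then fuel consumption regressed on fitted VMT and fuel efficiency) can be sketched as follows. The data here are synthetic and illustrative only, not the thesis series, and the coefficients are hypothetical; the sketch shows only the estimation chain and the RMSE criterion used for model selection.

```python
import numpy as np

# Hypothetical illustrative data (NOT the thesis series): annual observations
# of fleet size (million vehicles), oil price ($/bbl), VMT (trillion miles),
# fleet efficiency (mpg), and fuel use (million bbl/day).
rng = np.random.default_rng(0)
n = 36
vehicles = np.linspace(110, 250, n) + rng.normal(0, 2, n)
oil_price = 20 + 60 * rng.random(n)
vmt = 0.5 + 0.011 * vehicles - 0.002 * oil_price + rng.normal(0, 0.02, n)
mpg = np.linspace(13, 22, n)
fuel = 2.0 + 3.0 * vmt - 0.25 * mpg + rng.normal(0, 0.05, n)

def ols(y, *cols):
    """Ordinary least squares with an intercept; returns (design, coefficients)."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

# Step 1: VMT on fleet size and the price of oil.
X1, b1 = ols(vmt, vehicles, oil_price)
vmt_hat = X1 @ b1

# Step 2: fuel consumption on fitted VMT and fuel efficiency.
X2, b2 = ols(fuel, vmt_hat, mpg)
fuel_hat = X2 @ b2

# In-sample fit criterion analogous to the thesis's RMSE model selection.
rmse = np.sqrt(np.mean((fuel - fuel_hat) ** 2))
```

The thesis additionally handled serial correlation (AR/MA error terms) and multicollinearity (first differencing), which plain `lstsq` does not address; a full replication would use an econometrics package such as EViews or statsmodels.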
-
Date Issued
-
2009
-
Identifier
-
CFE0002539, ucf:47659
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002539
-
-
Title
-
STRUCTURAL CONDITION ASSESSMENT OF PRESTRESSED CONCRETE TRANSIT GUIDEWAYS.
-
Creator
-
Shmerling, Robert, Catbas, F. Necati, University of Central Florida
-
Abstract / Description
-
Objective condition assessment is essential to make better decisions for the safety and serviceability of existing civil infrastructure systems. This study explores the condition of an existing transit guideway system that has been in service for thirty-five years. The structural system is composed of six-span continuous prestressed concrete bridge segments. The overall transit system incorporates a number of continuous bridges which share common design details, geometries, and loading conditions. The original analysis is based on certain simplifying assumptions such as rigid behavior over supports and simplified tendon/concrete/steel plate interaction. The current objective is to conduct a representative study for a more accurate understanding of the structural system and its behavior. The scope of the study is to generate finite element models (FEMs) to be used in static and dynamic parameter sensitivity studies, as well as load rating and reliability analysis of the structure. The FEMs are used for eigenvalue analysis and simulations. Parameter sensitivity studies consider the effect of changing critical parameters, including material properties, prestress loss, and boundary and continuity conditions, on the static and dynamic structural response. Load ratings are developed using the American Association of State Highway and Transportation Officials Load and Resistance Factor Rating (AASHTO LRFR) approach. The reliability of the structural system is evaluated based on the data obtained from the various finite element models. Recommendations for experimental validation of the FEMs are presented. This study is expected to provide information to make better decisions for operations, maintenance, and safety requirements; to serve as a benchmark for future studies; to establish a procedure and methodology for structural condition assessment; and to contribute to the general research body of knowledge in condition assessment and structural health monitoring.
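The eigenvalue analysis mentioned above extracts natural frequencies and mode shapes from the FEM's stiffness and mass matrices. A minimal sketch of that computation for a lumped three-degree-of-freedom model is shown below; the stiffness and mass values are hypothetical placeholders, not properties of the actual guideway model.

```python
import numpy as np

# Minimal eigenvalue-analysis sketch for a lumped 3-DOF idealization of a
# continuous girder. k and m are hypothetical placeholder values.
k = 2.0e8   # spring stiffness, N/m
m = 5.0e4   # lumped mass, kg

K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
M = m * np.eye(3)

# Generalized eigenproblem K v = w^2 M v. With a diagonal (lumped) mass
# matrix, dividing by m reduces it to an ordinary symmetric eigenproblem.
w2 = np.linalg.eigvalsh(K / m)            # squared circular frequencies, ascending
freqs_hz = np.sqrt(w2) / (2 * np.pi)      # natural frequencies in Hz
```

In the study itself, changes in these frequencies under varied material properties, prestress loss, and boundary conditions are what the dynamic sensitivity analysis tracks.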
-
Date Issued
-
2005
-
Identifier
-
CFE0000658, ucf:46520
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000658
-
-
Title
-
IDENTIFICATION OF SPATIOTEMPORAL NUTRIENT PATTERNS AND ASSOCIATED ECOHYDROLOGICAL TRENDS IN THE TAMPA BAY COASTAL REGION.
-
Creator
-
Wimberly, Brent, Chang, Ni-Bin, University of Central Florida
-
Abstract / Description
-
The comprehensive assessment techniques for monitoring the water quality of a coastal bay can be diversified via an extensive investigation of the spatiotemporal nutrient patterns and the associated eco-hydrological trends in a coastal urban region. This work thoroughly investigates those patterns and trends via a two-part inquiry of the watershed and its adjacent coastal bay. The findings show that the onset of drought lags the crest of the evapotranspiration (ET) and precipitation curve during each year of drought. During the transition year, ET and precipitation appear to shift back into a temporal pattern analogous to the 2005 wet year. NDVI shows a flat receding tail for the September 2005 crest due to the hurricane impact, signifying that the hurricane event in October dampened the severity of the winter dry season, which alludes to relative system memory. The k-means model with 8 clusters is the optimal choice; cluster 2, at Lower Tampa Bay, had the minimum values of total nitrogen (TN) concentrations, chlorophyll a (Chl-a) concentrations, and ocean color values in every season, as well as the minimum concentration of total phosphorus (TP) in three consecutive seasons in 2008. Cluster 5, located in Middle Tampa Bay, displayed elevated TN concentrations, ocean color values, and Chl-a concentrations, suggesting that high colored dissolved organic matter values are linked with some nutrient sources. The data presented by the gravity modeling analysis indicate that the Alafia River Basin is the major contributor of nutrients in terms of both TP and TN values in all seasons. Such ecohydrological evaluation can be applied to support the LULC management of climatically vulnerable regions and to further enrich the comprehensive assessment techniques for estimating and examining the multi-temporal impacts and dynamic influence of urban land use and land cover. Improvements in environmental monitoring and assessment were achieved to advance our understanding of sea-land interactions and nutrient cycling in a coastal bay.
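The k-means clustering used above to group bay stations by seasonal water-quality signatures can be sketched with a plain Lloyd's-algorithm implementation. The feature rows here (TN, TP, Chl-a, ocean color) are random stand-ins for the actual seasonal station data, and the choice of 8 clusters is taken from the abstract; selecting the optimal cluster count (e.g. via a validity index) is not shown.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's-algorithm k-means; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid; keep the old one if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical station-by-season feature rows: [TN, TP, Chl-a, ocean color]
rng = np.random.default_rng(1)
X = rng.random((200, 4))
centroids, labels = kmeans(X, k=8)
```

With real data, each feature column would be standardized first so that concentration units do not dominate the distance metric.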
-
Date Issued
-
2012
-
Identifier
-
CFH0004132, ucf:44878
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004132