- Title
- A MODEL INTEGRATED MESHLESS SOLVER (MIMS) FOR FLUID FLOW AND HEAT TRANSFER.
- Creator
-
Gerace, Salvadore, Kassab, Alain, University of Central Florida
- Abstract / Description
-
Numerical methods for solving partial differential equations are commonplace in the engineering community, and their popularity can be attributed to the rapid performance improvement of modern workstations and desktop computers. The ubiquity of computer technology has given all areas of engineering access to detailed thermal, stress, and fluid flow analysis packages capable of performing complex studies of current and future designs. The rapid pace of computer development, however, has begun to outstrip efforts to reduce analysis overhead. As such, most commercially available software packages are now limited by the human effort required to prepare, develop, and initialize the necessary computational models. Primarily because of the mesh-based analysis methods these packages employ, this dependence on model preparation greatly limits the accessibility of such analysis tools. In response, the so-called meshless or mesh-free methods have seen considerable interest, as they promise to greatly reduce the human interaction needed during model setup. However, despite the success of these methods in areas demanding high degrees of model adaptability (such as crack growth, multi-phase flow, and solid friction), meshless methods have yet to gain acceptance as a viable alternative to more traditional solution approaches in general solution domains. Although this may be due (at least in part) to the relative youth of the techniques, another potential cause is the lack of focus on developing robust methodologies. The failure to approach development from a practical perspective has prevented researchers from producing commercially relevant meshless methodologies that realize the full potential of the approach.
The primary goal of this research is to present a novel meshless approach called MIMS (Model Integrated Meshless Solver), which establishes the method as a generalized solution technique capable of competing with more traditional PDE methodologies (such as the finite element and finite volume methods). This was accomplished by developing a robust meshless technique as well as a comprehensive model generation procedure. By closely integrating the model generation process into the overall solution methodology, the presented techniques are able to fully exploit the strengths of the meshless approach to achieve levels of automation, stability, and accuracy currently unseen in the area of engineering analysis. Specifically, MIMS implements a blended meshless solution approach which utilizes a variety of shape functions to obtain a stable and accurate iteration process. This solution approach is then integrated with a newly developed, highly adaptive model generation process which employs a quaternary triangular surface discretization for the boundary, a binary-subdivision discretization for the interior, and a unique shadow layer discretization for near-boundary regions. Together, these discretization techniques are able to achieve directionally independent, automatic refinement of the underlying model, allowing the method to generate accurate solutions without the need for intermediate human involvement. In addition, by coupling model generation with the solution process, the presented method is able to address the issue of ill-constructed geometric input (small features, poorly formed faces, etc.) and to provide an intuitive yet powerful approach to solving modern engineering analysis problems.
- Date Issued
- 2010
- Identifier
- CFE0003299, ucf:48489
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003299
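The blended meshless solution approach described in the abstract above is built from shape-function stencils on scattered nodes. As an editorial illustration only (the node layout, the multiquadric basis, and the shape parameter below are invented, not taken from the dissertation), a minimal 1-D radial-basis-function derivative stencil can be sketched as:

```python
# Hedged sketch: a 1-D radial-basis-function (RBF) collocation stencil of the
# kind used as a building block in meshless solvers. Nodes, basis, and shape
# parameter are illustrative assumptions, not the dissertation's formulation.
import numpy as np

def rbf_derivative_weights(nodes, x0, c=0.1):
    """Weights w such that w @ f(nodes) approximates f'(x0), via multiquadrics."""
    phi = lambda r: np.sqrt(r**2 + c**2)        # multiquadric basis
    dphi = lambda r: r / np.sqrt(r**2 + c**2)   # derivative of the basis
    A = phi(nodes[:, None] - nodes[None, :])    # collocation (interpolation) matrix
    b = dphi(x0 - nodes)                        # basis derivatives evaluated at x0
    return np.linalg.solve(A, b)                # A is symmetric, so A^-1 b works

nodes = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])  # scattered (here uniform) nodes
w = rbf_derivative_weights(nodes, 0.0)
approx = w @ np.sin(nodes)                      # estimate d/dx sin(x) at x = 0
print(approx)                                   # close to cos(0) = 1
```

In a full meshless solver, stencils like this are assembled at every scattered node to discretize the governing PDE without ever building a mesh.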
- Title
- Streamlining the Acquisition Process: Systems Analysis for Improving Army Acquisition Corps Officer Management.
- Creator
-
Chu-Quinn, Shawn, Kincaid, John, Wiegand, Rudolf, Mohammad, Syed, University of Central Florida
- Abstract / Description
-
The Army Acquisition Officer lacks the proficient experience needed to fill key leadership positions within the Acquisition Corps. The active duty Army officer is considered for the Acquisition Corps functional area between the 5th and 9th years of service, after completing initial career milestones. The new Acquisition Corps officer holds the rank of senior Captain or Major when he arrives at his first acquisition assignment with a proficiency level of novice (in acquisition). The Army officer may be advanced in his primary career branch, but his proficiency level decreases when he is assigned to the Acquisition Corps functional area. The civilian grade equivalent to the officer is a GS-12 or GS-13, whose proficiency level is advanced in his career field. The purpose of this study is to use a systems analysis approach to decompose the current acquisition officer professional development system, in order to study how well the current active duty officer flow works and how it interacts with or influences an acquisition officer's professional development, and to propose a potential solution to assist in the management of Army acquisition officers, so that they gain proficiency not only through education and training but also through the hands-on experience needed to fill key leadership positions in the Army Acquisition Corps. Increased proficiency and a proven track record in the acquisition workforce are the basis for positively affecting acquisition streamlining processes within the Department of Defense by making good decisions through quality experience.
- Date Issued
- 2015
- Identifier
- CFE0005590, ucf:50254
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005590
- Title
- Photoactivatable Organic and Inorganic Nanoparticles in Cancer Therapeutics and Biosensing.
- Creator
-
Mathew, Mona, Gesquiere, Andre, Hickman, James, Ye, Jingdong, Campiglia, Andres, Schoenfeld, Winston, University of Central Florida
- Abstract / Description
-
In photodynamic therapy (PDT), a photosensitizer drug is administered and irradiated with light. Upon absorption of light, the photosensitizer transitions into its triplet state and transfers energy or an electron to oxygen to form reactive oxygen species (ROS). These ROS react with biomolecules in cells, leading to cell damage and cell death. PDT has interested many researchers because it is non-invasive compared to surgery, leaves little to no scarring, is time- and cost-effective, has potential for targeted treatment, and can be repeated as needed. Different photosensitizers such as porphyrins, chlorophylls, and dyes have been used in PDT to treat various cancers, skin diseases, and aging and sun-damaged skin. These second-generation sensitizers have yielded reduced skin sensitivity and improved extinction coefficients (up to ~10^5 L mol^-1 cm^-1). While PDT based on small molecule photosensitizers has shown great promise, several problems remain unsolved. The main issues with current sensitizers are (i) hydrophobicity, leading to aggregation in aqueous media and resulting in reduced efficacy and potential toxicity, (ii) dark toxicity of photosensitizers, (iii) non-selectivity towards malignant tissue, resulting in prolonged cutaneous photosensitivity and damage to healthy tissue, (iv) limited light absorption efficiency, and (v) a lack of understanding of where the photosensitizer ends up in the tissue. In this dissertation research program, these issues were addressed by the development of conducting polymer nanoparticles as a next generation of photosensitizers. This choice was motivated by the fact that conducting polymers have large extinction coefficients (>10^7 L mol^-1 cm^-1), are able to undergo intersystem crossing to the triplet state, and have triplet energies close to that of oxygen. It was therefore hypothesized that such polymers could be effective at generating ROS due to the large excitation rate that can be generated.
Conducting polymer nanoparticles (CPNPs) composed of the conducting polymer poly[2-methoxy-5-(2-ethylhexyl-oxy)-p-phenylenevinylene] (MEH-PPV) were fabricated and studied in vitro for their potential in PDT application. Although not fully selective, the nanoparticles exhibited a strong bias towards the cancer cells. The formation of ROS was proven in vitro by staining the cells with CellROX Green Reagent, after which PDT results were quantified by MTT assays. Cell mortality was observed to scale with nanoparticle dosage and light dosage. Based on these promising results, the MEH-PPV nanoparticles were developed further to allow for surface functionalization, with the aim of targeting these NPs to cancer cell lines. For this work, targeting of cancers that overexpress folate receptors (FR) was considered. The functionalized nanoparticles (FNPs) were studied in OVCAR3 (ovarian cancer cell line) as FR+, MIA PaCa2 (pancreatic cell line) as FR-, and A549 (lung cancer cell line) as having marginal FR expression. Complete selectivity of the FNPs towards the FR+ cell line was found. Quantification of PDT results by MTS assays and flow cytometry showed that PDT treatment was fully selective to the FR+ cell line (OVCAR3). No cell mortality was observed for the other cell lines studied here within experimental error. Finally, the issue of confirming and quantifying small molecule drug delivery to diseased tissue was tackled by developing quantum dot (Qdot) biosensors, with the aim of achieving fluorescence reporting of intracellular small molecule/drug delivery. For fluorescence reporting, prior expertise in control of the fluorescence state of Qdots was employed, where redox-active ligands can place the Qdot in a quenched OFF state. Ligand attachment was accomplished by disulfide linker chemistry. This chemistry is reversible in the presence of sulfur-reducing biomolecules, resulting in Qdots in a brightly fluorescent ON state.
Glutathione (GSH) is one such biomolecule present in the intracellular environment. Experimental in vitro data show that this design was successfully implemented.
- Date Issued
- 2014
- Identifier
- CFE0005839, ucf:50923
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005839
- Title
- Base Flow Recession Analysis for Streamflow and Spring Flow.
- Creator
-
Ghosh, Debapi, Wang, Dingbao, Chopra, Manoj, Singh, Arvind, Medeiros, Stephen, Bohlen, Patrick, University of Central Florida
- Abstract / Description
-
The base flow recession curve during a dry period is a distinct hydrologic signature of a watershed. Base flow recession analysis for both streamflow and spring flow has been extensively studied in the literature. Studies have shown that the recession behaviors during the early stage and the late stage differ in many watersheds. However, research on the transition from the early stage to the late stage is limited, and the hydrologic control on the transition is not completely understood. In this dissertation, a novel cumulative regression analysis method is developed to identify the transition flow objectively for individual recession events in the well-studied Panola Mountain Research Watershed in Georgia, USA. The streamflow at the watershed outlet is identified at the moment when the streamflow at the perennial stream head approaches zero, i.e., when the flowing streams contract to the perennial streams. The identified transition flows are then compared with the observed flows when the flowing stream contracts to the perennial stream head. As evidenced by a correlation coefficient of 0.90, these two characteristics of streamflow are highly correlated, suggesting a fundamental linkage between the transition of base flow recession from early to late stages and the drying up of ephemeral streams. At the early stage, the contraction of ephemeral streams mostly controls the recession behavior. At the late stage, perennial streams dominate the flowing streams and groundwater hydraulics governs the recession behavior. Ephemeral stream densities vary from arid regions to humid regions; therefore, the characteristics of transition flow across climate gradients are also tested in 40 watersheds. Climate, represented by the climate aridity index, is found to be the dominant controlling factor on transition flows from early to late recession stages. Transition flows and long-term average base flows are highly correlated, with a correlation coefficient of 0.82.
Long-term average base flow and the transition flow of recession are base flow characteristics at two temporal scales, i.e., the long-term scale and the event scale during a recession period; their correlation is a signature of the co-evolution of climate, vegetation, soil, and topography at the watershed scale. The characteristics of early and late recession are applied to quantify human impacts on streamflow in agricultural watersheds with extensive groundwater pumping for irrigation. A recession model is developed to incorporate the impacts of human activities (such as groundwater pumping) and climate variability (such as evapotranspiration) on base flow recession. Groundwater pumping is estimated based on the change of observed base flow recession in watersheds in the High Plains Aquifer. The estimated groundwater pumping rate is found to be consistent with observed data on groundwater use for irrigation. Besides streamflow recession analysis, this dissertation also presents a novel spring recession model for Silver Springs in Florida, incorporating groundwater head, spring pool altitude, and net recharge into the existing Torricelli model. The results show that the effective springshed area has declined continuously since 1988. The net recharge has declined since the 1970s, with a significant drop in 2002. After 2002, the net recharge increased modestly, but not to the levels prior to the 1990s. The decreases in effective springshed area and net recharge, caused by changes in hydroclimatic conditions including rainfall and temperature along with groundwater withdrawals, contribute to the decline in spring flow.
- Date Issued
- 2015
- Identifier
- CFE0005951, ucf:50814
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005951
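The recession analysis summarized above rests on the classical power-law relation -dQ/dt = aQ^b, with different exponents governing the early and late stages. As a hedged sketch only (synthetic data with assumed parameters, not the dissertation's watersheds or its cumulative regression method), recovering the exponent from a recession curve might look like:

```python
# Hedged sketch of the standard recession-slope analysis, -dQ/dt = a*Q^b,
# applied to synthetic data. Parameter values are invented for illustration.
import numpy as np

a, b, Q0 = 0.05, 1.5, 10.0                             # assumed recession parameters
t = np.linspace(0.0, 30.0, 301)
# Exact solution of dQ/dt = -a*Q^b for b != 1:
Q = (Q0**(1 - b) + a * (b - 1) * t)**(1 / (1 - b))

dQdt = np.gradient(Q, t)                               # numerical dQ/dt (negative)
# Fit log(-dQ/dt) vs log(Q); the slope recovers the exponent b
slope, intercept = np.polyfit(np.log(Q[1:-1]), np.log(-dQdt[1:-1]), 1)
print(slope)                                           # close to the assumed b = 1.5
```

Transition detection would then look for a change in the fitted slope between the early-stage and late-stage segments of the log-log point cloud, which is where the dissertation's cumulative regression method comes in.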
- Title
- Catalytically Enhanced Heterogeneous Combustion of Methane.
- Creator
-
Terracciano, Anthony, Orlovskaya, Nina, Vasu Sumathi, Subith, Chow, Louis, Kassab, Alain, University of Central Florida
- Abstract / Description
-
Heterogeneous combustion is an advanced internal combustion technique that enables heat recuperation within the flame by utilizing a highly porous ceramic media as a regenerator. Heat released within the gas phase transfers convectively to the solid media. This heat then travels within the solid media towards the inlet, preheating the reactants. Such heat redistribution enables stable burning of ultra-lean fuel/air mixtures, forms a more diffuse flame through the combustion chamber, and results in reduced pollutant formation. To further enhance heterogeneous combustion, the ceramic media can be coated with catalytically active materials, which facilitate surface-based chemical reactions that can occur in parallel with gas-phase reactions. Within this work, a flow-stabilized heterogeneous combustor was designed and developed, consisting of a reactant delivery nozzle, a combustion chamber, and external instrumentation. The reactant delivery nozzle enables the combustor to operate on mixtures of air, liquid fuel, and gaseous fuel. Although this combustor has high fuel flexibility, only gaseous methane was used in the presented experiments. Within the reactant delivery nozzle, reactants flow through a tube mixer, and a homogeneous gaseous mixture is delivered to the combustion chamber. Alumina (Al2O3), magnesia-stabilized zirconia (MgO-ZrO2), or silicon carbide (SiC) was used as the material for the porous media. Measurement techniques incorporated in the combustor include an array of axially mounted thermocouples, an external microphone, an external CCD camera, and a gas chromatograph with a thermal conductivity detector, which enable temperature measurements, acoustic spectroscopy, characterization of thermal radiative emissions, and composition analysis of exhaust gases, respectively.
Before evaluation of the various solid media in the combustion chamber, the substrates and catalysts were characterized using X-ray diffraction, X-ray fluorescence, scanning electron microscopy, and energy dispersive spectroscopy. The MgO-ZrO2 porous media was found to outperform both the Al2O3 and SiC matrices, as higher temperatures for a given equivalence ratio were achieved when the flame was contained within a MgO-ZrO2 matrix. This was explained by the presence of oxygen vacancies within the MgO-doped ZrO2 fluorite lattice, which facilitated catalytic reactions. Several catalyst compositions were evaluated to promote combustion within a MgO-ZrO2 matrix even further. Catalysts such as Pd-enhanced WC, ZrB2, Ce0.80Gd0.20O1.90, LaCoO3, La0.80Ca0.20CoO3, La0.75Sr0.25Fe0.95Ru0.05O3, and La0.75Sr0.25Cr0.95Ru0.05O3 were evaluated under lean fuel/air mixtures. LaCoO3 outperformed all other catalysts by enabling the highest temperatures within the combustion chamber, followed by Ce0.80Gd0.20O1.90. Both LaCoO3 and Ce0.80Gd0.20O1.90 enabled a flame to exist at an equivalence ratio of 0.45 ± 0.02; however, LaCoO3 yielded a much more stable flame. Furthermore, it was discovered that coating MgO-ZrO2 with LaCoO3 significantly enhanced the total emissive power of the combustion chamber. In this work, acoustic spectroscopy was used to characterize heterogeneous combustion for the first time. It was found that acoustic emission depends on the equivalence ratio and flame position regardless of the media and catalyst combination. It was also found that when different catalysts were used, the acoustic tones produced during combustion at fixed reactant flow rates were distinct.
- Date Issued
- 2016
- Identifier
- CFE0006508, ucf:51364
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006508
- Title
- Explicit Feedback Within Game-Based Training: Examining the Influence of Source Modality Effects on Interaction.
- Creator
-
Goldberg, Benjamin, Bowers, Clint, Cannon-Bowers, Janis, Kincaid, John, McDaniel, Thomas, Sottilare, Robert, University of Central Florida
- Abstract / Description
-
This research aims to enhance Simulation-Based Training (SBT) applications to support training events in the absence of live instruction. The overarching purpose is to explore available tools for integrating intelligent tutoring communications in game-based learning platforms and to examine theory-based techniques for delivering explicit feedback in such environments. The primary tool influencing the design of this research was the Generalized Intelligent Framework for Tutoring (GIFT), a modular, domain-independent architecture that provides the tools and methods to author, deliver, and evaluate intelligent tutoring technologies within any training platform. Influenced by research surrounding Social Cognitive Theory and Cognitive Load Theory, the resulting experiment tested varying approaches for utilizing an Embodied Pedagogical Agent (EPA) to function as a tutor during interaction in a game-based environment. Conditions were authored to assess the tradeoffs between embedding an EPA directly in a game, embedding an EPA in GIFT's browser-based Tutor-User Interface (TUI), or using audio prompts alone with no social grounding. The resulting data support the use of an EPA embedded in GIFT's TUI to provide explicit feedback during a game-based learning event. Analyses revealed conditions with an EPA situated in the TUI to be as effective as embedding the agent directly in the game environment. This inference is based on evidence showing reliable differences across conditions on the metrics of performance and on self-reported mental demand and feedback usefulness items. This research provides source modality tradeoffs linked to tactics for relaying training-relevant explicit information to a user based on real-time performance in a game.
- Date Issued
- 2013
- Identifier
- CFE0004850, ucf:49696
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004850
- Title
- Nonlinear dynamic modeling, simulation and characterization of the mesoscale neuron-electrode interface.
- Creator
-
Thakore, Vaibhav, Hickman, James, Mucciolo, Eduardo, Rahman, Talat, Johnson, Michael, Behal, Aman, Molnar, Peter, University of Central Florida
- Abstract / Description
-
Extracellular neuroelectronic interfacing has important applications in the fields of neural prosthetics, biological computation, and whole-cell biosensing for drug screening and toxin detection. While the field holds great promise, the recording of high-fidelity signals from extracellular devices has long suffered from low signal-to-noise ratios and changes in signal shapes due to the presence of a highly dispersive dielectric medium in the neuron-microelectrode cleft. This has made it difficult to correlate the extracellularly recorded signals with the intracellular signals recorded using conventional patch-clamp electrophysiology. To improve the signal-to-noise ratio of the signals recorded on the extracellular microelectrodes, and to explore strategies for engineering the neuron-electrode interface, the cell-sensor interface must be modeled, simulated, and characterized to better understand the mechanism of signal transduction across it. Efforts to date for modeling the neuron-electrode interface have primarily focused on the use of point- or area-contact linear equivalent circuit models, with an assumption of passive linearity for the dynamics of the interfacial medium in the cell-electrode cleft. In this dissertation, results are presented from a nonlinear dynamic characterization of the neuroelectronic junction based on Volterra-Wiener modeling, which showed that the process of signal transduction at the interface may have nonlinear contributions from the interfacial medium. An optimization-based study of linear equivalent circuit models for representing signals recorded at the neuron-electrode interface subsequently proved conclusively that the process of signal transduction across the interface is indeed nonlinear.
Following this, a theoretical framework was developed for extracting the complex nonlinear material parameters of the interfacial medium, such as the dielectric permittivity, conductivity, and diffusivity tensors, based on dynamic nonlinear Volterra-Wiener modeling. Within this framework, the use of Gaussian bandlimited white noise for nonlinear impedance spectroscopy was shown to offer considerable advantages over the sinusoidal inputs for nonlinear harmonic analysis currently employed in impedance characterization of nonlinear electrochemical systems. Signal transduction at the neuron-microelectrode interface is mediated by the interfacial medium confined to a thin cleft with thickness on the scale of 20-110 nm, giving rise to Knudsen numbers (the ratio of mean free path to characteristic system length) in the range of 0.015 to 0.003 for ionic electrodiffusion. At these Knudsen numbers, the continuum assumptions made in using the Poisson-Nernst-Planck system of equations to model ionic electrodiffusion are not valid. Therefore, a lattice Boltzmann method (LBM) based multiphysics solver suitable for modeling ionic electrodiffusion at the mesoscale neuron-microelectrode interface was developed. Additionally, a molecular-speed-dependent relaxation time was proposed for use in the lattice Boltzmann equation. Such a relaxation time holds promise for enhancing the numerical stability of lattice Boltzmann algorithms, as it helped recover a physically correct description of microscopic phenomena related to particle collisions governed by their local density on the lattice. Next, this multiphysics solver was used to simulate the charge relaxation dynamics of an electrolytic nanocapacitor, with the intention of ultimately employing it to simulate the capacitive coupling between the neuron and the planar microelectrode on a microelectrode array (MEA).
Simulations of the charge relaxation dynamics for a step potential applied at t = 0 to the capacitor electrodes were carried out for varying conditions of electric double layer (EDL) overlap, solvent viscosity, electrode spacing, and ratio of cation to anion diffusivity. For a large EDL overlap, an anomalous plasma-like collective behavior of oscillating ions was observed at a frequency much lower than the plasma frequency of the electrolyte; this appears to be purely an effect of nanoscale confinement. Results from these simulations are then discussed in the context of the dynamics of the interfacial medium in the neuron-microelectrode cleft. In conclusion, a synergistic approach to engineering the neuron-microelectrode interface is outlined through the use of the nonlinear dynamic modeling, simulation, and characterization tools developed as part of this dissertation research.
- Date Issued
- 2012
- Identifier
- CFE0004797, ucf:49718
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004797
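The lattice Boltzmann solver described above is far richer (multiphysics electrodiffusion with a speed-dependent relaxation time), but the core collide-and-stream update with a BGK relaxation time can be sketched in one dimension. This toy uses a constant relaxation time and a two-velocity (D1Q2) lattice, both editorial assumptions, and shows only the mass-conserving diffusion of a point pulse:

```python
# Hedged sketch: a minimal D1Q2 lattice Boltzmann step with BGK relaxation.
# The relaxation time here is constant; the dissertation proposes a
# molecular-speed-dependent one. Grid size and initial pulse are invented.
import numpy as np

n = 50
tau = 1.0                                   # BGK relaxation time (constant toy value)
f = np.zeros((2, n))                        # two opposite velocity populations
f[:, n // 2] = 0.5                          # unit-mass point pulse at the center

for _ in range(100):
    rho = f.sum(axis=0)                     # local density (conserved moment)
    feq = 0.5 * rho                         # equilibrium: equal split per direction
    f += (feq - f) / tau                    # BGK collision: relax toward equilibrium
    f[0] = np.roll(f[0], 1)                 # stream the right-moving population
    f[1] = np.roll(f[1], -1)                # stream the left-moving population

rho = f.sum(axis=0)
print(rho.sum())                            # total mass stays 1.0 (conserved)
```

Collision conserves the density moment exactly and streaming only moves mass between sites, so the pulse spreads diffusively while the total stays fixed; the electrodiffusion solver adds forcing and coupled species on top of this same skeleton.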
- Title
- MEASURING THE EFFECT OF ERRATIC DEMAND ON SIMULATED MULTI-CHANNEL MANUFACTURING SYSTEM PERFORMANCE.
- Creator
-
Kohan, Nancy, Kulonda, Dennis, University of Central Florida
- Abstract / Description
-
To handle uncertainty and variability in production demand, many manufacturing companies have adopted strategies such as varying quoted lead times, rejecting orders, increasing stock or inventory levels, and implementing volume flexibility. Make-to-stock (MTS) systems are designed to offer zero lead time by providing an inventory buffer for the organization, but they are costly and involve risks such as obsolescence and wasted expenditures. The main concern of make-to-order (MTO) systems is eliminating inventories and reducing non-value-added processes and waste; however, these systems assume that the manufacturing environment and customer demand are deterministic. Research shows that in MTO systems, variability and uncertainty in demand levels cause instability in the production flow, resulting in congestion, long lead times, and low throughput. Neither strategy is wholly satisfactory. A new alternative approach, the multi-channel manufacturing (MCM) system, is designed to manage uncertainty and variability in demand by first focusing on customers' response time. The products are divided into different product families, each with its own manufacturing stream or sub-factory. MCM also allocates the production capacity needed in each sub-factory to produce each product family. In this research, the performance of an MCM system is studied by implementing MCM in a real case scenario from the textile industry, modeled via discrete event simulation. MTS and MTO systems are implemented for the same case scenario, and the results are studied and compared. The variables of interest for this research are the throughput of products, the level of on-time deliveries, and the inventory level. The results of the simulation experiments favor the simulated MCM system on all of these criteria.
Further research activities, such as applying MCM to different manufacturing contexts, are highly recommended.
- Date Issued
- 2004
- Identifier
- CFE0000240, ucf:46275
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000240
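The MTS/MTO/MCM comparison above was run in a full discrete event simulation. As a hedged toy illustration of the channel idea only, the same deterministic job mix can be pushed through one shared line versus one dedicated line per product family. The arrivals and service times below are invented, and the dedicated layout is given more total capacity, so this shows the mechanics rather than a fair capacity-matched comparison:

```python
# Hedged toy: mean flow time for one shared FIFO channel vs. one dedicated
# channel per product family. All job data are invented for illustration.
def flow_times(jobs):
    """jobs: list of (arrival, service) tuples, served FIFO by one server.
    Returns each job's flow time (completion - arrival)."""
    t, out = 0.0, []
    for arr, svc in jobs:
        t = max(t, arr) + svc        # wait for server and earlier jobs, then serve
        out.append(t - arr)
    return out

# Family A: short jobs; family B: long jobs; arrivals interleaved
A = [(i, 1.0) for i in range(0, 8, 2)]      # arrive at t = 0, 2, 4, 6
B = [(i + 1, 3.0) for i in range(0, 8, 2)]  # arrive at t = 1, 3, 5, 7

shared = flow_times(sorted(A + B))          # one channel handles everything
dedicated = flow_times(A) + flow_times(B)   # one channel per family

print(sum(shared) / 8, sum(dedicated) / 8)  # mean flow time: 5.0 vs. 2.75
```

Separating the families keeps the short jobs from queuing behind the long ones, which is the intuition behind giving each product family its own manufacturing stream or sub-factory.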
- Title
- IMPROVING AIRLINE SCHEDULE RELIABILITY USING A STRATEGIC MULTI-OBJECTIVE RUNWAY SLOT ASSIGNMENT SEARCH HEURISTIC.
- Creator
-
Hafner, Florian, Sepulveda, Alejandro, University of Central Florida
- Abstract / Description
-
Improving the predictability of airline schedules in the National Airspace System (NAS) has been a constant endeavor, particularly as system delays grow with ever-increasing demand. Airline schedules need to be resistant to perturbations in the system, including Ground Delay Programs (GDPs) and inclement weather. The strategic search heuristic proposed in this dissertation significantly improves airline schedule reliability by assigning airport departure and arrival slots to each flight in the schedule across a network of airports. This is performed using a multi-objective optimization approach that is primarily based on historical flight and taxi times but also includes certain airline, airport, and FAA priorities. The intent of this algorithm is to produce a more reliable, robust schedule that operates in today's environment as well as tomorrow's 4-Dimensional Trajectory Controlled system as described in the FAA's Next Generation ATM system (NextGen). This novel airline schedule optimization approach is implemented using a multi-objective evolutionary algorithm capable of incorporating limited airport capacities. The core of the fitness function is an extensive database of historical operating times for flight and ground operations, collected over a two-year period from ASDI and BTS data. Empirical distributions based on this data reflect the probability that flights encounter various flight and taxi times. The fitness function also adds the ability to define priorities for certain flights based on aircraft size, flight time, and airline usage. The algorithm is applied to airline schedules for two primary US airports: Chicago O'Hare and Atlanta Hartsfield-Jackson. The effects of this multi-objective schedule optimization are evaluated in a variety of scenarios, including periods of high, medium, and low demand. The schedules generated by the optimization algorithm were evaluated using a simple queuing simulation model implemented in AnyLogic.
The scenarios were simulated in AnyLogic using two basic setups: (1) using modes of flight and taxi times that reflect highly predictable 4-Dimensional Trajectory Control operations and (2) using full distributions of flight and taxi times reflecting current-day operations. The simulation analysis showed significant improvements in reliability as measured by the mean square difference (MSD) of filed versus simulated flight arrival and departure times. Arrivals showed the most consistent improvements, of up to 80% in on-time performance (OTP). Departures showed smaller overall improvements, particularly when the optimization was performed without consideration of airport capacity. The 4-Dimensional Trajectory Control environment more than doubled the on-time performance of departures over the current-day, more chaotic scenarios. This research shows that airline schedule reliability can be significantly improved over a network of airports using historical flight and taxi time data. It also provides a mechanism to prioritize flights based on various airline, airport, and ATC goals. The algorithm is shown to work in today's environment as well as tomorrow's NextGen 4-Dimensional Trajectory Control setup.
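The two reliability metrics named above are straightforward to compute from filed versus simulated event times. A minimal sketch follows; the function names and the 15-minute on-time threshold (the usual DOT reporting cutoff) are assumptions for illustration, not taken from the dissertation:

```python
def mean_square_difference(filed, simulated):
    """MSD between filed (scheduled) and simulated event times, in min^2."""
    assert len(filed) == len(simulated)
    return sum((s - f) ** 2 for f, s in zip(filed, simulated)) / len(filed)

def on_time_performance(filed, simulated, threshold=15.0):
    """Share of flights within `threshold` minutes of their filed time."""
    ok = sum(abs(s - f) <= threshold for f, s in zip(filed, simulated))
    return ok / len(filed)

filed     = [0, 10, 20, 30, 40]   # filed departure times (minutes)
simulated = [2,  9, 45, 33, 41]   # times produced by the queuing model

print(mean_square_difference(filed, simulated))  # -> 128.0
print(on_time_performance(filed, simulated))     # -> 0.8
```

Note how one badly delayed flight (20 filed vs 45 simulated) dominates the MSD while costing only one miss in OTP, which is why the two metrics can rank schedules differently.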
- Date Issued
- 2008
- Identifier
- CFE0002067, ucf:47572
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002067
- Title
- PATTERNS OF MOTION: DISCOVERY AND GENERALIZED REPRESENTATION.
- Creator
-
Saleemi, Imran, Shah, Mubarak, University of Central Florida
- Abstract / Description
-
In this dissertation, we address the problem of discovery and representation of motion patterns in a variety of scenarios commonly encountered in vision applications. The overarching goal is to devise a generic representation that captures any kind of object motion observable in video sequences. Such motion is a significant source of information typically employed for diverse applications such as tracking, anomaly detection, and action and event recognition. We present statistical frameworks for representation of the motion characteristics of objects, learned from tracks or optical flow, for static as well as moving cameras, and propose algorithms for their application to a variety of problems. The proposed motion pattern models and learning methods are general enough to be employed in a variety of problems, as we demonstrate experimentally. We first propose a novel method to model and learn the scene activity observed by a static camera. The motion patterns of objects in the scene are modeled in the form of a multivariate non-parametric probability density function of spatiotemporal variables (object locations and transition times between them). Kernel Density Estimation (KDE) is used to learn this model in a completely unsupervised fashion. Learning is accomplished by observing the trajectories of objects with a static camera over extended periods of time. The model encodes the probabilistic nature of the behavior of moving objects in the scene and is useful for activity analysis applications, such as persistent tracking and anomalous motion detection. In addition, the model captures salient scene features, such as the areas of occlusion and the most likely paths.
Once the model is learned, we use a unified Markov Chain Monte Carlo (MCMC) based framework for generating the most likely paths in the scene, improving foreground detection, persistently labelling objects during tracking, and deciding whether a given trajectory represents an anomaly relative to the observed motion patterns. Experiments with real-world videos are reported which validate the proposed approach. The representation and estimation framework proposed above, however, has a few limitations. It uses a single global statistical distribution to represent all kinds of motion observed in a particular scene; it therefore does not find a separation between multiple semantically distinct motion patterns in the scene. Instead, the learned model is a joint distribution over all possible patterns followed by objects. To overcome this limitation, we then propose a superior method for the discovery and statistical representation of motion patterns in a scene. The advantages of this approach over the first are two-fold: first, the model is applicable to scenes of dense crowded motion where tracking may not be feasible, and second, it distinguishes between motion patterns that are distinct at a semantic level of abstraction. We propose a mixture model representation of salient patterns of optical flow, and present an algorithm for learning these patterns from dense optical flow in a hierarchical, unsupervised fashion. Using low-level cues of noisy optical flow, K-means is employed to initialize a Gaussian mixture model for temporally segmented clips of video. The components of this mixture are then filtered, and instances of motion patterns are computed using a simple motion model by linking components across space and time. Motion patterns are then initialized, and the membership of instances in different motion patterns is established by using the KL divergence between the mixture distributions of pattern instances.
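The KDE scene-activity model described above can be sketched compactly: score a candidate spatiotemporal observation against the density learned from trajectories, so that typical motion gets a high likelihood and anomalous motion a low one. The sketch below uses a simple product Gaussian kernel with a single bandwidth; the synthetic trajectory data and the bandwidth value are invented for illustration, not drawn from the dissertation's experiments:

```python
import math
import random

def kde_log_density(sample, data, bandwidth):
    """Product-kernel Gaussian KDE over spatiotemporal feature vectors.
    `data` is a list of observed (x, y, transition_time) tuples."""
    d = len(sample)
    total = 0.0
    for point in data:
        quad = sum(((s - p) / bandwidth) ** 2 for s, p in zip(sample, point))
        total += math.exp(-0.5 * quad)
    norm = len(data) * (bandwidth * math.sqrt(2 * math.pi)) ** d
    return math.log(total / norm + 1e-300)  # floor avoids log(0)

rng = random.Random(0)
# Hypothetical trajectory data: objects moving along a diagonal path
# with roughly unit transition times between observed locations.
tracks = [(t + rng.gauss(0, 1), t + rng.gauss(0, 1), 1.0 + rng.gauss(0, 0.1))
          for t in range(100)]

on_path  = kde_log_density((50.0, 50.0, 1.0), tracks, bandwidth=2.0)
off_path = kde_log_density((50.0, 5.0, 1.0), tracks, bandwidth=2.0)
print(on_path > off_path)  # motion on the learned path scores higher
```

Thresholding such log-densities is one simple way the learned model can flag a trajectory as anomalous relative to the observed patterns.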
Finally, a pixel-level representation of motion patterns is proposed by deriving the conditional expectation of optical flow. Results of extensive experiments are presented for multiple surveillance sequences containing numerous patterns involving both pedestrian and vehicular traffic. The proposed method exploits optical flow as the low-level feature and performs hierarchical clustering to obtain motion patterns; we observe that optical flow is also an integral part of a variety of other vision applications, for example, as a feature-based representation of human actions. We therefore propose a new representation for articulated human actions using the motion patterns. The representation is based on hierarchical clustering of observed optical flow in a four-dimensional spatial and motion-flow space. The automatically discovered motion patterns are the primitive actions, representative of flow at salient regions on the human body, much like trajectories of body joints, which are notoriously difficult to obtain automatically. The proposed method works in a completely unsupervised fashion and, in sharp contrast to state-of-the-art representations like bag of video words, provides a truly semantically meaningful representation. Each primitive action depicts the most atomic sub-action, like the left arm moving upwards or the right leg moving downward and leftward, and is represented by a mixture of four-dimensional Gaussian distributions. A sequence of primitive actions is discovered in the test video and labelled by computing the KL divergence between mixtures. The entire video sequence containing the human action is thus reduced to a simple string, which is matched against similar strings from training videos to classify the action. The string matching is performed by global alignment using the well-known Needleman-Wunsch algorithm.
Experiments reported on multiple human actions data sets, confirm the validity, simplicity, and semantically meaningful nature of the proposed representation. Results obtained are encouraging and comparable to the state of the art.
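The final classification step, matching a test video's primitive-action string against training strings by global alignment, can be sketched with the standard Needleman-Wunsch dynamic program. The scoring scheme and the toy action strings below are illustrative assumptions, not the dissertation's actual data:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two action strings (dynamic programming)."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):           # aligning a prefix against nothing
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                  # substitute / match
                              score[i - 1][j] + gap, # gap in b
                              score[i][j - 1] + gap) # gap in a
    return score[n][m]

# Hypothetical primitive-action strings: each symbol is one discovered
# sub-action (e.g. 'A' = left arm up, 'D' = right leg down-left).
test_video = "ABCAB"
train = {"wave": "ABCAB", "kick": "DDEFD"}
best = max(train, key=lambda k: needleman_wunsch(test_video, train[k]))
print(best)  # -> wave
```

The test sequence is assigned the label of the training string with the highest alignment score, which is robust to a few dropped or spurious primitives because gaps are penalized rather than fatal.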
- Date Issued
- 2011
- Identifier
- CFE0003646, ucf:48836
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003646
- Title
- SPRAY COOLING FOR LAND, SEA, AIR AND SPACE BASED APPLICATIONS, A FLUID MANAGEMENT SYSTEM FOR MULTIPLE NOZZLE SPRAY COOLING AND A GUIDE TO HIGH HEAT FLUX HEATER DESIGN.
- Creator
-
Glassman, Brian, Chow, Louis, University of Central Florida
- Abstract / Description
-
This thesis is divided into four distinct chapters, all linked by the topic of spray cooling. Chapter one gives a detailed categorization of future and current spray cooling applications, and reviews the major advantages and disadvantages that spray cooling has over other high heat flux cooling techniques. Chapter two outlines the developmental goals of spray cooling, which are to increase the output of a current system and to enable new technologies to become technically feasible. Furthermore, this chapter outlines in detail the impact that land, air, sea, and space environments have on the cooling system and what technologies could be enabled in each environment with the aid of spray cooling. In particular, the heat exchanger, condenser, and radiator are analyzed in their corresponding environments. Chapter three presents an experimental investigation of a fluid management system for a large-area, multiple-nozzle spray cooler. A fluid management or suction system was used to control the liquid film thickness needed for effective heat transfer. An array of sixteen pressure-atomized spray nozzles with an embedded fluid suction system was constructed. Two surfaces were spray tested: one a clear grooved Plexiglas plate used for visualization, and the other a bottom-heated grooved 4.5 x 4.5 cm copper plate used to determine the heat flux. The suction system utilized an array of thin copper tubes to extract excess liquid from the cooled surface. Pure water was ejected from two spray nozzle configurations at flow rates of 0.7 L/min to 1 L/min per nozzle. It was found that the fluid management system provided fluid removal efficiencies of 98% with a 4-nozzle array and 90% with the full 16-nozzle array in the downward spraying orientation. The corresponding heat fluxes for the 16-nozzle configuration were found with and without the aid of the fluid management system.
It was found that the fluid management system increased heat fluxes by an average of 30 W/cm2 at similar values of superheat. Unfortunately, the effectiveness of this array at removing heat at full levels of suction is approximately 50% and 40% of a single nozzle at 10°C and 15°C of superheat, respectively. The heat transfer data more closely resembled convective pool boiling. Thus, it was concluded that the poor heat transfer was due to flooding, which made the heat transfer mechanism mainly forced convective boiling rather than spray cooling. Finally, Chapter four gives a detailed guide to the design and construction of a high heat flux heater for experimental uses where accurate measurement of surface temperatures and heat fluxes is extremely important. The heater designs presented allow for different testing applications; however, an emphasis is placed on heaters designed for use with spray cooling.
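One common way such high heat flux heaters report surface heat flux and surface temperature is one-dimensional Fourier conduction through a metal neck instrumented with in-line thermocouples: q'' = -k dT/dx, with the gradient fit to the readings and the profile extrapolated to the sprayed surface. The sketch below is a generic construction of this kind, with invented thermocouple readings and spacing; it is not necessarily the specific design presented in the thesis:

```python
K_COPPER = 390.0  # W/(m K), nominal thermal conductivity of copper

def heat_flux(temps_c, spacing_m, k=K_COPPER):
    """Heat flux (W/cm^2) from equally spaced in-line thermocouple readings,
    using a least-squares temperature gradient along the conduction path."""
    n = len(temps_c)
    xs = [i * spacing_m for i in range(n)]
    x_bar = sum(xs) / n
    t_bar = sum(temps_c) / n
    slope = (sum((x - x_bar) * (t - t_bar) for x, t in zip(xs, temps_c))
             / sum((x - x_bar) ** 2 for x in xs))     # dT/dx in K/m
    return -k * slope / 1e4                           # W/m^2 -> W/cm^2

def surface_temp(temps_c, spacing_m, offset_m):
    """Extrapolate the linear profile past the last thermocouple to the surface."""
    slope = (temps_c[-1] - temps_c[0]) / (spacing_m * (len(temps_c) - 1))
    return temps_c[-1] + slope * offset_m

# Three thermocouples 5 mm apart, surface 2 mm past the last one (made-up data).
readings = [160.0, 135.0, 110.0]
print(heat_flux(readings, spacing_m=0.005))                   # -> 195.0 W/cm^2
print(surface_temp(readings, spacing_m=0.005, offset_m=0.002))  # -> 100.0 C
```

Using multiple thermocouples and a fitted gradient, rather than a single pair, is what makes the flux and surface temperature estimates robust enough for the careful measurements the chapter emphasizes.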
- Date Issued
- 2005
- Identifier
- CFE0000473, ucf:46351
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000473