Current Search: distribution system
- Title
- RESEARCHES ON REVERSE LOOKUP PROBLEM IN DISTRIBUTED FILE SYSTEM.
- Creator
-
Zhang, Junyao, Wang, Jun, University of Central Florida
- Abstract / Description
-
Recent years have witnessed an increasing demand for super data clusters. These clusters have reached the petabyte scale and can consist of thousands or tens of thousands of storage nodes at a single site. For this architecture, reliability is a great concern, and achieving high reliability requires data recovery and node reconstruction. Although extensive research has investigated how to sustain high performance and high reliability in the case of node failures at large scale, the reverse lookup problem, namely finding the list of objects stored on a failed node, remains open. This is especially true for storage systems with strict data integrity and availability requirements, such as scientific research data clusters. Existing solutions are either time consuming or expensive. Meanwhile, replication-based block placement can be used to realize fast reverse lookup, but such schemes are designed for centralized, small-scale storage architectures. In this thesis, we propose a fast and efficient reverse lookup scheme named Group-based Shifted Declustering (G-SD), a layout that is able to locate the whole content of a failed node. G-SD extends our previous shifted declustering layout and applies to large-scale file systems. Our mathematical proofs and real-life experiments show that G-SD is a scalable reverse lookup scheme that is up to one order of magnitude faster than existing schemes.
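To illustrate the idea of recomputing placement instead of traversing metadata, here is a minimal sketch in Python. The modular layout and shift parameter below are illustrative assumptions, not the actual G-SD or Shifted Declustering algorithm.

```python
def place(obj_id, replica, num_nodes, shift=1):
    """Toy deterministic layout: replica r of object o lands on node (o + r*shift) mod N."""
    return (obj_id + replica * shift) % num_nodes

def reverse_lookup(failed_node, num_objects, num_nodes, replicas=3, shift=1):
    """List every (object, replica) pair hosted on failed_node by inverting place()
    directly, with no metadata traversal and no per-object scan."""
    found = []
    for r in range(replicas):
        first = (failed_node - r * shift) % num_nodes  # smallest matching object id
        found.extend((obj, r) for obj in range(first, num_objects, num_nodes))
    return found

# Example: 3-way replication of objects 0..99 over 10 nodes; list what node 4 held.
print(reverse_lookup(failed_node=4, num_objects=100, num_nodes=10))
```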
- Date Issued
- 2010
- Identifier
- CFE0003504, ucf:48970
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003504
- Title
- Model-Based Systems Engineering Approach to Distributed and Hybrid Simulation Systems.
- Creator
-
Pastrana, John, Rabelo, Luis, Lee, Gene, Elshennawy, Ahmad, Kincaid, John, University of Central Florida
- Abstract / Description
-
INCOSE defines Model-Based Systems Engineering (MBSE) as "the formalized application of modeling to support system requirements, design, analysis, verification, and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases." One very important development is the utilization of MBSE to develop distributed and hybrid (discrete-continuous) simulation modeling systems. MBSE can help to describe the systems to be modeled and help make the right decisions and partitions to tame complexity. The ability to embrace conceptual modeling and interoperability techniques during systems specification and design presents a great advantage in distributed and hybrid simulation systems development efforts. Our research is aimed at the definition of a methodological framework that uses MBSE languages, methods and tools for the development of these simulation systems. A model-based composition approach is defined at the initial steps to identify distributed systems interoperability requirements and hybrid simulation systems characteristics. Guidelines are developed to adopt simulation interoperability standards and conceptual modeling techniques using MBSE methods and tools. Domain specific system complexity and behavior can be captured with model-based approaches during the system architecture and functional design requirements definition. MBSE can allow simulation engineers to formally model different aspects of a problem ranging from architectures to corresponding behavioral analysis, to functional decompositions and user requirements (Jobe, 2008).
- Date Issued
- 2014
- Identifier
- CFE0005395, ucf:50464
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005395
- Title
- WATER QUALITY VARIATIONS DURING NITRIFICATION IN DRINKING WATER DISTRIBUTION SYSTEMS.
- Creator
-
Webb, David W, Taylor, James S., University of Central Florida
- Abstract / Description
-
This thesis documents the relationship among the major water quality parameters during a nitrification episode. Nitrification unexpectedly occurred in a chloraminated pilot drinking water distribution system operating with a 4.0 mg/L as Cl2 residual dosed at 4.5:1 Cl2:NH3-N. Surface, ground and sea water were treated and disinfected with monochloramines to produce finished water quality similar to regional utility water quality. PVC, galvanized, unlined cast iron and lined iron pipes were harvested from regional distribution systems and used to build eighteen pilot distribution systems (PDSs). The PDSs were operated at a 5-day hydraulic residence time (HRT) and ambient temperatures. As seasonal temperatures increased, the rate of monochloramine dissipation increased until effluent PDS residuals were zero. PDS effluent water quality parameters, including chloramine residual, dissolved oxygen, heterotrophic plate counts (HPCs), pH, alkalinity, and nitrogen species, were monitored and found to vary as expected from the stoichiometry associated with theoretical biological reactions, excepting alkalinity. Nitrification was confirmed in the PDSs. The occurrence in the PDSs was not isolated to any particular source water. Ammonia for nitrification came from degraded chloramines, which was common among all finished waters. Consistent with nitrification trends, dissolved oxygen consumption, ammonia consumption, and nitrite and nitrate production were clearly observed in the PDS bulk water quality profiles. Trends of pH and alkalinity were less apparent. To control nitrification, the residual was increased to 4.5 mg/L as Cl2 at a 5:1 Cl2:NH3-N dosing ratio, and the HRT was reduced from 5 to 2 days. Elimination of the nitrification episode was achieved after a 1-week free chlorine burn.
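For reference, the theoretical nitrification stoichiometry (energy reactions only, neglecting cell synthesis) against which the observed oxygen, ammonia, nitrite, and nitrate trends are typically compared:

```latex
\mathrm{NH_4^+} + \tfrac{3}{2}\,\mathrm{O_2} \;\xrightarrow{\text{AOB}}\; \mathrm{NO_2^-} + \mathrm{H_2O} + 2\,\mathrm{H^+}
\qquad
\mathrm{NO_2^-} + \tfrac{1}{2}\,\mathrm{O_2} \;\xrightarrow{\text{NOB}}\; \mathrm{NO_3^-}
```

Summed, NH4+ + 2 O2 -> NO3- + H2O + 2 H+; the acid produced is why a measurable alkalinity depression is normally expected, even though it was less apparent in these data.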
- Date Issued
- 2004
- Identifier
- CFE0000063, ucf:46118
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000063
- Title
- BIOSTABILITY IN DRINKING WATER DISTRIBUTION SYSTEMS: STUDY AT PILOT-SCALE.
- Creator
-
LE PUIL, Michael, Randall, Andrew A., University of Central Florida
- Abstract / Description
-
Biostability and related issues (e.g. nitrification) were investigated for 18 months in 18 pilot distribution systems, under various water quality scenarios. This study specifically investigated the impact of steady-state water changes on HPC levels in chlorinated and chloraminated distribution systems. Chlorination was more effective than chloramination in reducing HPC levels (1-2 log difference). There was a rapid increase in HPC corresponding to the change in steady-state water quality, which was observed in all PDSs. A modeling effort demonstrated that HPC levels reached a maximum within five days after the water quality change and returned to initial levels ten days after the change. Since alkalinity was used as a tracer of the steady-state water quality change, the time to reach maximum HPC was related to a mixing model using alkalinity as a surrogate, which confirmed the alkalinity transition was complete in approximately eight days. Biostability was assessed by HPC levels, since no coliforms were ever detected. It was observed that HPC levels would rise above four logs if the residual dropped below 0.1-0.2 mg/L as Cl2, which is below the regulatory minimum of 0.6 mg/L as Cl2. Therefore bacterial proliferation is more likely to be controlled in distribution systems as long as residual regulatory requirements are met. An empirical modeling effort showed that residual, pipe material and temperature were the most important parameters in controlling HPC levels in distribution systems, residual being the only parameter that can be practically used by utilities to control biological stability in their distribution systems. Use of less reactive (i.e. with less chlorine demand) pipes is recommended in order to prevent residual depletion and subsequent bacterial proliferation. This study also investigated biofilm growth simultaneously with suspended growth under a wide range of water quality scenarios and pipe materials. It was found that increasing the degree of treatment led to reduction of biofilm density, except for reverse osmosis treated groundwater, which exerted the highest biofilm density of all waters. Biofilm densities on corrodible, highly reactive materials (e.g. unlined cast iron and galvanized steel) were significantly greater than on PVC and lined cast iron. Biofilm modeling showed that attached bacteria were most affected by temperature and much less by HRT, bulk HPC and residual. The model predicts biofilms will always be active for environments common to drinking water distribution systems. As American utilities do not control biofilms with extensive and costly AOC reduction, they must maintain a strong residual to preserve biological integrity and stability in drinking water distribution systems. Nitrite and nitrate were considered the most suitable indicators for utilities to predict the onset of a nitrification episode in the distribution system bulk liquid. DO and ammonia were correlated to production of nitrite and nitrate and therefore could be related to nitrification. However, since ammonia and DO consumption can be caused by phenomena other than nitrification (e.g. oxidation by disinfectant to nitrite and reduction at the pipe wall, respectively), these parameters are not considered indicators of nitrification. Ammonia-Oxidizing Bacteria (AOB) densities in the bulk phase correlated well with nitrite and nitrate production, reinforcing the fact that nitrite and nitrate are good monitoring tools to predict nitrification.
Chloramine residual proved to be helpful in reducing nitrification in the bulk phase but had little effect on biofilm densities. As DO has been related to bacterial proliferation and nitrification, it can be a useful and inexpensive option for utilities in predicting biological instability, if monitored in conjunction with residual, nitrite and nitrate. Autotrophic (i.e. AOB) and heterotrophic (i.e. HPC) organisms were correlated in the bulk phase and biofilms.
- Date Issued
- 2004
- Identifier
- CFE0000111, ucf:46183
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000111
- Title
- Blockchain-Driven Secure and Transparent Audit Logs.
- Creator
-
Ahmad, Ashar, Mohaisen, David, Awad, Amro, Zhang, Wei, Posey, Clay, University of Central Florida
- Abstract / Description
-
In enterprise business applications, large volumes of data are generated daily, encoding business logic and transactions. Those applications are governed by various compliance requirements, making it essential to provide audit logs to store, track, and attribute data changes. In traditional audit log systems, logs are collected and stored in a centralized medium, making them prone to various forms of attacks and manipulations, including physical access and remote vulnerability exploitation attacks, and eventually allowing for unauthorized data modification, threatening the guarantees of audit logs. Moreover, such systems, given their centralized nature, are characterized by a single point of failure. To harden the security of audit logs in enterprise business applications, in this work we explore the design space of blockchain-driven secure and transparent audit logs. We highlight the possibility of ensuring stronger security and functional properties by a generic blockchain system for audit logs, realize this generic design through BlockAudit, which addresses both security and functional requirements, optimize BlockAudit through a multi-layered design in BlockTrail, and explore the design space further by assessing the functional and security properties of the consensus algorithms through comprehensive evaluations. The first component of this work is BlockAudit, a design blueprint that enumerates structural, functional, and security requirements for blockchain-based audit logs. BlockAudit uses a consensus-driven approach to replicate audit logs across multiple application peers to prevent a single point of failure. BlockAudit also uses the Practical Byzantine Fault Tolerance (PBFT) protocol to achieve consensus over the state of the audit log data. We evaluate the performance of BlockAudit using event-driven simulations, abstracted from IBM Hyperledger. Through the performance evaluation of BlockAudit, we pinpoint a need for high scalability and high throughput. We achieve those requirements by exploring various design optimizations to the flat structure of BlockAudit inspired by real-world application characteristics. Namely, enterprise business applications often operate across non-overlapping geographical hierarchies including cities, counties, states, and federations. Leveraging that, we applied a similar transformation to BlockAudit to fragment the flat blockchain system into layers of codependent hierarchies, capable of processing transactions in parallel. Our hierarchical design, called BlockTrail, reduced the storage and search complexity for blockchains substantially while increasing the throughput and scalability of the audit log system. We prototyped BlockTrail on a custom-built blockchain simulator and analyzed its performance under varying transactions and network sizes, demonstrating its advantages over BlockAudit. A recurring limitation in both BlockAudit and BlockTrail is the use of the PBFT consensus protocol, which has high complexity and low scalability. Moreover, the performance of our proposed designs was only evaluated in computer simulations, which sidestepped the complexities of a real-world blockchain system. To address those shortcomings, we created a generic cloud-based blockchain testbed capable of executing five well-known consensus algorithms: Proof-of-Work, Proof-of-Stake, Proof-of-Elapsed-Time, Clique, and PBFT.
For each consensus protocol, we instrumented our auditing system with various benchmarks to measure the latency, throughput, and scalability, highlighting the trade-off between the different protocols.
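As a concrete, deliberately simplified illustration of the tamper-evidence property such audit logs rely on, the sketch below hash-chains log entries; it does not model BlockAudit's peer replication or PBFT ordering.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class LogEntry:
    index: int
    payload: dict
    prev_hash: str
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        """Hash covers the entry body and the previous entry's hash."""
        body = json.dumps([self.index, self.payload, self.prev_hash, self.timestamp],
                          sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries: list[LogEntry] = []

    def append(self, payload: dict) -> LogEntry:
        prev = self.entries[-1].digest() if self.entries else "0" * 64
        entry = LogEntry(len(self.entries), payload, prev)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Any in-place tampering with an earlier entry breaks the hash chain."""
        prev = "0" * 64
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False
            prev = entry.digest()
        return True
```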
- Date Issued
- 2019
- Identifier
- CFE0007773, ucf:52375
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007773
- Title
- Research on Improving Reliability, Energy Efficiency and Scalability in Distributed and Parallel File Systems.
- Creator
-
Zhang, Junyao, Wang, Jun, Zhang, Shaojie, Lee, Jooheung, University of Central Florida
- Abstract / Description
-
With the increasing popularity of cloud computing and "Big Data" applications, current data centers are often required to manage petabytes or exabytes of data. To store this huge amount of data, thousands or tens of thousands of storage nodes are required at a single site. This imposes three major challenges for storage system designers: (1) Reliability---node failure in these data centers is a normal occurrence rather than a rare situation, which makes data reliability a great concern. (2) Energy efficiency---a data center can consume up to 100 times more energy than a standard office building, and more than 10% of this consumption can be attributed to storage systems; thus, reducing the energy consumption of the storage system is key to reducing the overall consumption of the data center. (3) Scalability---with the continuously increasing size of data, maintaining the scalability of the storage systems is essential. That is, the expansion of the storage system should be completed efficiently and without limitations on the total number of storage nodes or on performance. This thesis proposes three ways to improve these three key features of current large-scale storage systems. Firstly, we define the problem of "reverse lookup", namely finding the list of objects (blocks) for a failed node. As the first step of failure recovery, this process is directly related to the recovery/reconstruction time. Existing solutions use metadata traversal or data distribution reversing methods for reverse lookup, which are either time consuming or expensive, whereas a deterministic block placement can achieve fast and efficient reverse lookup. However, deterministic placement solutions are designed for centralized, small-scale storage architectures such as RAID. Due to their lack of scalability, they cannot be directly applied in large-scale storage systems. We therefore propose Group-Shifted Declustering (G-SD), a deterministic data layout for multi-way replication. G-SD addresses the scalability issue of our previous Shifted Declustering layout and supports fast and efficient reverse lookup. Secondly, we ask how to balance performance, energy, and recovery in degradation mode for an energy-efficient storage system. While extensive research has proposed trading off performance for energy efficiency in normal mode, the system enters degradation mode when node failure occurs, in which node reconstruction is initiated. This very process requires a number of disks to be spun up and a substantial amount of I/O bandwidth, which compromises not only energy efficiency but also performance. Without considering the I/O bandwidth contention between recovery and performance, current energy-proportional solutions cannot answer this question accurately. This thesis presents PERP, a mathematical model that minimizes the energy consumption of a storage system with respect to performance and recovery. PERP answers this question by providing the accurate number of nodes and the assigned recovery bandwidth at each time frame. Thirdly, current distributed file systems such as the Google File System (GFS) and the Hadoop Distributed File System (HDFS) employ a pseudo-random method for replica distribution and a centralized lookup table (block map) to record all replica locations. This lookup table requires a large amount of memory and consumes a considerable amount of CPU/network resources on the metadata server.
With the booming size of "Big Data", the metadata server becomes a scalability and performance bottleneck. While current approaches such as HDFS Federation attempt to "horizontally" extend scalability by allowing multiple metadata servers, we believe a more promising optimization is to "vertically" scale up each metadata server. We propose Deister, a novel block management scheme that builds on top of a deterministic declustering distribution method, Intersected Shifted Declustering (ISD). Thus both replica distribution and location lookup can be achieved without a centralized lookup table.
- Date Issued
- 2015
- Identifier
- CFE0006238, ucf:51082
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006238
- Title
- Exploring new boundaries in team cognition: Integrating knowledge in distributed teams.
- Creator
-
Zajac, Stephanie, Salas, Eduardo, Bowers, Clint, Burke, Shawn, University of Central Florida
- Abstract / Description
-
Distributed teams continue to emerge in response to the complex organizational environments brought about by globalization, technological advancements, and the shift toward a knowledge-based economy. These teams are composed of members who hold the disparate knowledge necessary to take on cognitively demanding tasks. However, knowledge coordination between team members who are not co-located is a significant challenge, often resulting in process loss and decrements to the effectiveness of team-level knowledge structures. The current effort explores the configuration dimension of distributed teams, and specifically how subgroup formation based on geographic location may impact the effectiveness of a team's transactive memory system and subsequent team process. In addition, the role of task cohesion as a buffer to negative intergroup interaction is explored.
- Date Issued
- 2014
- Identifier
- CFE0005449, ucf:50393
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005449
- Title
- A Multiagent Q-learning-based Restoration Algorithm for Resilient Distribution System Operation.
- Creator
-
Hong, Jungseok, Sun, Wei, Zhou, Qun, Zheng, Qipeng, University of Central Florida
- Abstract / Description
-
Natural disasters, human errors, and technical issues have caused disastrous blackouts in power systems and resulted in enormous economic losses. Moreover, distributed energy resources have been integrated into distribution systems, which brings extra uncertainty and challenges to system restoration. Therefore, the restoration of power distribution systems requires more efficient and effective methods to provide resilient operation. In the literature, approaches that use Q-learning and multiagent systems (MAS) to restore power systems are limited in real-system application because they do not consider power system operation constraints. In order to adapt to system condition changes quickly, a restoration algorithm using Q-learning and MAS, together with a combination method and a battery algorithm, is proposed in this study. The developed algorithm considers voltage and current constraints while finding a system switching configuration that maximizes the load pick-up after faults happen in the given system. The algorithm consists of three parts. First, it finds switching configurations using Q-learning. Second, the combination algorithm works as a back-up plan in case the solution from Q-learning violates system constraints. Third, the battery algorithm is applied to determine the charging or discharging schedule of battery systems. The obtained switching configuration provides restoration solutions without violating system constraints. Furthermore, the algorithm can adjust switching configurations after the restoration. For example, when renewable output changes, the algorithm provides an adjusted solution to avoid violating system constraints. The proposed algorithm has been tested on the modified IEEE 9-bus system using a real-time digital simulator. Simulation results demonstrate that the algorithm offers an efficient and effective restoration strategy for resilient distribution system operation.
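For readers unfamiliar with the learning rule involved, this is a minimal tabular Q-learning update sketched in Python; the state/action encoding, reward, and constraint check are illustrative assumptions, not the thesis's multiagent formulation.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2     # learning rate, discount factor, exploration rate
Q = defaultdict(float)                     # Q[(state, action)] -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy selection over candidate switching actions."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    """Standard Q-learning update: move Q toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in next_actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def reward_fn(restored_load_kw, constraints_ok):
    """Hypothetical reward: restored load, heavily penalized on a constraint violation."""
    return restored_load_kw if constraints_ok else -1000.0
```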
- Date Issued
- 2017
- Identifier
- CFE0006746, ucf:51856
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006746
- Title
- MODELING FREE CHLORINE AND CHLORAMINE DECAY IN A PILOT DISTRIBUTION SYSTEM.
- Creator
-
Arevalo, Jorge, Taylor, James, University of Central Florida
- Abstract / Description
-
The purpose of this study was to identify the effect that water quality, pipe material, pipe size, flow conditions and the use of corrosion inhibitors have on the rate of free chlorine and chloramine decay in distribution systems. Empirical models were developed to predict the disinfectant residual concentration over time based on the parameters that affected it. Different water treatment processes were used to treat groundwater and surface water to obtain 7 types of finished waters with a wide range of water quality characteristics. The groundwater was treated by conventional treatment with aeration (G1), softening (G2), or high-pressure reverse osmosis (RO), and the surface water was treated either by enhanced coagulation, ozonation and GAC filtration (CSF-O3-GAC or S1) or by an integrated membrane system (CSF-NF or S2). The remaining two water types were obtained by treating a blend of G1, S1 and RO by softening (G3) and nanofiltration (G4). Pilot distribution systems (PDSs) consisting of eighteen (18) lines were built using old pipes obtained from existing distribution systems. The pipe materials used were polyvinyl chloride (PVC), lined cast iron (LCI), unlined cast iron (UCI) and galvanized steel (G). During the first stage of the study, the 7 types of water were blended and fed to the PDSs to study the effect of feed water quality changes on PDS effluent water quality, and specifically disinfectant residual. Both free chlorine and chloramines were used as disinfectants, and the PDSs were operated at hydraulic retention times (HRT) of 2 and 5 days. The PDSs were periodically tested for free and combined chlorine, organic content, temperature, pH, turbidity and color. The data obtained were used to develop separate models for free chlorine and chloramines. The best-fit model was a first-order kinetic model with respect to initial disinfectant concentration that depends on the pipe material, pipe diameter and the organic content and temperature of the water. Turbidity, color and pH were found to be not significant for the range of values observed. The models contain two decay constants: the first constant (KB) accounts for the decay due to reactions in the bulk liquid and is affected by the organics and temperature, while the second constant, KW, represents the reactions at the pipe wall and is affected by the temperature of the water and the pipe material and diameter. The rate of free chlorine and chloramine decay was found to be highly affected by the pipe material; the decay was faster in unlined metallic pipes (UCI and G) and slower in the synthetic (PVC) and lined pipes (LCI). The models showed that the rate of disinfectant residual loss increases with increasing temperature or organic content of the water irrespective of pipe material. During the second part of the study, corrosion control inhibitors were added to a blend of S1, G1 and RO that fed all the hybrid PDSs. The inhibitors used were orthophosphate, blended ortho-polyphosphate, zinc orthophosphate and sodium silicate. Three PDSs were used for each inhibitor type, for a total of 12 PDSs, to study the effect of low, medium and high doses on water quality. Two PDSs were used as controls, fed with the blend without any inhibitor addition. The control PDSs were used to observe the effect of pH control on water quality and compare it to inhibitor use.
One of the control PDSs (PDS 13) had its pH adjusted to equal the saturation pH with respect to calcium carbonate precipitation (pHs), while the pH of the other control PDS (PDS 14) was adjusted to 0.3 pH units above pHs. The disinfectant used for this part of the study was chloramine, and the flow rates were set to obtain an HRT of 2 days. The chloramine demand was the same for PDS 14 and all the PDSs receiving inhibitors. PDS 13 had a chloramine demand greater than any other PDS. The lowest chloramine demand was observed in PDS 12, which received the silicate inhibitor at a dose of 12 mg/L and presented the highest pH. Elevation of the pH of the water appears to reduce the rate of chloramine decay, while the use of corrosion inhibitors had no effect on the rate of chloramine decay. The PDSs were monitored for chloramine residual, temperature, pH, phosphate, reactive silica, and organic content. Empirical models were developed for the dissipation of chloramine in the pilot distribution systems as a function of time, pipe material, pipe diameter and water quality. Terms accounting for the effect of pH and the type and dose of corrosion inhibitor were included in the model. The use of phosphate-based or silica-based corrosion inhibitors was found to have no effect on the rate of chloramine dissipation in any of the pipe materials. Only the increase of pH was found to decrease the rate of chloramine decay. The model that best described the decay of chloramine in the pilot distribution systems was a first-order kinetic model containing separate rate constants for the bulk reactions, the pH effect and the pipe wall reactions. The rate of chloramine decay was dependent on the material and diameter of the pipe, and the temperature, pH and organic content of the water. The rate of chloramine decay was low for PVC and LCI, and more elevated in UCI and G pipes. Small-diameter pipes and higher temperatures increase the rate of decay irrespective of pipe material. Additional experiments were conducted to evaluate the effect of flow velocity on chloramine decay in a pilot distribution system (PDS) for different pipe materials and water qualities. The experiments were done using the single-material lines, and the flow velocity of the water was varied to obtain Reynolds numbers from 50 to 8000. A subset of experiments included the addition of a blended orthophosphate corrosion inhibitor (BOP) at a dose of 1.0 mg/L as P to evaluate the effect of the inhibitor on chloramine decay. The effect of Reynolds number on the overall chloramine decay rate (K) and the wall decay rate constant (W) was assessed for PVC, LCI, UCI, and G pipes. PVC and LCI showed no change in the rate of chloramine decay at any flow velocity. UCI and G pipes showed a rapid increase in the wall decay rate under laminar conditions (Re < 500) followed by a more gradual increase under fully turbulent flow conditions (Re > 2000). The use of the BOP inhibitor did not have an effect on the rate of chloramine decay for any of the pipe materials studied. Linear correlations were developed to adjust the rate of chloramine decay at the pipe wall for UCI and G depending on the Reynolds number.
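Written compactly, the first-order form described above takes the following shape (a sketch of the general structure; the fitted models express KB and KW as functions of the listed water quality and pipe parameters):

```latex
\frac{dC}{dt} = -\left(K_B + K_W\right)C
\qquad\Longrightarrow\qquad
C(t) = C_0\,e^{-\left(K_B + K_W\right)t}
```

Here C is the free chlorine or chloramine residual, C0 the concentration entering the pipe, KB the bulk decay constant (driven by organic content and temperature), and KW the wall decay constant (driven by pipe material, diameter, and temperature).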
- Date Issued
- 2007
- Identifier
- CFE0001863, ucf:47400
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001863
- Title
- BIOSTABILITY IN DRINKING WATER DISTRIBUTION SYSTEMS IN A CHANGING WATER QUALITY ENVIRONMENT USING CORROSION INHIBITORS.
- Creator
-
Zhao, Bingjie, Randall, Andrew, University of Central Florida
- Abstract / Description
-
In this study, the bacterial growth dynamics of 14 pilot drinking water distribution systems were studied in order to observe water quality changes due to corrosion inhibitor addition. Empirical models were developed to quantify the effect of inhibitor type and dose on bacterial growth (biofilm and bulk water). Water and pipe coupon samples were taken and examined during the experiments. The coupons were exposed to drinking water at approximately 20 °C for at least 5 weeks to allow the formation of a measurable quasi-steady-state biofilm. Bulk water samples were taken every week. In this study, two simple but practical empirical models were created. Sensitivity analysis for the bulk HPC model (for all 14 of the PDSs) showed that maintaining a chloramine residual at 2.6 mg/L instead of 1.1 mg/L would decrease bulk HPC by anywhere from 0.5 to 0.9 log, which was greater than the increase in bulk HPC from inhibitor addition of 0.31 to 0.42 log for Si- and P-based inhibitors respectively. This means that maintaining higher residual levels can counteract the relatively modest increases due to inhibitors. Biofilm HPC was affected by pipe material, effluent residual and temperature in addition to a small increase due to inhibitor addition. Biofilm density was most affected by material type, with polyvinyl chloride (PVC) biofilm density consistently much lower than other materials (0.66, 0.92, and 1.22 log lower than lined cast iron (LCI), unlined cast iron (UCI), and galvanized steel (G), respectively). Temperature had a significant effect on both biofilm and bulk HPC levels, but it is not practical to alter temperature for public drinking water distribution systems, so temperature is not a management tool like residual. This study evaluated the effects of four different corrosion inhibitors (based on either phosphate or silica) on drinking water distribution system biofilms and bulk water HPC levels. Four different pipe materials were used in the pilot-scale experiments: polyvinyl chloride (PVC), lined cast iron (LCI), unlined cast iron (UCI), and galvanized steel (G). Three kinds of phosphate-based and one silica-based corrosion inhibitors were added at concentrations typically applied in a drinking water distribution system for corrosion control. The data showed that there was a statistically significant increase of 0.34 log in biofilm bacterial densities (measured as HPC) with the addition of any of the phosphate-based inhibitors (ortho-phosphorus, blended ortho-poly-phosphate, and zinc ortho-phosphate). The silica-based inhibitor resulted in an increase of 0.36 log. The biological data also showed that there was a statistically significant increase in bulk water bacterial densities (measured as heterotrophic plate counts, HPC) with the addition of any of the four inhibitors. For bulk HPC this increase was relatively small, being 15.4% (0.42 log) when using phosphate-based inhibitors and 11.0% (0.31 log) for the silica-based inhibitor. Experiments with PDS influent spiked with phosphate salts, phosphate-based inhibitors, and the silicate inhibitor showed that the growth response of P17 and NOx in the AOC test was increased by addition of these inorganic compounds. For this source water and these PDSs there was more than one limiting nutrient. In addition to organic compounds, phosphorus was identified as a nutrient stimulating growth, and there was also an unidentified nutrient in the silica-based inhibitor.
However, since the percentage increases due to inhibitors were no greater than 15%, it is unlikely that this change would be significant for bulk water microbial quality. In addition, it was shown that increasing the chloramine residual could offset any additional growth and that the inhibitors could help compliance with the lead and copper rule. However, corrosion inhibitors might result in an increase in monitoring and maintenance requirements, particularly in dead ends, reaches with long HRTs, and possibly storage facilities. In addition, it is unknown what the effect of corrosion inhibitors is on the growth of coliform bacteria and opportunistic pathogens relative to ordinary heterotrophs. A method was developed to monitor precision for heterotrophic plate counts (HPC) using both blind duplicates and lab replicates as part of a project looking at pilot drinking water distribution systems. Precision control charts were used to monitor for changes in assay variability with time, just as they are used for chemical assays. In adapting these control charts for the HPC assay, it was determined that only plate counts ≥ 30 cfu per plate could be used for Quality Assurance (QA) purposes. In addition, four dilutions were used for all known Quality Control (QC) samples to ensure counts usable for QC purposes would be obtained. As a result there was a 50% increase in the required labor for a given number of samples when blind duplicates and lab replicates were run in parallel with the samples. For bulk water HPCs the distributions of the duplicate and replicate data were found to be significantly different, and separate control charts were used. A probability-based analysis for setting the warning limit (WL) and control limit (CL) was compared with the method following National Institute of Standards and Technology (NIST) guidelines.
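As a generic illustration of the precision-chart idea (a sketch only: the +2s warning and +3s control limits below are common Shewhart conventions assumed here, not necessarily the probability-based limits the study derived):

```python
import math

def control_limits(log_differences):
    """Precision chart limits from |log10(count A) - log10(count B)| of known QC pairs."""
    n = len(log_differences)
    mean = sum(log_differences) / n
    s = math.sqrt(sum((d - mean) ** 2 for d in log_differences) / (n - 1))
    return {"center": mean,
            "warning_limit": mean + 2 * s,   # assumed +2s convention
            "control_limit": mean + 3 * s}   # assumed +3s convention

# A new duplicate or replicate pair is flagged when its |log difference| exceeds
# the warning limit, and investigated when it exceeds the control limit.
```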
- Date Issued
- 2007
- Identifier
- CFE0001947, ucf:47452
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001947
- Title
- NITRIFICATION INVESTIGATION AND MODELING IN THE CHLORAMINATED DRINKING WATER DISTRIBUTION SYSTEM.
- Creator
-
Liu, Suibing, Taylor, James, University of Central Florida
- Abstract / Description
-
This dissertation consists of five papers concerning nitrification in chloraminated drinking water distribution systems, based on a one-and-a-half-year field study. Seven finished waters were produced from different treatment processes and distributed to eighteen pilot distribution systems (PDSs) that were made from pipes taken from actual distribution systems. Unlined cast iron (UCI), galvanized steel (G), lined cast iron (LCI), and PVC pipes were used to build the PDSs. All finished waters were stabilized and chloraminated before entering the PDSs. The dissertation consists of five major parts. (1) System variations of nitrates, nitrites, DO, pH, alkalinity, temperature, chloramine residuals and hydraulic residence times (HRT) during biological nitrification are interrelated and discussed relative to nitrification, and demonstrated the stoichiometric relationships associated with conventional biochemical nitrification reactions. Ammonia is always released when chloramines are used for residual maintenance in drinking water distribution systems, which practically ensures the occurrence of biological nitrification to some degree. Biological nitrification was initiated by a loss of chloramine residual brought about by increasing temperatures at a five-day HRT, which was accompanied by DO loss and slightly decreased pH. Ammonia increased due to chloramine decomposition and then decreased as nitrification began. Nitrites and nitrates increased initially with time after the chloramine residual was lost but decreased if denitrification began. Dissolved oxygen limited nitrifier growth and nitrification. No significant alkalinity variation was observed during nitrification. Residual and nitrites are key parameters for monitoring nitrification in drinking water distribution systems. (2) Using Monod kinetics, a steady-state plug-flow kinetic model was developed to describe the variations of ammonia, nitrite and nitrate-N concentrations in a chloraminated distribution system. Active AOB and NOB biomass in the distribution system was determined using predictive equations within the model. The kinetic model used numerical analysis and was implemented in C to predict ammonia, nitrite and nitrate variations. (3) Nitrification control strategies were investigated during an unexpected episode and a controlled study in the field. Once nitrification began, increasing the chloramine dose from 4.0 to 4.5 mg/L as Cl2 and the Cl2:N ratio from 4:1 to 5:1 did not stop nitrification. Nitrification was significantly reduced, but not stopped, when the distribution system hydraulic retention time was decreased from 5 to 2 days. A free chlorine burn for one week at 5 mg/L Cl2 stopped nitrification. In a controlled nitrification study, nitrification increased with increasing free ammonia and with Cl2:N ratios less than 5. Flushing with increased chloramine concentration reduced nitrification, but varying flush frequency from 1 to 2 weeks had no effect on nitrification. (4) HPC variations in a chloraminated drinking water distribution system were investigated. Results showed that average residual and temperature were the only water quality variables shown to affect HPC change when the distribution system hydraulic residence time was five days. Once nitrification began, HPC change was correlated to HRT, average residual and generated nitrite-N in the distribution system. (5) Biostability was assessed for water treatment processes and distribution system pipes by AOCs, BDOCs, and HPCs of the bulk water, and by PEPAs of the attached biofilms.
All membrane finished waters were more likely to be biologically stable as indicated by lower AOCs. RO produced the lowest AOC. The order of biofilm growth by pipe material was UCI > G > LCI > PVC. Biostability decreased as temperature increased.
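The plug-flow Monod formulation referenced in part (2) generally takes the following form (a generic sketch; the dissertation's specific rate expressions, parameters, and coupling to chloramine decay may differ):

```latex
\mu = \mu_{\max}\,\frac{S}{K_s + S},
\qquad
\frac{dS}{d\tau} = -\frac{\mu_{\max}}{Y}\,\frac{S}{K_s + S}\,X,
\qquad
\frac{dX}{d\tau} = \left(\mu_{\max}\,\frac{S}{K_s + S} - k_d\right)X
```

Here S is the substrate (ammonia-N for AOB, nitrite-N for NOB), X the active nitrifier biomass, tau the residence time along the pipe, Y the yield coefficient, and k_d the endogenous decay rate.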
- Date Issued
- 2004
- Identifier
- CFE0000039, ucf:46151
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000039
- Title
- DYNAMIC SHARED STATE MAINTENANCE IN DISTRIBUTED VIRTUAL ENVIRONMENTS.
- Creator
-
Hamza-Lup, Felix George, Hughes, Charles, University of Central Florida
- Abstract / Description
-
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. In a distributed interactive VE the dynamic shared state represents the changing information that multiple machines must maintain about the shared virtual components. One of the challenges in such environments is maintaining a consistent view of the dynamic shared state in the presence of inevitable network latency and jitter. A consistent view of the shared scene will significantly increase the sense of presence among participants and facilitate their interactive collaboration. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. A review of the literature illustrates that the techniques for consistency maintenance in distributed Virtual Reality (VR) environments can be roughly grouped into three categories: centralized information management, prediction through dead reckoning algorithms, and frequent state regeneration. Additional resource management methods can be applied across these techniques for shared state consistency improvement. Some of these techniques are related to the systems infrastructure, others to the human nature of the participants (e.g., human perceptual limitations, area of interest management, and visual and temporal perception). An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring human participant interaction into the loop through a wide range of electronic motion sensors and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory, including 3D visualization applications using custom-built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems, and in light of current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system's real-time behavior and scalability.
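Of the three technique families mentioned above, dead reckoning is easy to sketch: each site extrapolates a remote entity's state from its last report and only applies or requests a correction when the prediction error grows too large. The 1-D constant-velocity state and error threshold below are illustrative assumptions, not the thesis's adaptive synchronization algorithm.

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    position: float
    velocity: float
    timestamp: float

def extrapolate(last: EntityState, now: float) -> float:
    """Predicted position assuming constant velocity since the last reported state."""
    return last.position + last.velocity * (now - last.timestamp)

def needs_update(true_position: float, last: EntityState, now: float,
                 threshold: float = 0.05) -> bool:
    """The owning site sends a new state only when the prediction error is large."""
    return abs(true_position - extrapolate(last, now)) > threshold
```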
- Date Issued
- 2004
- Identifier
- CFE0000096, ucf:46152
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000096
- Title
- PLANNING AND SCHEDULING FOR LARGE-SCALE DISTRIBUTED SYSTEMS.
- Creator
-
Yu, Han, Marinescu, Dan, University of Central Florida
- Abstract / Description
-
Many applications require computing resources well beyond those available on any single system. Simulations of atomic and subatomic systems with applications to materials science, computations related to the study of natural sciences, and computer-aided design are examples of applications that can benefit from the resource-rich environment provided by a large collection of autonomous systems interconnected by high-speed networks. To transform such a collection of systems into a user's virtual machine, we have to develop new algorithms for coordination, planning, scheduling, resource discovery, and other functions that can be automated. We can then develop societal services based upon these algorithms, which hide the complexity of the computing system from users. In this dissertation, we address the problem of planning and scheduling for large-scale distributed systems. We discuss a model of the system, analyze the need for planning, scheduling, and plan switching to cope with a dynamically changing environment, present algorithms for the three functions, report simulation results to study the performance of the algorithms, and introduce an architecture for an intelligent large-scale distributed system.
- Date Issued
- 2005
- Identifier
- CFE0000781, ucf:46595
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000781
- Title
- COORDINATION, MATCHMAKING, AND RESOURCE ALLOCATION FOR LARGE-SCALE DISTRIBUTED SYSTEMS.
- Creator
-
Bai, Xin, Marinescu, Dan, University of Central Florida
- Abstract / Description
-
While existing grid environments cater to the specific needs of a particular user community, we need to go beyond them and consider general-purpose large-scale distributed systems consisting of large collections of heterogeneous computers and communication systems shared by a large user population with very diverse requirements. Coordination, matchmaking, and resource allocation are among the essential functions of large-scale distributed systems. Although deterministic approaches for coordination, matchmaking, and resource allocation have been well studied, they are not suitable for large-scale distributed systems due to the scale, autonomy, and dynamics of these systems. We therefore have to seek nondeterministic solutions for large-scale distributed systems. In this dissertation we describe our work on a coordination service, a matchmaking service, and a macro-economic resource allocation model for large-scale distributed systems. The coordination service coordinates the execution of complex tasks in a dynamic environment, the matchmaking service supports finding the appropriate resources for users, and the macro-economic resource allocation model allows a broker to mediate between resource providers, who want to maximize their revenues, and resource consumers, who want to get the best resources at the lowest possible price, subject to global objectives such as maximizing the resource utilization of the system.
- Date Issued
- 2006
- Identifier
- CFE0001172, ucf:46845
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001172
- Title
- MEASURING AND IMPROVING INTERNET VIDEO QUALITY OF EXPERIENCE.
- Creator
-
Iyengar, Mukundan, Chatterjee, Mainak, University of Central Florida
- Abstract / Description
-
Streaming multimedia content over the IP network is poised to be the dominant Internet traffic for the coming decade, predicted to account for more than 91% of all consumer traffic in the coming years. Streaming multimedia content ranges from Internet television (IPTV), video on demand (VoD), and peer-to-peer streaming to 3D television over IP, to name a few. Widespread acceptance, growth, and subscriber retention are contingent upon network providers assuring superior Quality of Experience (QoE) on top of today's Internet. This work presents the first empirical understanding of the Internet's video-QoE capabilities, and tools and protocols to efficiently infer and improve them. To infer video-QoE at arbitrary nodes in the Internet, we design and implement MintMOS: a lightweight, real-time, no-reference framework for capturing perceptual quality. We demonstrate that MintMOS's projections closely match subjective surveys in assessing perceptual quality. We use MintMOS to characterize Internet video-QoE both at the link level and at the end-to-end path level. As an input to our study, we use extensive measurements from a large number of Internet paths obtained from various measurement overlays deployed using PlanetLab. Link-level degradations of intra- and inter-ISP Internet links are studied to create an empirical understanding of their shortcomings and ways to overcome them. Our studies show that intra-ISP links are often poorly engineered compared to peering links, and that degradations are induced by transient network load imbalance within an ISP. Initial results also indicate that overlay networks could be a promising way to avoid such ISPs in times of degradation. A large number of end-to-end Internet paths are probed and we measure delay, jitter, and loss rates. The measurement data is analyzed offline to identify ways to enable a source to select alternate paths in an overlay network to improve video-QoE, without the need for background monitoring or a priori knowledge of path characteristics. We establish that for any unstructured overlay of N nodes, it is sufficient to reroute key frames using a random subset of k nodes in the overlay, where k is bounded by O(ln N). We analyze various properties of such random subsets to derive a simple, scalable, and efficient path selection strategy that results in a k-fold increase in path options for any source-destination pair; options that consistently outperform Internet path selection. Finally, we design a prototype called source initiated frame restoration (SIFR) that employs random subsets to derive alternate paths and demonstrate its effectiveness in improving Internet video-QoE.
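The random-subset idea can be sketched in a few lines of Python: draw k = O(ln N) candidate relays and route key frames over the best probed one-hop alternate path. The constant c and the probe function are placeholders for illustration, not parameters from the dissertation.

```python
import math
import random

def pick_relays(nodes, c=2.0):
    """Choose k = ceil(c * ln N) candidate relay nodes uniformly at random."""
    if not nodes:
        return []
    k = max(1, math.ceil(c * math.log(len(nodes))))
    return random.sample(nodes, min(k, len(nodes)))

def best_alternate_path(source, destination, nodes, probe):
    """probe(a, b) -> estimated segment quality (higher is better), e.g. a MOS-like score.
    A one-hop overlay path is rated by the weaker of its two segments."""
    candidates = pick_relays([n for n in nodes if n not in (source, destination)])
    if not candidates:
        return None
    return max(candidates,
               key=lambda relay: min(probe(source, relay), probe(relay, destination)))
```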
- Date Issued
- 2011
- Identifier
- CFE0004012, ucf:49168
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004012
- Title
- Harmony Oriented Architecture.
- Creator
-
Martin, Kyle, Hua, Kien, Wu, Annie, Heinrich, Mark, University of Central Florida
- Abstract / Description
-
This thesis presents Harmony Oriented Architecture (HOA): a novel architectural paradigm that applies the principles of Harmony Oriented Programming (HOP) to the architecture of scalable and evolvable distributed systems. It is motivated by research on Ultra Large Scale systems, which has revealed inherent limitations in the human ability to design large-scale software systems, limitations that can only be overcome through radical alternatives to traditional object-oriented software engineering practice that simplify the construction of highly scalable and evolvable systems. HOP eschews encapsulation and information hiding, the core principles of object-oriented design, in favor of exposure and information sharing through a spatial abstraction. This helps to avoid the brittle interface dependencies that impede the evolution of object-oriented software. HOA extends these concepts to distributed systems, resulting in an architecture in which application components are represented by objects in a spatial database and executed in strict isolation using an embedded application server. Application components store their state entirely in the database and interact solely by diffusing data into a space for proximate components to observe (a toy sketch of this interaction style follows this record). This architecture provides a high degree of decoupling, isolation, and state exposure, allowing highly scalable and evolvable applications to be built. A proof-of-concept prototype of a non-distributed HOA middleware platform supporting JavaScript application components is implemented and evaluated. Results show remarkably good performance considering that little effort was made to optimize the implementation.
- Date Issued
- 2011
- Identifier
- CFE0004480, ucf:49298
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004480
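The diffuse-and-observe interaction style described in the abstract can be pictured with a toy sketch. The Space class and the diffuse/observe names below are hypothetical and greatly simplified; they are not the JavaScript middleware prototype evaluated in the thesis.

```python
class Space:
    """Toy spatial blackboard: components diffuse data at a position;
    proximate components observe whatever lies within their radius."""

    def __init__(self):
        self._items = []  # list of (position, key, value)

    def diffuse(self, position, key, value):
        # A component shares state by writing into the space
        # instead of calling another component directly.
        self._items.append((position, key, value))

    def observe(self, position, radius):
        # A component sees only data diffused near its own position.
        return {k: v for p, k, v in self._items if abs(p - position) <= radius}

# Hypothetical usage: two isolated components coupled only through the space.
space = Space()
space.diffuse(position=0.0, key="sensor/temp", value=21.5)
print(space.observe(position=0.5, radius=1.0))   # {'sensor/temp': 21.5}
print(space.observe(position=5.0, radius=1.0))   # {}
```

The sketch illustrates the decoupling claim: neither component holds a reference to the other, so either can be replaced or evolved without breaking an interface contract.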
- Title
- EFFECT OF SOURCE WATER BLENDING ON COPPER RELEASE IN PIPE DISTRIBUTION SYSTEM: THERMODYNAMIC AND EMPIRICAL MODELS.
- Creator
-
Xiao, Weizhong, Taylor, James S., University of Central Florida
- Abstract / Description
-
This dissertation focuses on copper release in drinking water. Cu and Fe corrosion as a function of process water quality was assessed qualitatively and quantitatively over one year in a field study using finished waters produced from seven different treatment processes and eighteen pilot distribution systems (PDSs) made from unlined cast iron and galvanized steel pipes and from lined cement and PVC pipes taken from actual distribution systems. In total, seven different waters were studied, built from three source waters: groundwater, surface water, and simulated brackish water, designated G1, S1, and RO. Using pre-established blending ratios, these three waters were blended to form another three waters, designated G2, G3, and G4. Enhanced surface water treatment consisted of CFS, ozonation, and GAC filtration, designated S1; the CFS surface water was also nanofiltered, designated S2. All seven finished waters were stabilized and chloraminated before entering the PDSs. Corrosion potential was compared qualitatively and quantitatively for all seven waters by monitoring copper and iron release from the PDSs. The dissertation consists of four major parts. (1) Copper corrosion surface characterization, in which the solid corrosion products formed over a period of exposure to drinking water were identified using several surface analysis techniques. Surface characterization indicated that the major corrosion products consist of cuprite (Cu2O) as the main underlying corrosion layer, with tenorite (CuO) and cupric hydroxide (Cu(OH)2) on the top surface. Assuming a dissolution/precipitation mechanism controls the copper concentration in the bulk solution, a cupric hydroxide thermodynamic model was developed (an illustrative solubility relation follows this record). (2) Theoretical thermodynamic models were developed to quantitatively predict copper release based on the controlling solid phases identified in part (1); these models are compared to actual data, and a relative assessment of the controlling solid phases is made. (3) Non-linear and linear regression models were developed that relate total copper release to varying water quality. These models were verified using independent data and provide a proactive means of assessing and controlling copper release in a varying water quality environment. (4) Simulation of total copper release was conducted using all possible combinations of water quality produced by blending finished waters from ground, surface, and saline sources, which involved comparing copper corrosion potentials among reverse osmosis, nanofiltration, enhanced coagulation, lime softening, and conventional drinking water treatment.
- Date Issued
- 2004
- Identifier
- CFE0000042, ucf:46069
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000042
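As a rough illustration of the kind of solubility relationship a cupric hydroxide thermodynamic model rests on, the generic textbook equilibrium below can be written; it is not the calibrated model developed in the dissertation, and the constants are ordinary tabulated values rather than fitted ones.

```latex
% Illustrative only: generic Cu(OH)2 dissolution equilibrium.
\[
\mathrm{Cu(OH)_2(s)} \;\rightleftharpoons\; \mathrm{Cu^{2+}} + 2\,\mathrm{OH^-},
\qquad K_{sp} = [\mathrm{Cu^{2+}}]\,[\mathrm{OH^-}]^{2}
\]
\[
[\mathrm{Cu^{2+}}] \;=\; \frac{K_{sp}}{[\mathrm{OH^-}]^{2}}
\;=\; \frac{K_{sp}}{K_w^{2}}\,[\mathrm{H^+}]^{2}
\]
```

Because the free cupric ion activity scales with [H+]^2 in this idealized form (neglecting complexation), dissolved copper in equilibrium with Cu(OH)2 drops by roughly two orders of magnitude per unit pH increase; the dissertation's models refine this using the controlling solids identified by surface characterization and the measured water quality.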
- Title
- IMPACT OF ZINC ORTHOPHOSPHATE INHIBITOR ON DISTRIBUTION SYSTEM WATER QUALITY.
- Creator
-
Guan, Xiaotao, Taylor, James, University of Central Florida
- Abstract / Description
-
This dissertation consists of four papers concerning the impacts of zinc orthophosphate (ZOP) inhibitor on iron, copper, and lead release in a changing water quality environment. The mechanism of ZOP corrosion inhibition in municipal and home drinking water distribution systems, and the role of zinc, were investigated. Fourteen identical pilot distribution systems (PDSs), consisting of increments of PVC, lined cast iron, unlined cast iron, and galvanized steel pipe, were used in this study. Changing quarterly blends of finished ground, surface, and desalinated waters were fed into the PDSs over a one-year period. ZOP inhibitor was applied to three PDSs at three different doses. Water quality and iron, copper, and lead scale formation were monitored for the one-year study duration. The first article describes the effects of ZOP corrosion inhibitor on the surface characteristics of iron corrosion products in a changing water quality environment. The surface compositions of iron scales on iron and galvanized steel coupons incubated in different blended waters in the presence of ZOP inhibitor were investigated using X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM) with Energy Dispersive X-ray Spectroscopy (EDS). Based on this surface characterization, predictive equilibrium models were developed to describe the controlling solid phase, the mechanism of ZOP inhibition, and the role of zinc in iron release. The second article describes the effects of ZOP corrosion inhibitor on total iron release in a changing water quality environment. Development of empirical models for total iron release as a function of water quality and ZOP inhibitor dose, together with mass balance analyses of total zinc and total phosphorus data, provided insight into the mechanism of ZOP corrosion inhibition with respect to iron release in drinking water distribution systems. The third article describes the effects of ZOP corrosion inhibitor on total copper release in a changing water quality environment. Empirical models were developed to predict total copper release as a function of water quality and inhibitor dose. Thermodynamic models for dissolved copper, based on surface characterization of the scales generated on copper coupons exposed to ZOP inhibitor, were also developed; surface composition was determined by XPS. The fourth article describes the effects of ZOP corrosion inhibitor on total lead release in a changing water quality environment. Surface characterization by XPS of lead scale on coupons exposed to ZOP inhibitor was used to identify scale composition. Development of a thermodynamic model for lead release based on the surface analysis results provided insight into the mechanism of ZOP inhibition and the role of zinc.
- Date Issued
- 2007
- Identifier
- CFE0001931, ucf:47453
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001931
- Title
- EFFECTS OF SOURCE WATER BLENDING FOLLOWING TREATMENT WITH SODIUM SILICATE AS A CORROSION INHIBITOR ON METAL RELEASE WITHIN A WATER DISTRIBUTION SYSTEM.
- Creator
-
Lintereur, Phillip, Duranceau, Steven, University of Central Florida
- Abstract / Description
-
A study was conducted to investigate and quantify the effects of corrosion inhibitors on metal release within a pilot distribution system while varying the source water. The pilot distribution system consisted of pre-existing facilities from Taylor et al. (2005). Iron, copper, and lead release data were collected during four separate phases of operation. Each phase was characterized by the particular blend ratios used during the study. A blended source water represented a water derived from a consistent proportion of three different source waters. These source waters included (1) surface water treated through enhanced coagulation/sedimentation/filtration, (2) conventionally treated groundwater, and (3) finished surface water treated using reverse osmosis membranes. The corrosion inhibitors used during the study were blended orthophosphate (BOP), orthophosphate (OP), zinc orthophosphate (ZOP), and sodium silicate (Si). This document presents the findings from the study associated with corrosion treatment using various doses of sodium silicate. The doses were maintained at 3, 6, and 12 mg/L as SiO2 above the blend-dependent background silica concentration. Sources of iron release within the pilot distribution system consisted of, in the following order of entry, (1) lined cast iron, (2) unlined cast iron, and (3) galvanized steel. Iron release data were not collected for each individual iron source; instead, iron release data represented the measurement of iron upon exposure to the pilot distribution system as a whole. There was little evidence to suggest that iron release was affected by sodium silicate. Statistical modeling suggested that iron release could be described by the water quality parameters of alkalinity, chlorides, and pH. The R2 statistic implied that the model could account for only 36% of the total variation within the iron release data set (i.e., R2 = 0.36). The model implies that increases in alkalinity and pH would be expected to decrease iron release on average, while an increase in chlorides would increase iron release. The surface compositions of cast iron and galvanized steel coupons were analyzed using X-ray photoelectron spectroscopy (XPS). The surface analysis located binding energies consistent with Fe2O3, Fe3O4, and FeOOH for both cast iron and galvanized steel. Elemental scans detected the presence of silicon as amorphous silica; however, there was no significant difference between scans of coupons treated with sodium silicate and coupons simply exposed to the blended source water. The predominant form of zinc found on the galvanized steel coupons was ZnO. Thermodynamic modeling of the galvanized steel system suggested that zinc release was more appropriately described by Zn5(CO3)2(OH)6. Analysis of the copper release data set suggested that treatment with sodium silicate decreased copper release during the study. On average, the low, medium, and high doses decreased copper release by approximately 20%, 30%, and 50%, respectively, compared to the blended source water prior to sodium silicate addition. Statistical modeling found that alkalinity, chlorides, pH, and sodium silicate dose were significant variables (R2 = 0.68). The coefficients of the model implied that increases in pH and sodium silicate dose decreased copper release, while increases in alkalinity and chlorides increased copper release.
XPS of the copper coupons suggested that the scale composition consisted of Cu2O, CuO, and Cu(OH)2 both for coupons treated with sodium silicate and for those exposed only to the blended source water. Analysis of the silicon elemental scan detected amorphous silica on three of five copper coupons exposed to sodium silicate, while silicon was not detected on any of the eight control coupons. This suggested that the sodium silicate inhibitor altered the surface composition of the copper scale. The XPS results were consistent with the visual differences of the copper coupons exposed to sodium silicate: coupons treated with sodium silicate developed a blue-green scale, while control coupons were reddish-brown. Thermodynamic modeling was unsuccessful in identifying a silicate-based cupric solid as the controlling solid. Lead release generally decreased with sodium silicate treatment. Many of the observations were recorded below the detection limit (1 ppb as Pb) of the instrument used to measure the lead concentration of the samples during the study, and the frequency of observations below the detection limit tended to increase as the sodium silicate dose increased. Accurate quantification of the effect of sodium silicate was complicated by these censored observations; if the lead concentration of a sample was below the detection limit, the observation was recorded as 1 ppb. Statistical modeling suggested that temperature, alkalinity, chlorides, pH, and sodium silicate dose were important variables associated with lead release (R2 = 0.60). The exponents of the non-linear model implied that increases in temperature, alkalinity, and chlorides increased lead release, while increases in pH and sodium silicate dose were associated with a decrease in lead release. XPS surface characterization of lead coupons indicated the presence of PbO, PbO2, PbCO3, and Pb3(OH)2(CO3)2, and also found evidence of silicate scale formation. Thermodynamic modeling did not support the possibility of a silicate-based lead controlling solid. A solubility model assuming Pb3(OH)2(CO3)2 as the controlling solid was used to evaluate lead release data from samples in which lead coupons were incubated for long stagnation times; this thermodynamic model described the lead release of samples treated with sodium silicate and samples exposed to the blended source water similarly. Because the pH of the samples was similar, sodium silicate itself, rather than the corresponding increase in pH, would appear to have been responsible had a difference been observed. Over the overall study, the effects of the BOP, OP, ZOP, and Si corrosion inhibitors were described by empirical models. Statistically, each model represented the expected value, or mean, function. If these models are to be used to predict a dose for copper release, then the relationship between the expected value function and the 90th percentile must be approximated. The USEPA Lead and Copper Rule (LCR) regulates total copper release at an action level of 1.3 mg/L, and this action level represents a 90th percentile rather than a mean. Evaluation of the complete copper release data set suggested that the standard deviation was proportional to the mean of a particular treatment; this relationship was estimated using a linear model. It was found that most of the copper data subsets (represented by a given phase, inhibitor, and dose) could be described by a normal distribution.
The information obtained from the standard deviation analysis and the normality assumption validated the use of a z-score to relate the empirical models to the estimated 90th percentile observations (a worked form of this relation follows this record). Since analyses of normality and variance (which carry essentially the same information as the standard deviation) are required to assess the assumptions associated with an ANOVA, an ANOVA was performed to directly compare the effects of the inhibitors and their corresponding doses. The findings suggested that phosphate-based inhibitors were consistently more effective than sodium silicate at the same treatment levels (i.e., doses). Among the phosphate-based inhibitors, the effectiveness of each respective treatment level was inconsistent (i.e., there was no clear indication that any one phosphate-based inhibitor was more effective than the others). As the doses increased for each inhibitor, the results generally suggested a corresponding tendency for copper release to decrease.
- Date Issued
- 2008
- Identifier
- CFE0002383, ucf:47737
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002383
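The z-score argument at the end of the abstract can be made concrete with a short worked relation. The symbols below are generic: the proportionality constant c stands for whatever the study's linear fit of standard deviation versus mean produced, and no value is asserted here.

```latex
% Relating a mean-response model to the LCR 90th percentile, assuming
% approximately normal copper release with standard deviation
% proportional to the mean (sigma = c * mu), as the abstract describes.
\[
P_{90} \;\approx\; \mu + z_{0.90}\,\sigma
       \;=\; \mu\,\bigl(1 + z_{0.90}\,c\bigr),
\qquad z_{0.90} \approx 1.2816
\]
```

Under these assumptions, a mean copper release predicted by the empirical dose model can be scaled by (1 + 1.2816 c) and compared against the 1.3 mg/L copper action level when selecting an inhibitor dose.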
- Title
- THE EFFECT OF FREE CHLORINE AND CHLORAMINES ON LEAD RELEASE IN A DISTRIBUTION SYSTEM.
- Creator
-
Vasquez, Ferdinand, Taylor, James, University of Central Florida
- Abstract / Description
-
Total lead release in drinking water in the presence of free chlorine and chloramine residuals was investigated through field, laboratory, and fundamental studies of finished waters produced from ground (GW), surface (SW), saline (RO), and blended (B) sources. Field investigations found that more total lead was released in the presence of chloramines than in the presence of free chlorine for RO and blended finished waters; however, there were no statistical differences in total lead release to finished GW and SW. Laboratory measurements of the finished waters' oxidation-reduction potential (ORP) were equivalent by source and were not affected by the addition of more than 100 mg/L of sulfates or chlorides, but were significantly higher in the presence of free chlorine relative to chloramines. Development of Pourbaix diagrams revealed that PbO2 was the controlling solid phase at the higher ORP in the presence of free chlorine, while Pb3(CO3)2(OH)2(s) (hydrocerussite) was the controlling solid phase at the lower ORP in the presence of chloramines, which mechanistically accounted for the observed differences in total lead release, since PbO2 is much less soluble than hydrocerussite (an illustrative half-reaction follows this record). The lack of differences in total lead release to finished GW and SW was attributed to differences in water quality and to the intermittent behavior of particulate release from the controlling solid films.
- Date Issued
- 2005
- Identifier
- CFE0000533, ucf:46427
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000533
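The Pourbaix-based reasoning above can be sketched with the textbook half-reaction for the Pb(IV)/Pb(II) couple. The standard potential is an ordinary tabulated value, and the example pH and lead activity are generic illustrative choices, not measurements or boundaries reported in the thesis.

```latex
% Illustrative Nernst relation for the PbO2 / Pb^{2+} couple
% (E0 of roughly 1.46 V vs. SHE from standard tables; values are generic).
\[
\mathrm{PbO_2(s)} + 4\,\mathrm{H^+} + 2e^- \;\rightleftharpoons\;
\mathrm{Pb^{2+}} + 2\,\mathrm{H_2O}
\]
\[
E \;=\; E^{0} - \frac{0.0592}{2}\,\log\frac{[\mathrm{Pb^{2+}}]}{[\mathrm{H^+}]^{4}}
  \;\approx\; 1.46 \;-\; 0.0296\,\log[\mathrm{Pb^{2+}}] \;-\; 0.118\,\mathrm{pH}
  \quad (\mathrm{V})
\]
```

At pH 7 and a lead activity of 10^-6 M, this boundary sits near +0.8 V; consistent with the abstract, waters carrying a free chlorine residual can hold ORP high enough to keep sparingly soluble PbO2 stable, while the lower ORP of chloraminated waters leaves the more soluble hydrocerussite in control.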