- Title
- Improvement of Data-Intensive Applications Running on Cloud Computing Clusters.
- Creator
-
Ibrahim, Ibrahim, Bassiouni, Mostafa, Lin, Mingjie, Zhou, Qun, Ewetz, Rickard, Garibay, Ivan, University of Central Florida
- Abstract / Description
-
MapReduce, designed by Google, is widely used as the most popular distributed programming model in cloud environments. Hadoop, an open-source implementation of MapReduce, is a data management framework on large clusters of commodity machines to handle data-intensive applications. Many famous enterprises, including Facebook, Twitter, and Adobe, have been using Hadoop for their data-intensive processing needs. Task stragglers in MapReduce jobs dramatically impede job execution on massive datasets in cloud computing systems. This impedance is due to the uneven distribution of input data and computation load among cluster nodes, heterogeneous data nodes, data skew in the reduce phase, resource contention situations, and network configurations. All of these factors may cause delays, failures, and violations of job completion times. One of the key issues that can significantly affect the performance of cloud computing is the computation load balancing among cluster nodes. Replica placement in the Hadoop distributed file system plays a significant role in data availability and the balanced utilization of clusters. Under the current replica placement policy (RPP) of the Hadoop distributed file system (HDFS), the replicas of data blocks cannot be evenly distributed across the cluster's nodes, so HDFS must rely on a load balancing utility to balance the distribution of replicas, which results in extra overhead in time and resources. This dissertation addresses the data load balancing problem and presents an innovative replica placement policy for HDFS that can perfectly balance the data load among a cluster's nodes. The heterogeneity of cluster nodes exacerbates the issue of computational load balancing; therefore, another replica placement algorithm is proposed in this dissertation for heterogeneous cluster environments. The timing of identifying a straggler map task is very important for straggler mitigation in data-intensive cloud computing. To mitigate straggler map tasks, the Present Progress and Feedback based Speculative Execution (PFSE) algorithm is proposed in this dissertation. PFSE is a new straggler identification scheme that identifies straggler map tasks based on feedback information received from completed tasks, in addition to the progress of the currently running task. Straggler reduce tasks aggravate the violation of MapReduce job completion times and are typically the result of bad data partitioning during the reduce phase: the hash partitioner employed by Hadoop may cause intermediate data skew, which results in straggler reduce tasks. In this dissertation, a new partitioning scheme, named Balanced Data Clusters Partitioner (BDCP), is proposed to mitigate straggler reduce tasks. BDCP is based on sampling of input data and feedback information about the current processing task. BDCP can assist in straggler mitigation during the reduce phase and minimize the job completion time of MapReduce jobs. The results of extensive experiments corroborate that the algorithms and policies proposed in this dissertation can improve the performance of data-intensive applications running on cloud platforms.
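As a rough illustration of the progress-plus-feedback idea behind PFSE, the sketch below flags a running task as a straggler when its linearly projected duration exceeds a multiple of the average duration of already-completed tasks. The task fields, threshold, and projection rule are illustrative assumptions, not the dissertation's algorithm.

```python
# Illustrative sketch of progress-based straggler detection in the spirit of
# speculative execution. Fields and thresholds are hypothetical, not PFSE.
import time
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    progress: float      # fraction complete, 0.0..1.0
    started_at: float    # epoch seconds

def flag_stragglers(tasks, completed_durations, slack=1.5):
    """Flag running tasks whose projected duration exceeds `slack` times
    the mean duration of already-completed tasks (the feedback signal)."""
    if not completed_durations:
        return []
    baseline = sum(completed_durations) / len(completed_durations)
    now = time.time()
    stragglers = []
    for t in tasks:
        if t.progress <= 0.0:
            continue  # no progress rate estimate yet
        elapsed = now - t.started_at
        projected = elapsed / t.progress  # naive linear projection
        if projected > slack * baseline:
            stragglers.append(t.task_id)
    return stragglers

completed = [30.0, 32.0, 29.0]
tasks = [Task("map-1", 0.9, time.time() - 28.0),
         Task("map-2", 0.2, time.time() - 30.0)]
print(flag_stragglers(tasks, completed))  # ['map-2']: ~150 s projected vs ~30 s baseline
```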
- Date Issued
- 2019
- Identifier
- CFE0007818, ucf:52804
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007818
- Title
- The Effect of Registered Nurse Supply on Population Health Outcomes: A Distributed Lag Model Approach.
- Creator
-
Sampson, Carla Jackie, Unruh, Lynn, Malvey, Donna, Liu, Albert Xinliang, Neff, Donna, University of Central Florida
- Abstract / Description
-
Registered nurses (RNs) are essential to providing care in the healthcare system. To date, research on the relationship between healthcare provider supply and population health has focused on physician supply. This study explored the effect of RN supply on population health outcomes in the U.S. This is a retrospective, cross-sectional study of U.S. counties and county equivalents using national data. Seven population health outcomes (total and disease-specific mortalities and the low infant birth weight rate) were the response variables. The predictor variable, RN supply, and some control variables were anticipated to have an asynchronous effect on the seven outcome variables in the hypothesized relationship. Therefore, these variables were examined using three different models: contemporaneous, three-year lagged, and distributed lag (both contemporaneous and lagged variables). Quadratic terms for the RN and physician supply variables were included. Because the Area Health Resource File (AHRF) outcome variables were skewed toward zero and left censored, Tobit regression analyses were used. Data were obtained from 19 states using historical RN supply data for 1,472 counties, representing 47% of the total target population of 3,108 U.S. counties and county equivalents. Regions with rural populations (the Midwest and Southeast) were overrepresented. Higher RN supply is positively related to higher mortality rates from ischemic heart disease, other cardiovascular disease, and chronic lower respiratory disease in the distributed lag models. Higher RN supply is not significantly related to rates of low infant birth weight, infant mortality, or mortality from cerebrovascular disease in any model. Higher RN supply is positively related to total deaths in the contemporaneous and lagged models. The results suggest a counter-intuitive, non-linear relationship between RN supply and health outcomes. More research is needed to understand these relationships, and policies must be devised to reduce the current and growing future RN shortage.
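For orientation, preparing data for a distributed lag specification like the one above amounts to building contemporaneous, lagged, and quadratic predictors per county. The fragment below sketches that step with pandas; the column names and toy values are hypothetical, and the Tobit estimation itself (which needs a censored-likelihood routine) is omitted.

```python
# Minimal sketch of building distributed-lag and quadratic predictors.
# Column names and values are illustrative, not the study's data.
import pandas as pd

df = pd.DataFrame({
    "county": ["A"] * 4 + ["B"] * 4,
    "year": [2010, 2011, 2012, 2013] * 2,
    "rn_supply": [8.1, 8.3, 8.0, 8.6, 5.2, 5.5, 5.9, 6.1],
}).sort_values(["county", "year"])

# Three-year lag of RN supply within each county
df["rn_supply_lag3"] = df.groupby("county")["rn_supply"].shift(3)
# Quadratic terms for the non-linear relationship reported above
df["rn_supply_sq"] = df["rn_supply"] ** 2
df["rn_supply_lag3_sq"] = df["rn_supply_lag3"] ** 2
print(df)
```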
- Date Issued
- 2018
- Identifier
- CFE0007091, ucf:51933
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007091
- Title
- Security of Autonomous Systems under Physical Attacks: With application to Self-Driving Cars.
- Creator
-
Dutta, Raj, Jin, Yier, Sundaram, Kalpathy, DeMara, Ronald, Zhang, Shaojie, Zhang, Teng, University of Central Florida
- Abstract / Description
-
The drive to achieve trustworthy autonomous cyber-physical systems (CPS), which can attain goals independently in the presence of significant uncertainties and for long periods of time without any human intervention, has always been enticing. Significant progress has been made in the avenues of both software and hardware for fulfilling these objectives. However, technological challenges still exist, particularly in terms of decision making under uncertainty. In an autonomous system, uncertainties can arise from the operating environment, adversarial attacks, and from within the system. As a result of these concerns, human beings lack trust in these systems and hesitate to use them for day-to-day tasks. In this dissertation, we develop algorithms to enhance trust by mitigating physical attacks targeting the integrity and security of the sensing units of autonomous CPS. The sensors of these systems are responsible for gathering data about the physical processes. Lack of measures for securing their information can enable malicious attackers to cause life-threatening situations, which motivates the development of attack-resilient solutions. Among various security solutions, attention has recently been paid to developing system-level countermeasures for CPS whose sensor measurements are corrupted by an attacker. Our methods are along this direction, as we develop one active and multiple passive algorithms to detect attacks and minimize their effect on the internal state estimates of the system. In the active approach, we leverage a challenge authentication technique to detect two types of attacks on the active sensors of the system: denial of service (DoS) and delay injection. Furthermore, we develop a recursive least squares estimator for recovery of the system from attacks. The majority of the dissertation focuses on designing passive approaches for sensor attacks. In the first method, we focus on a linear stochastic system with multiple sensors, where measurements are fused in a central unit to estimate the state of the CPS. By leveraging the Bayesian interpretation of the Kalman filter and combining it with a Chi-Squared detector, we recursively estimate states within an error bound and detect DoS and false data injection attacks. We also analyze the asymptotic performance of the estimator and provide conditions for resilience of the state estimate. Next, we propose a novel distributed estimator based on l1-norm optimization, which can recursively estimate states within an error bound without restricting the number of agents of the distributed system that can be compromised. We also extend this estimator to a vehicle platoon scenario that is subjected to sparse attacks, and we analyze the resiliency and asymptotic properties of both estimators. Finally, at the end of the dissertation, we make an initial effort to formally verify the control system of autonomous CPS using the statistical model checking method, to ensure that a real-time and resource-constrained system such as a self-driving car, with its controllers and security solutions, adheres to strict timing constraints.
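The passive detection step described above pairs a Kalman filter with a Chi-Squared test on the innovation. A minimal, generic sketch of that residual test follows; the identity innovation covariance, the 2-D measurement, and the significance level are purely illustrative stand-ins for the dissertation's system model.

```python
# Residual-based Chi-Squared detection: flag a measurement whose Kalman
# innovation is statistically inconsistent with its predicted covariance.
import numpy as np
from scipy.stats import chi2

def chi_squared_detect(z, z_pred, S, alpha=0.01):
    """Return True if measurement z is flagged as anomalous.
    z: measurement, z_pred: predicted measurement, S: innovation covariance."""
    nu = z - z_pred                              # innovation (residual)
    d2 = float(nu @ np.linalg.inv(S) @ nu)       # squared Mahalanobis distance
    threshold = chi2.ppf(1.0 - alpha, df=len(z)) # chi-squared acceptance gate
    return d2 > threshold

# Example: a 2-D measurement that deviates strongly from the prediction
z = np.array([5.0, 0.2])
z_pred = np.array([0.0, 0.0])
S = np.eye(2)
print(chi_squared_detect(z, z_pred, S))  # True: likely injected/faulty data
```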
- Date Issued
- 2018
- Identifier
- CFE0007174, ucf:52253
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007174
- Title
- In-Plant and Distribution System Corrosion Control for Reverse Osmosis, Nanofiltration, and Anion Exchange Process Blends.
- Creator
-
Jeffery, Samantha, Duranceau, Steven, Randall, Andrew, Wang, Dingbao, University of Central Florida
- Abstract / Description
-
The integration of advanced technologies into existing water treatment facilities (WTFs) can improve and enhance water quality; however, these same modifications or improvements may adversely affect the finished water provided to the consumer by public water systems (PWSs) that embrace these advanced technologies. Process modifications or improvements may unintentionally impact compliance with the provisions of the United States Environmental Protection Agency's (USEPA's) Safe Drinking Water Act (SDWA). This is especially true with respect to corrosion control, since minor changes in water quality can affect metal release, and changes in metal release can have a direct impact on a water purveyor's compliance with the SDWA's Lead and Copper Rule (LCR). In 2010, the Town of Jupiter (Town) decommissioned its ageing lime softening (LS) plant and integrated a nanofiltration (NF) plant into their WTF. The removal of the LS process subsequently decreased the pH in the existing reverse osmosis (RO) clearwell, leaving only RO permeate and anion exchange (AX) effluent to blend. The Town believed that the RO-AX blend was corrosive in nature and that blending with NF permeate would alleviate this concern. Consequently, a portion of the NF permeate stream was to be split between the existing RO-AX clearwell and a newly constructed NF primary clearwell. The Town requested that the University of Central Florida (UCF) conduct research evaluating how to mitigate negative impacts that may result from changing water quality, should the Town place its AX process into ready-reserve. The research presented in this document focused on the evaluation of corrosion control alternatives for the Town and was segmented into two major components:
1. The first component of the research studied internal corrosion within the existing RO clearwell and appurtenances of the Town's WTF, should the Town place the AX process on standby. Research related to WTF in-plant corrosion control focused on blending NF and RO permeate, forming a new intermediate blend, and pH-adjusting the resulting mixture to reduce corrosion in the RO clearwell.
2. The second component was implemented with respect to the Town's potable water distribution system. The distribution system corrosion control research evaluated various phosphate-based corrosion inhibitors to determine their effectiveness in reducing mild steel, lead, and copper release in order to maintain the Town's continual compliance with the LCR.
The primary objective of the in-plant corrosion control research was to determine the appropriate ratio of RO to NF permeate and the pH necessary to reduce corrosion in the RO clearwell. In this research, the Langelier saturation index (LSI) was the corrosion index used to evaluate the stability of RO:NF blends. Results indicated that a pH-adjusted blend consisting of 70% RO and 30% NF permeate at 8.8-8.9 pH units would produce an LSI of +0.1, theoretically protecting the RO clearwell from corrosion. The primary objective of the distribution system corrosion control component of the research was to identify a corrosion control inhibitor that would further reduce the lead and copper release observed in the Town's distribution system to below their respective action limits (ALs) as defined in the LCR. Six alternative inhibitors composed of various orthophosphate and polyphosphate (ortho:poly) ratios were evaluated sequentially using a corrosion control test apparatus. The apparatus was designed to house mild steel, lead, and copper coupons used for weight loss analysis, as well as mild steel, lead solder, and copper electrodes used for linear polarization analysis. One side of the apparatus, referred to as the "control condition," was fed potable water that did not contain a corrosion inhibitor, while the other side, termed the "test condition," was fed potable water that had been dosed with a corrosion inhibitor. Corrosion rate measurements were taken twice per weekday, and water quality was measured twice per week. Inhibitor evaluations were conducted over a span of 55 to 56 days, varying with each inhibitor. Coupons and electrodes were pre-corroded to simulate existing distribution system conditions. Water flow to the apparatus was controlled with an on/off timer to represent variations in the system and homes. Inhibitor comparisons were made based on their effectiveness at reducing lead and copper release after chemical addition. Based on the results obtained from the assessment of corrosion inhibitors for distribution system corrosion control, it appears that Inhibitors 1 and 3 were more successful in reducing lead corrosion rates, and each of the inhibitors reduced copper corrosion rates. It is also recommended that consideration be given to the use of a redundant single-loop duplicate test apparatus in lieu of a double-rack corrosion control test apparatus in experiments where pre-corrosion phases are implemented. This recommendation is offered because, statistically, the control-versus-test double loop may not provide relevance in data analysis. The use of the Wilcoxon signed ranks test comparing the initial pre-corroding phase to the inhibitor effectiveness phase has proven to be a more useful analytical method for corrosion studies.
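For orientation, the LSI screening used above is simply LSI = pH - pHs. The sketch below computes it with one common empirical approximation for pHs (the Langelier/Carrier A-B-C-D form); all water-quality inputs are chosen purely for illustration rather than taken from the Jupiter blends.

```python
# Langelier Saturation Index sketch: LSI = pH - pHs, with pHs from a common
# empirical approximation. Input values are illustrative only.
import math

def lsi(ph, tds_mg_l, temp_c, ca_hardness_mg_l_caco3, alkalinity_mg_l_caco3):
    a = (math.log10(tds_mg_l) - 1.0) / 10.0          # TDS term
    b = -13.12 * math.log10(temp_c + 273.15) + 34.55 # temperature term
    c = math.log10(ca_hardness_mg_l_caco3) - 0.4     # calcium hardness term
    d = math.log10(alkalinity_mg_l_caco3)            # alkalinity term
    phs = (9.3 + a + b) - (c + d)                    # saturation pH
    return ph - phs

# Positive LSI -> CaCO3 scale-forming (protective), negative -> corrosive.
print(round(lsi(8.85, 400.0, 25.0, 60.0, 40.0), 2))
```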
- Date Issued
- 2013
- Identifier
- CFE0005008, ucf:50001
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005008
- Title
- A framework for interoperability on the United States electric grid infrastructure.
- Creator
-
Laval, Stuart, Rabelo, Luis, Zheng, Qipeng, Xanthopoulos, Petros, Ajayi, Richard, University of Central Florida
- Abstract / Description
-
Historically, the United States (US) electric grid has been a stable one-way power delivery infrastructure that supplies centrally-generated electricity to its predictably consuming demand. However, the US electric grid is now undergoing a huge transformation from a simple and static system to a complex and dynamic network, which is starting to interconnect intermittent distributed energy resources (DERs), portable electric vehicles (EVs), and load-altering home automation devices that create bidirectional power flow or stochastic load behavior. In order for this grid of the future to effectively embrace the high penetration of these disruptive and fast-responding digital technologies without compromising its safety, reliability, and affordability, plug-and-play interoperability must be enabled within the field area network between operational technology (OT), information technology (IT), and telecommunication assets, so that they integrate seamlessly and securely into the electric utility's operations and planning systems in a modular, flexible, and scalable fashion. This research proposes a potential approach to simplifying the translation and contextualization of operational data on the electric grid without routing it to the utility datacenter for a control decision. This methodology integrates modern software technology from other industries, along with utility industry-standard semantic models, to overcome information silos and enable interoperability. By leveraging industrial engineering tools, a framework is also developed to help devise a reference architecture and use-case application process that is applied and validated at a US electric utility.
- Date Issued
- 2015
- Identifier
- CFE0005647, ucf:50193
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005647
- Title
- MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION.
- Creator
-
Lacks, Daniel, Kocak, Taskin, University of Central Florida
- Abstract / Description
-
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits which apply to many different domains: it reduces the cost of creating different prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the time to model physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection, and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms and the desire for a common infrastructure with which to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic loads than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and up to 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that, in the worst case, 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions, including creating, joining, and resigning from a federation, time management, and event publication and subscription.
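The engine described above is, at its core, a time-managed event loop. The fragment below sketches the classic priority-queue form of such a discrete-event core in plain Python, as a generic illustration only; it implements none of the HLA federation services (joining, resigning, publication/subscription) mentioned in the abstract.

```python
# A generic discrete-event simulation core: events are (time, action) pairs
# dispatched in timestamp order. Illustrative skeleton, not the HLA engine.
import heapq

class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so equal-time events stay ordered

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action(self)

# Toy usage: a node that "sends" a packet every 2 time units
def send(sim):
    print(f"t={sim.now:.1f}: packet sent")
    sim.schedule(2.0, send)

sim = Simulator()
sim.schedule(0.0, send)
sim.run(until=6.0)
```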
- Date Issued
- 2007
- Identifier
- CFE0001887, ucf:47399
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001887
- Title
- VERIFICATION OF PILOT-SCALE IRON RELEASE MODELS.
- Creator
-
Glatthorn, Stephen, Taylor, James, University of Central Florida
- Abstract / Description
-
A model for the prediction of color release from a pilot distribution system was created in 2003 by Imran. This model allows prediction of the release of color from aged cast iron and galvanized steel pipes as a function of water quality and hydraulic residence time. Color was used as a surrogate measurement for iron, with which it exhibited a strong linear correlation. An anomaly of this model was the absence of a term to account for pH, due to the influent water being well stabilized. A new study was completed to evaluate the effectiveness of corrosion inhibitors against traditional adjustment. Two control lines were supplied with nearly the same water quality, one at a pH close to the saturation pH (pHs) and one at a pH well above pHs. The resulting data showed that effluent iron values were typically greater in the line with the lower pH. The non-linear color model by Imran shows good agreement when the LSI was largely positive, but underpredicted the color release from the lower-LSI line. A modification to the Larson Ratio proposed by Imran was able to give reasonable agreement with the data at lower LSI values. LSI showed no definite relation to iron release, although a visual trend of higher LSI mitigating iron release can be seen. An iron flux model was also developed on the same pilot system by Mutoti. This model was based on a steady-state mass balance of iron in a pipe. The constants for the model were empirically derived from experiments at different hydraulic conditions with a constant water quality. Experiments were assumed to reach steady state at 3 pipe volumes due to the near-constant effluent turbidity achieved at that point. The model proposes that the iron flux under laminar flow conditions is constant, while the iron flux is linearly related to the Reynolds number under turbulent conditions. This model incorporates the color release models developed by Imran to calculate flux values for different water qualities. A limited number of experiments were performed in the current study using desalinated and ground water sources at Reynolds numbers ranging from 50 to 200. The results of these limited experiments showed that the iron flux for cast iron pipe was approximately one-half of the values predicted by Mutoti. This discrepancy may be caused by the more extensive flushing of the pipes performed in the current experiments, which allowed attainment of a true steady state. Model changes were proposed to distinguish between near-stagnant flow and the upper laminar region, with the upper laminar region showing a slight linear increase. Predictions using the galvanized flux model were not accurate due to an inferior color release model that had been developed for galvanized pipes. That model exhibits a high dependence on sulfate concentrations, but the concentrations of sulfates in the current experiments were low. This led to low predicted flux values when the actual data showed otherwise. A new galvanized model was developed from a combination of data from the original and current experiments. The predicted flux values using the new model showed great improvement over the old model, but the new model's database was limited and the resulting model could not be independently tested.
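A compact way to see the flux model's structure (constant flux near stagnation, the proposed slight linear increase in the upper laminar region, and linear growth with Reynolds number in the turbulent regime) is the piecewise function below. All coefficients and breakpoints are placeholders, not the empirically fitted constants from the pilot studies.

```python
# Illustrative piecewise form of the iron flux model discussed above.
# All coefficients and breakpoints are placeholders, not fitted constants.
def iron_flux(re, f0=1.0, laminar_slope=0.0005, turb_slope=0.002,
              re_stagnant=50.0, re_transition=2300.0):
    if re <= re_stagnant:                  # near-stagnant regime: constant
        return f0
    if re <= re_transition:                # upper laminar: slight linear rise
        return f0 + laminar_slope * (re - re_stagnant)
    # turbulent regime: flux grows linearly with Reynolds number
    f_lam_end = f0 + laminar_slope * (re_transition - re_stagnant)
    return f_lam_end + turb_slope * (re - re_transition)

for re in (25, 200, 5000):
    print(re, round(iron_flux(re), 3))
```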
- Date Issued
- 2007
- Identifier
- CFE0001704, ucf:47332
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001704
- Title
- AN INTERACTIVE DISTRIBUTED SIMULATION FRAMEWORK WITH APPLICATION TO WIRELESS NETWORKS AND INTRUSION DETECTION.
- Creator
-
Kachirski, Oleg, Guha, Ratan, University of Central Florida
- Abstract / Description
-
In this dissertation, we describe the portable, open-source distributed simulation framework (WINDS) that we have developed, targeting simulations of wireless network infrastructures. We present the simulation framework, which uses a modular architecture, and apply it to studies of mobility pattern effects, routing, and intrusion detection mechanisms in simulations of large-scale wireless ad hoc, infrastructure, and totally mobile networks. The distributed simulations within the framework execute seamlessly and transparently to the user on a symmetric multiprocessor cluster computer or a network of computers, with no modifications to the code or user objects. A visual graphical interface precisely depicts simulation object states and interactions throughout the simulation execution, giving the user full control over the simulation in real time. The network configuration is detected by the framework, and communication latency is taken into consideration when dynamically adjusting the simulation clock, allowing the simulation to run on a heterogeneous computing system. The simulation framework is easily extensible to multi-cluster systems and computing grids. An entire simulation system can be constructed in a short time, utilizing user-created and supplied simulation components, including mobile nodes, base stations, routing algorithms, traffic patterns, and other objects. These objects are automatically compiled and loaded by the simulation system, and are available for dynamic injection into the simulation at runtime. Using our distributed simulation framework, we have studied modern intrusion detection systems (IDS) and assessed the applicability of existing intrusion detection techniques to wireless networks. We have developed a mobile agent-based IDS targeting mobile wireless networks, and introduced load-balancing optimizations aimed at limited-resource systems to improve intrusion detection performance. The packet-based monitoring agents of our IDS employ a case-based reasoning engine that performs fast lookups of network packets in the existing SNORT-based intrusion rule set. Experiments were performed using the intrusion data from the MIT Lincoln Laboratory studies, and executed on a cluster computer utilizing our distributed simulation system.
- Date Issued
- 2005
- Identifier
- CFE0000642, ucf:46545
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000642
- Title
- Assessment of a Surface Water Supply for Source and Treated Distribution System Quality.
- Creator
-
Rodriguez, Angela, Duranceau, Steven, Lee, Woo Hyoung, Sadmani, A H M Anwar, University of Central Florida
- Abstract / Description
-
This study focused on providing a source-to-tap assessment of surface water systems with respect to (i) the use of alternative biomonitoring tools, (ii) disinfection byproduct (DBP) formation and control, and (iii) corrosion control. In the first study component, two water systems were microbiologically evaluated using adenosine triphosphate (ATP) bioluminescence technology. It was determined that microbial ATP was useful as a surrogate for biomonitoring within a surface water system when paired with traditional methods. Although microbial activity differed between distribution systems that used either chloramine or chlorine disinfectant, in both cases flow rate and season affected microbial ATP values. In the second study component, total trihalomethane (TTHM) and haloacetic acid (HAA5) DBP formation and disinfectant stability were investigated using a novel DBP control process. The method relied on a combination of sulfate, ultraviolet light irradiation, pH, and aeration unit operations. Results indicate respective decreases in 7-day TTHM and HAA5 formation potentials of 36% to 57% and 20% to 47% for the surface waters investigated. In the third component of this work, a corrosion study assessed the effect of disinfectant chemical transitions on the corrosion rates of common distribution system metals. When a chlorine-based disinfection system transitioned between chlorine and chloramine, mild steel corrosion increased by 0.45 mils per year (mpy) under chloramine and returned to baseline corrosion rates under chlorine. However, when a chloramine-based disinfection system transitioned between chloramine and chlorine, mild steel corrosion increased in tandem with total chlorine levels. Unlike in the chlorine system, the mild steel corrosion rates did not return to baseline under chloramine after exposure to 5 mg/L of total chlorine. Surface water systems should consider the use of ATP as a surrogate for biomonitoring, consider the novel treatment process for DBP formation control, and consider corrosion control in disinfectant decision-making activities.
- Date Issued
- 2019
- Identifier
- CFE0007901, ucf:52751
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007901
- Title
- Determining the Small-scale Structure and Particle Properties in Saturn's Rings from Stellar and Radio Occultations.
- Creator
-
Jerousek, Richard, Colwell, Joshua, Britt, Daniel, Fernandez, Yan, Hedman, Mathew, University of Central Florida
- Abstract / Description
-
Saturn's rings consist of icy particles of various sizes, ranging from millimeters to several meters. Particles may aggregate into ephemeral elongated clumps known as self-gravity wakes in regions where the surface mass density and epicyclic frequency give a Toomre critical wavelength that is much larger than the largest individual particles (Julian and Toomre 1966). Optical depth measurements at different wavelengths can be used to constrain the sizes of individual particles (Zebker et al. 1985, Marouf et al. 1983), while measurements of optical depths spanning many viewing geometries can be used to determine the properties of self-gravity wakes (Colwell et al. 2006, 2007, Hedman et al. 2007, Nicholson and Hedman 2010, Jerousek et al. 2016). Studies constraining the parameters of the assumed power-law particle size distribution have been attempted (Zebker et al. 1985, Marouf et al. 1983) but have not yet accounted for the presence of self-gravity wakes or the much larger elongated particle aggregates seen in Cassini Imaging Subsystem (ISS) images and commonly referred to as "straw". We use a multitude of Cassini stellar occultations measured by UVIS (Ultraviolet Imaging Spectrograph) and VIMS (Visual and Infrared Mapping Spectrometer), together with Cassini's RSS (Radio Science Subsystem) X-band, Ka-band, and S-band radio occultations, to better constrain the particle size distribution throughout Saturn's main ring system, including regions where self-gravity wakes have a significant effect on the measured optical depth of the rings.
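To make the size-distribution connection concrete, the snippet below numerically evaluates the geometric optical depth of a truncated power-law distribution n(a) proportional to a^(-q), i.e. the integral of n(a)*pi*a^2 over particle radius. The q value, size cutoffs, and normalization are illustrative, not results from the occultation fits.

```python
# Geometric optical depth of a truncated power-law size distribution:
# tau = integral of n(a) * pi * a^2 da with n(a) = n0 * a^(-q) on [a_min, a_max].
# Parameter values are illustrative, not fitted ring properties.
import numpy as np

def optical_depth(q=3.1, a_min=0.001, a_max=5.0, n0=1.0, samples=200_000):
    a = np.linspace(a_min, a_max, samples)       # particle radius (m)
    n = n0 * a ** (-q)                           # differential distribution
    da = a[1] - a[0]
    return float(np.sum(n * np.pi * a**2) * da)  # simple Riemann sum

# Smaller q (shallower distribution) shifts cross section toward big particles:
for q in (2.8, 3.1, 3.4):
    print(q, round(optical_depth(q=q), 4))
```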
- Date Issued
- 2018
- Identifier
- CFE0007019, ucf:52029
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007019
- Title
- THE EFFECTS OF PHOSPHATE AND SILICATE INHIBITORS ON SURFACE ROUGHNESS AND COPPER RELEASE IN WATER DISTRIBUTION SYSTEMS.
- Creator
-
MacNevin, David, Taylor, James, University of Central Florida
- Abstract / Description
-
The effects of corrosion inhibitors on water quality and the distribution system were studied. This dissertation investigates the effect of inhibitors on iron surface roughness, copper surface roughness, and copper release. The corrosion inhibitors included blended poly/orthophosphate, sodium orthophosphate, zinc orthophosphate, and sodium silicate. These inhibitors were added to a blend of surface water, groundwater, and desalinated brackish water. The surface roughness of galvanized iron, unlined cast iron, lined cast iron, and polyvinyl chloride was measured using pipe coupons exposed for three months. The roughness of each pipe coupon was measured with an optical surface profiler before and after exposure to the inhibitors. For most materials, the inhibitor did not have a significant effect on surface roughness; instead, the most significant factor determining the final surface roughness was the initial surface roughness. Coupons with low initial surface roughness tended to show an increase in surface roughness during exposure, and vice versa, implying that surface roughness tended to regress toward an average or equilibrium value. For unlined cast iron, increased alkalinity and increased temperature tended to correspond with increases in surface roughness. Unlined cast iron coupons receiving phosphate inhibitors were more likely to have a significant change in surface roughness, suggesting that phosphate inhibitors affect the stability of iron pipe scales. Similar roughness data collected with new copper coupons showed that elevated orthophosphate, alkalinity, and temperature were all factors associated with increased copper surface roughness. The greatest increases in surface roughness were observed with copper coupons receiving phosphate inhibitors; smaller increases were observed with copper coupons receiving the silicate inhibitor or no inhibitor. With phosphate inhibitors, elevated temperature and alkalinity were associated with larger increases in surface roughness and blue-green copper(II) scales; otherwise, a compact, dull-red copper(I) scale was observed. These data suggest that phosphate inhibitor addition corresponds with changes in surface morphology and surface composition, including the oxidation state of copper solids. The effects of corrosion inhibitors on copper surface chemistry and cuprosolvency were also investigated. Most copper scales had X-ray photoelectron spectroscopy binding energies consistent with a mixture of Cu2O, CuO, Cu(OH)2, and other copper(II) salts. Orthophosphate and silica were detected on copper surfaces exposed to each inhibitor. All phosphate and silicate inhibitors reduced copper release relative to the no-inhibitor treatments, keeping total copper below the 1.3 mg/L MCLG for all water quality blends. All three kinds of phosphate inhibitors, when added at 1 mg/L as P, corresponded with a 60% reduction in copper release relative to the no-inhibitor control. On average, this percent reduction was consistent across varying water quality conditions in all four phases. Similarly, when the silicate inhibitor was added at 6 mg/L as SiO2, this corresponded with a 25-40% reduction in copper release relative to the no-inhibitor control. Hence, on average, for the given inhibitors and doses, phosphate inhibitors provided more predictable control of copper release across changing water quality conditions. A plot of cupric ion concentration versus orthophosphate concentration showed a decrease in copper release consistent with mechanistic control by either cupric phosphate solubility or a diffusion-limiting phosphate film. Thermodynamic models were developed to identify feasible controlling solids. For the no-inhibitor treatment, Cu(OH)2 provided the closest prediction of copper release. With phosphate inhibitors, both the Cu(OH)2 and Cu3(PO4)2·2H2O models provided plausible predictions. Similarly, with the silicate inhibitor, the Cu(OH)2 and CuSiO3·H2O models provided plausible predictions.
- Date Issued
- 2008
- Identifier
- CFE0002001, ucf:47621
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002001
- Title
- Stability and Control in Complex Networks of Dynamical Systems.
- Creator
-
Manaffam, Saeed, Vosoughi, Azadeh, Behal, Aman, Atia, George, Rahnavard, Nazanin, Javidi, Tara, Das, Tuhin, University of Central Florida
- Abstract / Description
-
Stability analysis of networked dynamical systems has been of interest in many disciplines, such as biology, physics, and chemistry, with applications such as laser cooling and plasma stability. These large networks are often modeled as having completely random (Erdős-Rényi) or semi-random (Small-World) topologies. The former model is often used due to its mathematical tractability, while the latter has been shown to be a better model for most real-life networks. The recent emergence of cyber-physical systems, and in particular the smart grid, has given rise to a number of engineering questions regarding the control and optimization of such networks. Some of these questions are: How can the stability of a random network be characterized in probabilistic terms? Can the effects of network topology and system dynamics be separated? What does it take to control a large random network? Can decentralized (pinning) control be effective? If not, how large does the control network need to be? How can decentralized or distributed controllers be designed? How would the size of the control network scale with the size of the networked system? Motivated by these questions, we began by studying the probability of stability of synchronization in random networks of oscillators. We developed a stability condition separating the effects of topology and node dynamics, and evaluated bounds on the probability of stability for both Erdős-Rényi (ER) and Small-World (SW) network topology models. We then turned our attention to the more realistic scenario where the dynamics of the nodes and couplings are mismatched. Utilizing the concept of ε-synchronization, we studied the probability of synchronization and showed that the synchronization error, ε, can be arbitrarily reduced using linear controllers. We have also considered the decentralized approach of pinning control to ensure stability in such complex networks. In the pinning method, decentralized controllers are used to control only a fraction of the nodes in the network; this differs from traditional decentralized approaches, where all the nodes have their own controllers. While the problem of selecting the minimum number of pinning nodes is known to be NP-hard and grows exponentially with the number of nodes in the network, we have devised a suboptimal algorithm to select the pinning nodes which converges linearly with network size. We have also analyzed the effectiveness of the pinning approach for the synchronization of oscillators in networks with fast switching, where the network links disconnect and reconnect quickly relative to the node dynamics. To address the scaling problem in the design of distributed control networks, we have employed a random control network to stabilize a random plant network. Our results show that for an ER plant network, the control network needs to grow linearly with the size of the plant network.
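As a point of reference for the pinning idea, the sketch below uses a naive degree-based greedy heuristic to choose which fraction of nodes to pin in an ER graph. This stand-in is not the dissertation's suboptimal selection algorithm, and the `networkx` dependency plus all parameter values are assumptions made for illustration.

```python
# Degree-based greedy heuristic for choosing pinning nodes: pin the
# most-connected fraction of the network. Illustrative stand-in only;
# optimal pinning-node selection is NP-hard.
import networkx as nx

def pick_pinning_nodes(graph, fraction=0.1):
    k = max(1, int(fraction * graph.number_of_nodes()))
    ranked = sorted(graph.degree, key=lambda kv: kv[1], reverse=True)
    return [node for node, _ in ranked[:k]]

g = nx.erdos_renyi_graph(n=100, p=0.05, seed=1)
print(pick_pinning_nodes(g, fraction=0.05))  # the 5 highest-degree nodes
```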
- Date Issued
- 2015
- Identifier
- CFE0005834, ucf:50902
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005834
- Title
- On Distributed Estimation for Resource Constrained Wireless Sensor Networks.
- Creator
-
Sani, Alireza, Vosoughi, Azadeh, Rahnavard, Nazanin, Wei, Lei, Atia, George, Chatterjee, Mainak, University of Central Florida
- Abstract / Description
-
We study the Distributed Estimation (DES) problem, in which several agents observe a noisy version of an underlying unknown physical phenomenon (which is not directly observable) and transmit a compressed version of their observations to a Fusion Center (FC), where the collective data is fused to reconstruct the unknown. One of the most important applications of Wireless Sensor Networks (WSNs) is performing DES in a field to estimate an unknown signal source. In a WSN, battery-powered, geographically distributed tiny sensors are tasked with collecting data from the field. Each sensor locally processes its noisy observation (local processing can include compression, dimension reduction, quantization, etc.) and transmits the processed observation over communication channels to the FC, where the received data is used to form a global estimate of the unknown source such that the Mean Square Error (MSE) of the DES is minimized. The accuracy of DES depends on many factors, such as the intensity of the observation noise at the sensors, quantization errors at the sensors, the available power and bandwidth of the network, the quality of the communication channels between the sensors and the FC, and the choice of fusion rule at the FC. Taking all of these contributing factors into account and implementing a DES system which minimizes the MSE and satisfies all constraints is a challenging task. In order to probe into different aspects of this challenging task, we identify, formulate, and address the following three problems:
1. Consider an inhomogeneous WSN where the sensors' observations are modeled as linear with additive Gaussian noise. The communication channels between the sensors and the FC are orthogonal, power- and bandwidth-constrained, erroneous wireless fading channels. The unknown to be estimated is a Gaussian vector. Sensors employ uniform multi-bit quantizers and BPSK modulation. Given this setup, we ask: What is the best fusion rule at the FC? What are the best transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize the MSE? To answer these questions, we derive upper bounds on the global MSE and, by minimizing those bounds, propose various resource allocation schemes for the problem, through which we investigate the effect of the contributing factors on the MSE.
2. Consider an inhomogeneous WSN with an FC that is tasked with estimating a scalar Gaussian unknown. The sensors are equipped with uniform multi-bit quantizers, and the communication channels are modeled as Binary Symmetric Channels (BSCs). In contrast to the former problem, the sensors experience independent multiplicative noises (in addition to additive noise). The natural questions in this scenario are: How does multiplicative noise affect the DES system performance? How does it affect the resource allocation for sensors, with respect to the case where there is no multiplicative noise? We propose a linear fusion rule for the FC and derive the associated MSE in closed form. We propose several rate allocation schemes, with different levels of complexity, that minimize the MSE. Implementing the proposed schemes lets us study the effect of multiplicative noise on DES system performance and its dynamics. We also derive the Bayesian Cramer-Rao Lower Bound (BCRLB) and compare the MSE performance of our proposed methods against the bound. As a dual problem, we also answer the question: What is the minimum required bandwidth of the network to satisfy a predetermined target MSE?
3. Assuming the framework of Bayesian DES of a Gaussian unknown with additive and multiplicative Gaussian noises involved, we answer the following question: Can multiplicative noise improve the DES performance in any case or scenario? The answer is yes, and we call this phenomenon the 'enhancement mode' of multiplicative noise. By deriving different lower bounds on the MSE, such as the BCRLB, the Weiss-Weinstein Bound (WWB), the Hybrid CRLB (HCRLB), the Nayak Bound (NB), and the Yatarcos Bound (YB), we identify and characterize the scenarios in which the enhancement happens. We investigate two situations, where the variance of the multiplicative noise is known and unknown. We also compare the performance of well-known estimators with the derived bounds, to ensure the practicability of the mentioned enhancement modes.
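A toy end-to-end pipeline in the spirit of the setup above is sketched below: noisy sensor observations, uniform multi-bit quantization, and a simple inverse-variance linear fusion at the FC. All parameters are illustrative, and channel errors plus optimal power/rate allocation, which are the substance of the dissertation, are omitted.

```python
# Toy distributed estimation: quantized sensor readings fused linearly
# with inverse-variance weights. Illustrative parameters throughout.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.7                             # unknown scalar to estimate
noise_std = np.array([0.2, 0.5, 1.0])   # inhomogeneous sensor noise
bits = np.array([4, 3, 2])              # per-sensor quantization rates
lo, hi = -4.0, 4.0                      # quantizer dynamic range

obs = theta + rng.normal(0.0, noise_std)
levels = 2 ** bits
step = (hi - lo) / levels
quantized = lo + (np.clip((obs - lo) // step, 0, levels - 1) + 0.5) * step

# Linear fusion: weight by inverse of observation + quantization variance
var = noise_std**2 + step**2 / 12.0     # uniform-quantizer noise model
w = (1.0 / var) / np.sum(1.0 / var)
print(float(np.sum(w * quantized)))     # fused estimate of theta
```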
- Date Issued
- 2017
- Identifier
- CFE0006913, ucf:51698
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006913
- Title
- Integrating Spray Aeration and Granular Activated Carbon for Disinfection By-Product Control in a Potable Water System.
- Creator
-
Rodriguez, Angela, Duranceau, Steven, Lee, Woo Hyoung, Sadmani, A H M Anwar, University of Central Florida
- Abstract / Description
-
Public water systems add disinfectants in water treatment to inactivate microbial pathogens. Chlorine, when used as a disinfectant, reacts with natural organic matter in the water to form trihalomethane (THM) and haloacetic acid (HAA5) disinfection by-products (DBPs), which are suspected carcinogens. The Safe Drinking Water Act's Disinfectant and Disinfection By-Product (D/DBP) Rules were promulgated by the U.S. Environmental Protection Agency to regulate the amount of DBPs in water systems. Regulatory compliance is based on maximum contaminant levels (MCLs), measured as a locational running annual average (LRAA), for total THM (TTHM) and HAA5 of 80 µg/L and 60 µg/L, respectively. Regulated DBPs, if consumed in excess of the EPA's MCL standard over many years, may increase chronic health risks. In order to comply with the D/DBP Rules, the County of Maui Department of Water Supply (DWS) adopted two DBP control technologies. A GridBee® spray-aeration process was placed into DWS's Lower Kula water system's Brooks ground storage tank in February of 2013. In March of 2015, the second DBP control technology, granular activated carbon (GAC), was integrated into DWS's Pi'iholo surface water treatment plant. To investigate the effectiveness of integrating GAC and spray aeration into a water system for DBP control, DBP data was gathered from the system between August of 2011 and August of 2016 and analyzed relative to cost and performance. Prior to the spray-aeration and GAC integration, it was found that TTHM levels at the LRAA compliance site ranged between 58.5 µg/L and 125 µg/L (at times exceeding the MCL). Additionally, HAA5 levels at the LRAA compliance site ranged between 21.2 and 52.0 µg/L. The combined efforts of the GAC and GridBee® systems were found to reduce LRAA TTHM and HAA5 concentrations to 38.5 µg/L and 20.5 µg/L, respectively, in the Lower Kula system. Hypothesis testing utilizing t-tests confirmed that TTHM levels were controlled by the spray-aeration system and that the GAC was responsible for controlling HAA5 formation. Although TTHM levels were reduced by 58 percent, and HAA5 levels by 48 percent, the estimated cumulative annual operation and maintenance (O&M) cost of the two systems was $1,036,000. In light of the cost analysis, TOC-based models for predicting LRAA TTHM and HAA5 levels were developed as equations (i) and (ii), respectively:
(i) TTHM (µg/L) = 32.5 × TOC (ppm) + 5.59
(ii) HAA5 (µg/L) = 8.37 × TOC (ppm) + 12.4
The TTHM model yielded an R² of 0.93, and the HAA5 model had an R² of 0.52. F-tests comparing predicted LRAA TTHM and HAA5 levels to actual LRAA TTHM and HAA5 levels determined no statistically significant difference. With the knowledge of how the GAC and spray aerator controlled DBPs in the water system, a cost-effective and practical treatment operating parameter was developed. The parameter, the Pi'iholo water plant filter effluent total organic carbon (TOC) content, can serve as an indicator that operators would use to alter DBP treatment process flow set points to achieve cost-effective treatment. Furthermore, the significant annual cost contribution of the GAC, coupled with HAA5 levels below DWS's MCLG, led to the recommendation of variable frequency drive (VFD) pumps for the GAC system. The addition of VFD pumps should reduce the frequency of carbon change-outs while preserving adequate HAA5 control in the system.
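Since the two compliance models above are explicit linear functions of TOC, they can be applied directly. The snippet below simply encodes equations (i) and (ii); the example TOC reading is an arbitrary illustrative value, not a measurement from the Maui system.

```python
# The two TOC-based compliance models quoted above, written out so a
# predicted LRAA can be computed from a filter-effluent TOC reading.
def predict_tthm(toc_ppm):
    """Predicted LRAA TTHM (ug/L); R^2 = 0.93 per the study."""
    return 32.5 * toc_ppm + 5.59

def predict_haa5(toc_ppm):
    """Predicted LRAA HAA5 (ug/L); R^2 = 0.52 per the study."""
    return 8.37 * toc_ppm + 12.4

toc = 1.5  # illustrative TOC reading, mg/L (ppm)
print(predict_tthm(toc), predict_haa5(toc))  # compare to 80 and 60 ug/L MCLs
```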
- Date Issued
- 2016
- Identifier
- CFE0006841, ucf:52881
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006841
- Title
- Estimation for the Cox Model with Various Types of Censored Data.
- Creator
-
Riddlesworth, Tonya, Ren, Joan, Mohapatra, Ram, Richardson, Gary, Ni, Liqiang, Schott, James, University of Central Florida
- Abstract / Description
-
In survival analysis, the Cox model is one of the most widely used tools. However, up to now there has not been any published work on the Cox model with complicated types of censored data, such as doubly censored data, partly interval-censored data, etc., even though these types of censored data are encountered in important medical studies of, for example, cancer, heart disease, and diabetes. In this dissertation, we first derive the bivariate nonparametric maximum likelihood estimator (BNPMLE) Fn(t,z) for the joint distribution function F0(t,z) of the survival time T and covariate Z, where T is subject to right censoring, noting that such a BNPMLE Fn has not been studied in the statistical literature. Then, based on this BNPMLE Fn, we derive an empirical likelihood-based (Owen, 1988) confidence interval for the conditional survival probabilities, which is an important and difficult problem in statistical analysis that also has not been studied in the literature. Finally, with this BNPMLE Fn as a starting point, we extend the weighted empirical likelihood method (Ren, 2001 and 2008a) to the multivariate case and obtain a weighted empirical likelihood-based estimation method for the Cox model. This estimation method is given in a unified form and is applicable to the various aforementioned types of censored data.
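For readers who want a baseline to contrast with, the snippet below fits a standard right-censored Cox model with the partial-likelihood estimator in the `lifelines` package. This is the classical setting only, not the weighted empirical likelihood method or the more general censoring schemes developed in the dissertation, and the tiny dataset is fabricated for illustration.

```python
# Classical right-censored Cox fit via lifelines, for orientation only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "T": [5.0, 8.2, 3.1, 9.9, 7.4, 2.5],    # observed time
    "E": [1, 0, 1, 1, 0, 1],                # 1 = event, 0 = right-censored
    "z": [0.2, 1.1, -0.3, 0.8, 0.0, -1.2],  # covariate Z
})
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()  # estimated hazard ratio for covariate z
```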
- Date Issued
- 2011
- Identifier
- CFE0004158, ucf:49051
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004158
- Title
- Counterproductive Work Behaviors, Justice, and Affect: A Meta-Analysis.
- Creator
-
Cochran, Megan, Joseph, Dana, Fritzsche, Barbara, Jentsch, Kimberly, University of Central Florida
- Abstract / Description
-
Counterproductive work behaviors (CWBs) are an expensive phenomenon for organizations, costing billions of dollars collectively each year. Recent research has focused on justice perceptions as predictors of CWBs, but little research has been conducted on the specific types of counterproductive work behaviors (i.e., sabotage, withdrawal, production deviance, abuse, and theft) that result from specific organizational justice perceptions (i.e., distributive, procedural, interpersonal, and informational), or on the mediating effect of state affect. The current paper meta-analyzed the relationships between justice, CWB, and state affect and found that justice was negatively related to dimensions of CWB, and that state positive and negative affect were, respectively, negatively and positively related to CWB dimensions. However, mediation of the relationship between justice and CWB by state affect was inconsistent across justice types and CWB dimensions. These findings suggest that, while managers should maintain an awareness of justice and state affect as individual predictors of CWBs, the current study does not necessarily support the claim that state affect explains the relationship between justice and counterproductive work behavior dimensions.
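To illustrate the meta-analytic machinery referenced above, a minimal sketch of a fixed-effects aggregation of correlations via Fisher's z transform, one common way to pool justice-CWB effect sizes; the study correlations and sample sizes are hypothetical, and the dissertation's actual procedure may differ.

```python
import numpy as np

def fixed_effect_meta_r(rs, ns):
    """Fixed-effects meta-analytic correlation via Fisher's z transform.

    rs : per-study correlations (e.g., justice-CWB effect sizes)
    ns : per-study sample sizes
    Studies are weighted by n - 3, the inverse variance of z.
    """
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)                  # Fisher r-to-z
    w = ns - 3.0
    z_bar = np.sum(w * zs) / np.sum(w)   # precision-weighted mean in z space
    return np.tanh(z_bar)                # back-transform to r

# Hypothetical studies of procedural justice vs. withdrawal:
print(fixed_effect_meta_r(rs=[-0.22, -0.15, -0.30], ns=[120, 340, 95]))
```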
- Date Issued
- 2014
- Identifier
- CFE0005151, ucf:50689
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005151
- Title
- Photon Statistics in Disordered Lattices.
- Creator
-
Kondakci, Hasan, Saleh, Bahaa, Abouraddy, Ayman, Christodoulides, Demetrios, Mucciolo, Eduardo, University of Central Florida
- Abstract / Description
-
Propagation of coherent waves through disordered media, whether optical, acoustic, or radio waves, results in a spatially redistributed random intensity pattern known as speckle -- a statistical phenomenon. The subject of this dissertation is the statistics of monochromatic coherent light traversing disordered photonic lattices and its dependence on the disorder class, the level of disorder, and the excitation configuration at the input. Throughout the dissertation, two disorder classes are considered, namely, diagonal and off-diagonal disorder. The latter exhibits disorder-immune chiral symmetry -- the appearance of the eigenmodes in skew-symmetric pairs and of the corresponding eigenvalues in opposite-sign pairs. When a disordered photonic lattice, an array of evanescently coupled waveguides, is illuminated with an extended coherent optical field, discrete speckle develops. Numerical simulations and analytical modeling reveal that discrete speckle shows a set of surprising features that are qualitatively indistinguishable in the two disorder classes. First, the fingerprint of transverse Anderson localization -- associated with disordered lattices -- is exhibited in the narrowing of the spatial coherence function. Second, the transverse coherence length (or speckle grain size) freezes upon propagation. Third, the axial coherence depth is independent of the axial position, thereby resulting in a coherence voxel of fixed volume independent of position. When a single lattice site is coherently excited, I discovered that a thermalization gap emerges for light propagating in disordered lattices endowed with disorder-immune chiral symmetry. In these systems, the span of sub-thermal photon statistics is inaccessible to the input coherent light, which -- once the steady state is reached -- always emerges with super-thermal statistics no matter how small the disorder level. A constraint on the input field, required for the chiral symmetry to be activated and the gap to be observed, is formulated. This unique feature enables a new form of photon-statistics interferometry: by exciting two lattice sites with a variable relative phase, as in a traditional two-path interferometer, the excitation symmetry of the chiral mode pairs is judiciously broken and interferometric control over the photon statistics is exercised, spanning the sub-thermal and super-thermal regimes. By considering an ensemble of disorder realizations, this phenomenon is demonstrated experimentally: a deterministic tuning of the intensity fluctuations while the mean intensity remains constant. Finally, I examined the statistics of the emerging light in two different lattice topologies: linear and ring lattices. I showed that the topology dictates the light statistics in the off-diagonal case: for even-sited ring and linear lattices, the electromagnetic field evolves into a single quadrature component, so that the field takes discrete phase values and is non-circular in the complex plane. As a consequence, the statistics become super-thermal. For odd-sited ring lattices, the field becomes random in both quadratures, resulting in sub-thermal statistics. However, this effect is suppressed by the transverse localization of light in lattices with high disorder. In the diagonal case, the lattice topology does not play a role and the transmitted field always acquires random components in both quadratures; hence the phase distribution is uniform in the steady state.
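As an illustration of the setup described above, a minimal sketch of single-site excitation in a tight-binding waveguide array with off-diagonal (coupling) disorder, estimating the normalized intensity correlation over a disorder ensemble; the lattice size, disorder level, and propagation distance are illustrative assumptions, not the dissertation's parameters.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, z, trials = 21, 30.0, 500   # sites, propagation distance, disorder realizations
disorder = 0.5                 # off-diagonal disorder level

out = np.zeros((trials, N))
for t in range(trials):
    # Off-diagonal disorder: random couplings, uniform (zero) site energies.
    C = 1.0 + disorder * (rng.random(N - 1) - 0.5)
    H = np.diag(C, 1) + np.diag(C, -1)
    psi0 = np.zeros(N, complex)
    psi0[N // 2] = 1.0         # single-site excitation at the array center
    psi = expm(-1j * z * H) @ psi0
    out[t] = np.abs(psi) ** 2

# Normalized intensity correlation g2 at the excitation site:
I = out[:, N // 2]
print("g2 =", np.mean(I**2) / np.mean(I)**2)  # 2 -> thermal, >2 -> super-thermal
```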
- Date Issued
- 2015
- Identifier
- CFE0005968, ucf:50786
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005968
- Title
- Microscopic Assessment of Transportation Emissions on Limited Access Highways.
- Creator
-
Abou-Senna, Hatem, Radwan, Ahmed, Abdel-Aty, Mohamed, Al-Deek, Haitham, Cooper, Charles, Johnson, Mark, University of Central Florida
- Abstract / Description
-
On-road vehicles are a major source of transportation carbon dioxide (CO2) greenhouse gas emissions in all the developed countries, and in many of the developing countries of the world. Similarly, several criteria air pollutants are associated with transportation, e.g., carbon monoxide (CO), nitrogen oxides (NOx), and particulate matter (PM). The need to accurately quantify transportation-related emissions from vehicles is essential. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The new United States Environmental Protection Agency (USEPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to develop new software, "VIMIS 1.0" (VISSIM/MOVES Integration Software), to facilitate the integration process. This research presents a microscopic examination of five key transportation parameters (traffic volume, speed, truck percentage, road grade, and temperature) on a 10-mile stretch of Interstate 4 (I-4) test bed prototype, an urban limited-access highway corridor in Orlando, Florida. The analysis was conducted utilizing VIMIS 1.0 and an advanced custom design technique, the D-Optimality and I-Optimality criteria, to identify active factors and to ensure precision in estimating the regression coefficients as well as the response variable. The analysis of the experiment identified the optimal settings of the key factors and resulted in the development of Micro-TEM (Microscopic Transportation Emissions Meta-Model). The main purpose of Micro-TEM is to serve as a substitute model for predicting transportation emissions on limited-access highways to an acceptable degree of accuracy in lieu of running simulations using a traffic model and integrating the results in an emissions model. Furthermore, significant emission rate reductions were observed from the experiment on the modeled corridor, especially for speeds between 55 and 60 mph, while maintaining up to 80% and 90% of the freeway's capacity. However, vehicle activity characterization in terms of speed was shown to have a significant impact on the emission estimation approach. Four different approaches were further examined to capture the environmental impacts of vehicular operations on the modeled test bed prototype. First, at the most basic level, emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link driving schedules (LDS), and second-by-second operating mode distributions (OPMODE). This research analyzed how the various approaches affect predicted emissions of CO, NOx, PM, and CO2. The results demonstrated that obtaining accurate and comprehensive operating mode distributions on a second-by-second basis improves emission estimates. Specifically, emission rates were found to be highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, frequent braking/coasting, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Additionally, model applications and mitigation scenarios were examined on the modeled corridor to evaluate the environmental impacts in terms of vehicular emissions and, at the same time, validate the developed model, Micro-TEM. Mitigation scenarios included the future implementation of managed lanes (ML) along with the general use lanes (GUL) on the I-4 corridor, the currently implemented variable speed limits (VSL) scenario, as well as a hypothetical restricted truck lane (RTL) scenario. Results of the mitigation scenarios showed an overall speed improvement on the corridor, which resulted in an overall reduction in emissions and emission rates when compared to the existing condition (EX) scenario, and specifically on a link-by-link basis for the RTL scenario. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
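To illustrate why a one-average-speed estimate and a second-by-second estimate can diverge, a minimal sketch comparing the two under a hypothetical convex emission-rate curve; the rate function and driving schedule are invented for illustration and are not MOVES outputs or the dissertation's data.

```python
import numpy as np

def co2_rate_g_per_s(speed_mph):
    """Hypothetical convex emission-rate curve (g/s vs. speed). MOVES lookups
    are far more detailed; this only illustrates why averaging matters."""
    v = np.asarray(speed_mph, float)
    return 2.0 + 0.002 * (v - 55.0) ** 2

# Hypothetical second-by-second speed trace with a stop-and-go episode.
trace = np.concatenate([np.full(60, 60.0), np.linspace(60, 5, 30),
                        np.full(30, 5.0), np.linspace(5, 60, 30)])

lds = co2_rate_g_per_s(trace).sum()                # second-by-second estimate
avg = co2_rate_g_per_s(trace.mean()) * len(trace)  # one-average-speed estimate
print(f"second-by-second: {lds:.0f} g, average-speed: {avg:.0f} g")
```

With a convex rate curve, the average-speed estimate misses the emissions spike during deceleration and idling, which is consistent with the sensitivity to stop-and-go cycles reported above.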
- Date Issued
- 2012
- Identifier
- CFE0004777, ucf:49788
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004777
- Title
- The Identification and Segmentation of Astrocytoma Prior to Critical Mass, by means of a Volumetric/Subregion Regression Analysis of Normal and Neoplastic Brain Tissue.
- Creator
-
Higgins, Lyn, Hughes, Charles, Morrow, Patricia Bockelman, Bagci, Ulas, Lisle, Curtis, University of Central Florida
- Abstract / Description
-
As the underlying cause of Glioblastoma Multiforme (GBM) is presently unclear, this research implements a new approach to identifying and segmenting plausible instances of GBM prior to critical mass. Grade-IV astrocytoma, or GBM, is an aggressive and malignant cancer arising from star-shaped glial cells, or astrocytes, which functionally assist in the support and protection of neurons within the central nervous system and spinal cord. Our motivation for researching the ability to recognize GBM is that the underlying cause of the mutation is presently unclear, with the consequence that GBM is only detectable through a combination of MRI and CT brain scans together with a resection biopsy. Since astrocytoma only becomes evident at critical mass, when the cellular structure of the neoplasm becomes visible within the image, this research seeks to achieve earlier identification and segmentation of the neoplasm by evaluating the malignant area via a volumetric voxel approach to removing noise artifacts and analyzing voxel differentials. In order to investigate neoplasm continuity, a differential approach has been implemented utilizing a multi-polynomial/multi-domain regression algorithm, ultimately providing a graphical and mathematical analysis of the differentials within critical-mass and non-critical-mass images. Given these augmentations to MRI and CT image rectification, we theorize that our approach will improve astrocytoma recognition and segmentation, along with achieving greater accuracy in diagnostic evaluations of the malignant area.
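As a conceptual illustration of the multi-domain regression idea described above, a minimal sketch that fits separate polynomials over subregions of a 1-D voxel intensity profile and reports per-domain residuals, so that a localized intensity differential stands out; the function, profile, and all parameters are hypothetical stand-ins, not the dissertation's algorithm.

```python
import numpy as np

def multi_domain_polyfit(profile, n_domains=4, degree=3):
    """Fit a separate polynomial over each subregion ('domain') of a 1-D voxel
    intensity profile and return per-domain residual energy. Large residuals
    flag regions whose continuity deviates from the local polynomial fit."""
    segments = np.array_split(np.asarray(profile, float), n_domains)
    residuals = []
    for seg in segments:
        x = np.arange(seg.size)
        coeffs = np.polyfit(x, seg, degree)
        residuals.append(float(np.sum((seg - np.polyval(coeffs, x)) ** 2)))
    return residuals

# Hypothetical profile: smooth tissue with a localized anomaly in one domain.
rng = np.random.default_rng(1)
profile = np.sin(np.linspace(0, 3, 200)) + 0.02 * rng.standard_normal(200)
profile[120:140] += 0.5                  # simulated intensity differential
print(multi_domain_polyfit(profile))     # the anomalous domain stands out
```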
- Date Issued
- 2018
- Identifier
- CFE0007336, ucf:52111
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007336
- Title
- Adaptive Architectural Strategies for Resilient Energy-Aware Computing.
- Creator
-
Ashraf, Rizwan, DeMara, Ronald, Lin, Mingjie, Wang, Jun, Jha, Sumit, Johnson, Mark, University of Central Florida
- Abstract / Description
-
Reconfigurable logic or Field-Programmable Gate Array (FPGA) devices have the ability to dynamically adapt the computational circuit based on user-specified or operating-condition requirements. Such hardware platforms are utilized in this dissertation to develop adaptive techniques for achieving reliable and sustainable operation while autonomously meeting these requirements. In particular, the properties of resource uniformity and in-field reconfiguration via on-chip processors are exploited to implement Evolvable Hardware (EHW). EHW utilizes genetic algorithms to realize logic circuits at runtime, as directed by the objective function. However, the size of problems solved using EHW, as compared with traditional approaches, has been limited to relatively compact circuits. This is due to the increase in complexity of the genetic algorithm with increasing circuit size. To address this research challenge of scalability, the Netlist-Driven Evolutionary Refurbishment (NDER) technique was designed and implemented herein to enable on-the-fly permanent fault mitigation in FPGA circuits. NDER has been shown to achieve refurbishment of relatively large benchmark circuits as compared to related works. Additionally, Design Diversity (DD) techniques, which aid such evolutionary refurbishment techniques, are proposed, and the efficacy of various DD techniques is quantified and evaluated. Similarly, there exists a growing need for adaptable logic datapaths in custom-designed nanometer-scale ICs, for ensuring operational reliability in the presence of Process, Voltage, and Temperature (PVT) and transistor-aging variations owing to decreased feature sizes for electronic devices. Without such adaptability, excessive design guardbands are required to maintain the desired integration and performance levels. To address these challenges, the circuit-level technique of Self-Recovery Enabled Logic (SREL) was designed herein. At design time, vulnerable portions of the circuit identified using conventional Electronic Design Automation tools are replicated to provide post-fabrication adaptability via intelligent techniques. In-situ timing sensors are utilized in a feedback loop to activate suitable datapaths based on current conditions that optimize performance and energy consumption. Primarily, SREL is able to mitigate the timing degradation caused by transistor-aging effects in sub-micron devices by using power gating to reduce the stress induced on active elements. As a result, fewer guardbands need to be included to achieve comparable performance levels, which leads to considerable energy savings over the operational lifetime. The need for energy-efficient operation in current computing systems has given rise to Near-Threshold Computing, as opposed to the conventional approach of operating devices at nominal voltage. In particular, the goal of the exascale computing initiative in High Performance Computing (HPC) is to achieve 1 EFLOPS under a power budget of 20 MW. However, this comes at the cost of increased reliability concerns, such as an increase in performance variations and soft errors. This has given rise to increased resiliency requirements for HPC applications in terms of ensuring functionality within given error thresholds while operating at lower voltages. My dissertation research devised techniques and tools to quantify the effects of radiation-induced transient faults in distributed applications on large-scale systems. A combination of compiler-level code transformation and instrumentation is employed for runtime monitoring to assess the speed and depth of application state corruption as a result of fault injection. Finally, fault propagation models are derived for each HPC application that can be used to estimate the number of corrupted memory locations at runtime. Additionally, the tradeoffs between performance and vulnerability and the causal relations between compiler optimizations and application vulnerability are investigated.
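To illustrate the fault-injection methodology described above, a minimal sketch that flips one bit in a toy application state and tracks how the corruption spreads through a stencil update; this is a simplified stand-in for the compiler-assisted instrumentation, and all names and parameters are assumptions.

```python
import numpy as np

def inject_bitflip(state, rng):
    """Flip one random exponent bit of a random float64 element, emulating a
    radiation-induced transient fault in application memory (a simplified
    stand-in for the instrumentation described above)."""
    idx = int(rng.integers(state.size))
    bit = int(rng.integers(52, 63))   # exponent bits guarantee visible damage
    state.view(np.uint64).flat[idx] ^= np.uint64(1) << np.uint64(bit)
    return idx, bit

# Assess corruption depth: inject, then let a toy stencil update propagate it.
rng = np.random.default_rng(2)
state = np.ones(1024)
inject_bitflip(state, rng)
for _ in range(10):                   # each step widens the corrupted region
    state = 0.5 * (np.roll(state, 1) + np.roll(state, -1))
print("corrupted locations:", int(np.count_nonzero(~np.isclose(state, 1.0))))
```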
- Date Issued
- 2015
- Identifier
- CFE0006206, ucf:52889
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006206