Current Search: distribution
- Title
- SPEEDES: A CASE STUDY OF SPACE OPERATIONS.
- Creator
- Paruchuri, Amith; Rabelo, Luis; University of Central Florida
- Abstract / Description
- This thesis describes the application of parallel simulation techniques to represent the structured functional parallelism present within the Space Shuttle Operations Flow using the Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES), an object-oriented multi-computing architecture. SPEEDES is a unified parallel simulation environment that allocates events over multiple processors to achieve simulation speedup. Its optimistic processing capability minimizes simulation lag behind wall-clock time, or multiples of real time. SPEEDES accommodates increases in process complexity with additional parallel computing nodes that share the processing load. This thesis focuses on the process of translating a model of Space Shuttle Operations from a procedural, single-processor approach to a process-driven, object-oriented, distributed-processor approach. The processes are depicted by several classes created to represent the operations at the space center. The reference model is the existing Space Shuttle Model created in ARENA by NASA and UCF in 2001. A systematic approach was used for this translation: a reduced version of the ARENA model was created and implemented as a SPEEDES prototype in C++. The prototype was systematically augmented to reflect the entire Space Shuttle Operations Flow, then verified, validated, and implemented. (A minimal event-loop sketch follows this record.)
- Date Issued
- 2005
- Identifier
- CFE0000330, ucf:46286
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000330
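To make the event-scheduling idea concrete, here is a minimal sequential discrete-event loop in Python. It is only a sketch of the core that SPEEDES parallelizes across processors and augments with optimistic rollback; the "orbiter processing" activity and all timings are invented for illustration, not taken from the thesis.

```python
# Minimal sequential discrete-event loop: a time-ordered event queue whose
# handlers may schedule follow-on events. Hypothetical activity names.
import heapq

def run(events, horizon):
    """events: list of (time, name, handler) tuples; handlers return new events."""
    queue = list(events)
    heapq.heapify(queue)
    while queue:
        time, name, handler = heapq.heappop(queue)
        if time > horizon:
            break
        for new_event in handler(time):   # handlers schedule future work
            heapq.heappush(queue, new_event)

# Toy "orbiter processing" activity that re-schedules itself every 30 time units.
def processing(t):
    print(f"t={t:5.1f}: orbiter processing cycle")
    return [(t + 30.0, "processing", processing)]

run([(0.0, "processing", processing)], horizon=100.0)
```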
- Title
- Cooperative Control and Advanced Management of Distributed Generators in a Smart Grid.
- Creator
- Maknouninejad, Ali; Qu, Zhihua; Lotfifard, Saeed; Haralambous, Michael; Wu, Xinzhang; Kutkut, Nasser; University of Central Florida
- Abstract / Description
- The smart grid is more than just smart meters. Future smart grids are expected to include a high penetration of distributed generations (DGs), most of which will consist of renewable energy sources, such as solar or wind energy. It is believed that the high penetration of DGs will result in reduced power losses, an improved voltage profile, meeting future load demand, and optimized use of non-conventional energy sources. However, more serious problems will arise if a decent control mechanism is not exploited. An improperly managed high PV penetration may cause voltage profile disturbance, conflict with conventional network protection devices, interfere with transformer tap changers, and, as a result, cause network instability. Indeed, it is feasible to organize DGs in a microgrid structure connected to the main grid through a point of common coupling (PCC). Microgrids are natural innovation zones for the smart grid because of their scalability and flexibility. A proper organization and control of the interaction between the microgrid and the smart grid is a challenge. Cooperative control makes it possible to organize different agents in a networked system to act as a group and realize the designated objectives. Cooperative control has already been applied to autonomous vehicles, and this work investigates its application to controlling the DGs in a microgrid. The microgrid power objectives are set by a higher-level control, and the application of cooperative control makes it possible for the DGs to utilize a low-bandwidth communication network and realize the objectives. Initially, the basics of the application of DG cooperative control are formulated. This includes organizing all the DGs of a microgrid to satisfy an active and a reactive power objective. Then, the cooperative control is further developed by the introduction of clustering DGs into several groups to satisfy multiple power objectives. Then, cooperative distributed optimization is introduced to optimally dispatch the reactive power of the DGs to realize a unified microgrid voltage profile and minimize the losses. This distributed optimization is a gradient-based technique, and it is shown that when the communication is down, it reduces to a form of droop. However, this gradient-based droop exhibits superior transient response, eliminating the overshoots caused by conventional droop. Meanwhile, the interaction between each microgrid and the main grid can be formulated as a Stackelberg game. The main grid, as the leader, by offering a proper energy price to the microgrid, minimizes its cost and secures the power. This not only optimizes the economic interests of both sides, the microgrids and the main grid, but also yields improved power flow and shaves the peak power. As such, a smart grid may treat microgrids as individually dispatchable loads or generators. (A minimal consensus sketch follows this record.)
- Date Issued
- 2013
- Identifier
- CFE0004712, ucf:49817
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004712
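A hedged sketch of the cooperative-control idea from the abstract above: DGs agree on a common utilization ratio over a sparse communication graph so that their summed output meets a power objective set by a higher-level control. The graph, gains, and ratings are invented stand-ins, not the thesis's actual controller.

```python
# Leader-pinned consensus over a sparse communication graph: each DG nudges
# its utilization ratio toward its neighbors', while a pinned leader steers
# the group toward the active-power objective. All numbers are hypothetical.
import numpy as np

ratings = np.array([100.0, 80.0, 60.0, 40.0])       # DG capacities (kW)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # low-bandwidth links
x = np.array([0.9, 0.3, 0.5, 0.1])                  # initial utilization ratios

target = 140.0            # active-power objective from higher-level control
leader_gain, eps = 0.2, 0.3

for _ in range(300):
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        # consensus term: move toward the average of neighbors' ratios
        x_new[i] += eps * sum(x[j] - x[i] for j in nbrs) / len(nbrs)
    # pinned leader (DG 0) corrects the group toward the power objective
    x_new[0] += leader_gain * (target - ratings @ x) / ratings.sum()
    x = np.clip(x_new, 0.0, 1.0)

print("ratios:", x.round(3), "total power:", round(ratings @ x, 1))
```

At equilibrium all ratios agree and the common value is fixed by the power objective, which is the "fair utilization" behavior the consensus scheme is meant to deliver.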
- Title
- RESEARCHES ON REVERSE LOOKUP PROBLEM IN DISTRIBUTED FILE SYSTEM.
- Creator
- Zhang, Junyao; Wang, Jun; University of Central Florida
- Abstract / Description
- Recent years have witnessed an increasing demand for super data clusters. Super data clusters have reached the petabyte scale and can consist of thousands or tens of thousands of storage nodes at a single site. For this architecture, reliability is becoming a great concern. In order to achieve high reliability, data recovery and node reconstruction are a must. Although extensive research has investigated how to sustain high performance and high reliability in the case of node failures at large scale, the reverse lookup problem, namely finding the list of objects stored on a failed node, remains open. This is especially true for storage systems with high requirements for data integrity and availability, such as scientific research data clusters. Existing solutions are either time consuming or expensive. Meanwhile, replication-based block placement can be used to realize fast reverse lookup; however, such schemes are designed for centralized, small-scale storage architectures. In this thesis, we propose a fast and efficient reverse lookup scheme named the Group-based Shifted Declustering (G-SD) layout that is able to locate the whole content of the failed node. G-SD extends our previous shifted declustering layout and applies to large-scale file systems. Our mathematical proofs and real-life experiments show that G-SD is a scalable reverse lookup scheme that is up to one order of magnitude faster than existing schemes. (An illustrative layout sketch follows this record.)
- Date Issued
- 2010
- Identifier
- CFE0003504, ucf:48970
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003504
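The value of a deterministic layout is that reverse lookup becomes computation instead of metadata search. The sketch below uses a simplified shifted placement function invented for illustration; it conveys the shape of the idea but is not the actual G-SD construction.

```python
# Illustrative stand-in for the G-SD idea: when placement is a deterministic
# function of the block id, a failed node's contents can be enumerated
# directly, with no metadata traversal. Layout parameters are hypothetical.
N = 8   # storage nodes
K = 3   # replicas per block

def place(block_id):
    """Replica r of a block lands on node (block_id + r * shift) % N."""
    shift = 1 + block_id % K          # small nonzero shift keeps replicas on distinct nodes
    return [(block_id + r * shift) % N for r in range(K)]

def reverse_lookup(failed_node, total_blocks):
    """Recompute placement per block; no central block map is consulted."""
    return [b for b in range(total_blocks) if failed_node in place(b)]

print(reverse_lookup(failed_node=5, total_blocks=24))
```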
- Title
- RESOURCE-CONSTRAINT AND SCALABLE DATA DISTRIBUTION MANAGEMENT FOR HIGH LEVEL ARCHITECTURE.
- Creator
- Gupta, Pankaj; Guha, Ratan; University of Central Florida
- Abstract / Description
- In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to interoperate multiple simulations and facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA that is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications, where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than the region-matching, fixed-grid, and dynamic-grid DDM algorithms, as it avoids the quadratic computation step involved in the other algorithms. The simulation results show that the P-Pruning DDM algorithm uses memory at run-time more efficiently and requires fewer multicast groups than the three algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement for it. We also present a performance evaluation study of this resource-efficient algorithm in a memory-constraint environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes memory at run-time more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time. The dynamic P-Pruning algorithm investigates the changes among federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines. We also describe implementation details involved in the integration of the P-Pruning algorithm with FDK and report on our experiences. (A sketch of the quadratic region-matching baseline follows this record.)
- Date Issued
- 2007
- Identifier
- CFE0001949, ucf:47447
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001949
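For context, this is the quadratic region-matching baseline that P-Pruning is designed to avoid: every update region is tested against every subscription region for overlap. The 2-D routing space and region bounds below are hypothetical.

```python
# Brute-force DDM region matching: O(|updates| * |subscriptions|) overlap
# tests, one per pair. Axis-aligned regions in a toy 2-D routing space.
from itertools import product

def overlaps(a, b):
    """Regions overlap iff their extents overlap in every dimension."""
    return all(lo_a < hi_b and lo_b < hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(a, b))

# Each region is ((x_lo, x_hi), (y_lo, y_hi)).
updates = [((0, 4), (0, 4)), ((6, 9), (6, 9))]
subscriptions = [((3, 7), (2, 5)), ((8, 10), (0, 2))]

matches = [(u, s) for u, s in product(updates, subscriptions) if overlaps(u, s)]
for u, s in matches:
    print("route updates from", u, "to subscriber", s)
```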
- Title
- APPLICATION OF STATISTICAL METHODS IN RISK AND RELIABILITY.
- Creator
- Heard, Astrid; Pensky, Marianna; University of Central Florida
- Abstract / Description
- The dissertation considers construction of confidence intervals for a cumulative distribution function F(z) and its inverse at some fixed points z and u on the basis of an i.i.d. sample where the sample size is relatively small. The sample is modeled as having the flexible Generalized Gamma distribution with all three parameters unknown. This approach can be viewed as an alternative to nonparametric techniques, which do not specify the distribution of X and lead to less efficient procedures. The confidence intervals are constructed by objective Bayesian methods and use the Jeffreys noninformative prior. Performance of the resulting confidence intervals is studied via Monte Carlo simulations and compared to the performance of nonparametric confidence intervals based on a binomial proportion. In addition, techniques for change-point detection are analyzed and further evaluated via Monte Carlo simulations. The effect of a change point on the interval estimators is studied both analytically and via Monte Carlo simulations. (A minimal coverage-simulation sketch follows this record.)
- Date Issued
- 2005
- Identifier
- CFE0000736, ucf:46565
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000736
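A minimal sketch of the kind of Monte Carlo coverage study the abstract describes, assuming arbitrary Generalized Gamma parameters: draw repeated samples and estimate how often a nonparametric binomial-proportion (Wald) interval for F(z) covers the true value. The thesis's Bayesian intervals with the Jeffreys prior are not reproduced here.

```python
# Monte Carlo coverage of a Wald interval for F(z), with data drawn from a
# Generalized Gamma distribution. Shape parameters are arbitrary stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, c = 2.0, 1.5                 # Generalized Gamma shape parameters (assumed)
z, n, reps = 1.0, 30, 5000
true_F = stats.gengamma.cdf(z, a, c)

hits = 0
for _ in range(reps):
    sample = stats.gengamma.rvs(a, c, size=n, random_state=rng)
    p_hat = np.mean(sample <= z)                    # empirical estimate of F(z)
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)  # 95% Wald half-width
    hits += (p_hat - half <= true_F <= p_hat + half)

print(f"true F(z) = {true_F:.3f}, estimated coverage = {hits / reps:.3f}")
```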
- Title
- Where's the Boss? The Influences of Emergent Team Leadership Structures on Team Outcomes in Virtual and Distributed Environments.
- Creator
- Porter, Marissa; Salas, Eduardo; Jentsch, Florian; Joseph, Dana; Burke, Shawn; University of Central Florida
- Abstract / Description
- The influence of leadership on team success has been noted extensively in research and practice. However, as organizations move to flatter, team-based structures with workers communicating virtually across space and time, our conceptualization of team leadership must change to meet these new workplace demands. Given this need, the current study aims to begin untangling the effects of distribution and virtuality on team leadership structure and on subsequent team outcomes that may be affected by differences in conceptualizing such structures. Specifically, the goals of this study were threefold. First, this study investigated how the physical distribution of members may impact perceptions of team leadership structure, depending on the virtual tool type utilized for communicating. Second, this study explored how different indices of team leadership structure may have different influences on team outcomes, specifically in terms of conceptualizing the degree to which multiple members are perceived as collectively enacting particular leadership behaviors via a network density metric, and conceptualizing team leadership in regards to the specialization of members into particular behavioral roles, as captured via role distance and role variety indices. Finally, this study expanded on current research regarding team leadership structure by examining how the collective enactment of particular leadership behaviors (i.e., structuring/planning, problem solving, supporting the social climate) may facilitate specific teamwork processes (i.e., transition, action, interpersonal), leading to enhanced team performance, as well as how leadership role specialization may impact overall teamwork and team performance. Findings from a laboratory study of 188 teams participating in a simulated decision-making task reveal a significant interaction for the influences of physical distribution and virtuality on perceptions of leadership structure, such that less distributed teams (i.e., those with fewer isolated members) were more likely to perceive their distributed members as participating in the collective enactment of necessary leadership responsibilities when communicating via richer media (i.e., videoconferencing, teleconferencing) than less rich media (i.e., instant messaging). However, virtuality and distribution did not impact the degree to which members were perceived as specializing in a particular leadership role, or the overall variety of leadership roles being performed. In terms of team outcomes, the perceived collective enactment of leadership emanating from distributed team members significantly predicted teamwork, while the perceived collective leadership of collocated members did not have a significant impact. Specifically, greater distributed team member involvement in the collective enactment of structuring/planning leadership positively impacted team transition processes, while the collective enactment of supporting the social climate positively predicted team interpersonal processes. Although the relationship between perceived leadership role specialization, in terms of role distance and role variety, and team performance was mediated by overall teamwork processes as expected, leadership role specialization had a negative impact on overall teamwork.
Finally, while team action processes did not serve to mediate the relationship between perceived problem-solving network density and team performance, team transition processes mediated the relationship between the collective enactment of structuring/planning for distributed members and team performance. The collective enactment of supporting the social climate by distributed team members and its relationship to team performance was also mediated by interpersonal teamwork processes. Together, these results reveal the importance of considering context, specifically virtuality and physical distribution, when designing, developing, and maintaining effective team leadership, teamwork, and team performance. Furthermore, they provide unique insight regarding how different configurations of leadership may be possible in teams. Study limitations, practical implications, and recommendations for future research and practice are further discussed.
- Date Issued
- 2013
- Identifier
- CFE0004911, ucf:49603
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004911
- Title
- WATER QUALITY VARIATIONS DURING NITRIFICATION IN DRINKING WATER DISTRIBUTION SYSTEMS.
- Creator
- Webb, David W.; Taylor, James S.; University of Central Florida
- Abstract / Description
- This thesis documents the relationships among the major water quality parameters during a nitrification episode. Nitrification unexpectedly occurred in a chloraminated pilot drinking water distribution system operating with a 4.0 mg/L as Cl2 residual dosed at 4.5:1 Cl2:NH3-N. Surface, ground, and sea water were treated and disinfected with monochloramines to produce finished water quality similar to regional utility water quality. PVC, galvanized, unlined cast iron, and lined iron pipes were harvested from regional distribution systems and used to build eighteen pilot distribution systems (PDSs). The PDSs were operated at a 5-day hydraulic residence time (HRT) and ambient temperatures. As seasonal temperatures increased, the rate of monochloramine dissipation increased until effluent PDS residuals were zero. PDS effluent water quality parameters (chloramine residual, dissolved oxygen, heterotrophic plate counts (HPCs), pH, alkalinity, and nitrogen species) were monitored and found to vary as expected from the stoichiometry associated with theoretical biological reactions, excepting alkalinity. Nitrification was confirmed in the PDSs. The occurrence in the PDSs was not isolated to any particular source water. Ammonia for nitrification came from degraded chloramines, which were common among all finished waters. Consistent with nitrification trends, dissolved oxygen consumption, ammonia consumption, and nitrite and nitrate production were clearly observed in the PDS bulk water quality profiles. Trends in pH and alkalinity were less apparent. To control nitrification, the residual was increased to 4.5 mg/L as Cl2 at a 5:1 Cl2:NH3-N dosing ratio, and the HRT was reduced from 5 to 2 days. Elimination of the nitrification episode was achieved after a 1-week free chlorine burn.
- Date Issued
- 2004
- Identifier
- CFE0000063, ucf:46118
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000063
- Title
- THE EFFECT OF INTERNET BOOKING ON THE CENTRAL FLORIDA LODGING MARKET OVER THE PAST FIVE YEARS.
- Creator
- Smith, Scott; Rompf, Paul; University of Central Florida
- Abstract / Description
- This study reviews the effect of Internet bookings on the Central Florida lodging market over the past five years. As the number of lodging accommodations booked directly by the consumer over the Internet continues to increase, the ramifications brought about by this emerging distribution channel have not been fully investigated or interpreted. This study observes how Internet-enabled distribution channel bookings have trended in occupancy and average daily rate in the Central Florida lodging market over the past five years. Specifically, the author segmented the survey respondents into the lodging product service categories of budget, moderate, upscale, and luxury to analyze whether there were any observable trends between the categories over the past five years. The author also segmented the respondents into the lodging geographic sub-categories of airport, downtown, suburban, and resort/attractions area to determine whether there were any observable trends between the sub-classifications over the past five years. Utilizing a descriptive approach, the author determined that each product service category and lodging sub-classification displayed continuous growth in Internet-enabled distribution channel bookings over the five-year period of 1999-2003. The author also observed that each product service category consistently offered a discounted Internet distribution channel rate over the same period. This analysis suggests that lodging properties in the Central Florida market are discounting their Internet-enabled distribution channel rates in comparison to the property's overall average rate. At the same time, these properties appear to be increasing their Internet-enabled distribution channel bookings as a percentage of overall bookings.
- Date Issued
- 2004
- Identifier
- CFE0000322, ucf:46302
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000322
- Title
- PADE APPROXIMANTS AND ONE OF ITS APPLICATIONS.
- Creator
- Fowe, Tame-Kouontcho; Mohapatra, Ram; University of Central Florida
- Abstract / Description
- This thesis is concerned with a brief summary of the theory of Padé approximants and one of its applications to finance. Proofs of most of the theorems are omitted, and many developments could not be mentioned due to the vastness of the field of Padé approximations. We provide references to research papers and books that contain exhaustive treatments of the subject. This thesis is mainly divided into two parts. In the first part we derive a general expression for the Padé approximants and some of the results that relate to the work in the second part of the thesis. Aitken's method for accelerating the convergence of series is highlighted as a form of Padé approximation. We explore the criteria for convergence of a series approximated by Padé approximants and obtain its relationship to numerical analysis with the help of the Crank-Nicholson method. The second part shows how Padé approximants can be a smooth method to model the term structure of interest rates using stochastic processes and the no-arbitrage argument. Padé approximants have been considered by physicists to be appropriate for approximating large classes of functions. This fact is used here to compare Padé approximants with very low indices and two parameters to interest rate variations provided by the Federal Reserve System in the United States. (A short numerical example follows this record.)
- Date Issued
- 2007
- Identifier
- CFE0001682, ucf:47217
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001682
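A short numerical illustration of a Padé approximant using SciPy's pade helper: the [2/2] approximant of exp(x) built from its Taylor coefficients, compared with the truncated Taylor polynomial. This is a generic demo; the thesis's interest-rate application is not reproduced.

```python
# Build the [2/2] Pade approximant of exp(x) from its Taylor coefficients
# and compare it with the order-4 Taylor polynomial at a few points.
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x): 1, 1, 1/2!, 1/3!, 1/4!
an = [1.0, 1.0, 1/2, 1/6, 1/24]
p, q = pade(an, 2)          # numerator/denominator returned as poly1d

for x in (0.5, 1.0, 2.0):
    taylor = sum(c * x**k for k, c in enumerate(an))
    print(f"x={x}: exp={np.exp(x):.5f}  taylor={taylor:.5f}  pade={p(x)/q(x):.5f}")
```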
- Title
- BIOSTABILITY IN DRINKING WATER DISTRIBUTION SYSTEMS: STUDY AT PILOT-SCALE.
- Creator
- Le Puil, Michael; Randall, Andrew A.; University of Central Florida
- Abstract / Description
- Biostability and related issues (e.g., nitrification) were investigated for 18 months in 18 pilot distribution systems (PDSs), under various water quality scenarios. This study specifically investigated the impact of steady-state water changes on HPC levels in chlorinated and chloraminated distribution systems. Chlorination was more effective than chloramination in reducing HPC levels (1-2 log difference). There was a rapid increase in HPC corresponding to the change in steady-state water quality, which was observed in all PDSs. A modeling effort demonstrated that HPC levels reached a maximum within five days after the water quality change and returned to the initial level ten days after the change. Since alkalinity was used as a tracer of the steady-state water quality change, the time to reach maximum HPC was related to a mixing model using alkalinity as a surrogate, which confirmed that the alkalinity transition was complete in approximately eight days. Biostability was assessed by HPC levels, since no coliform were ever detected. It was observed that HPC levels would be above four logs if the residual dropped below 0.1-0.2 mg/L as Cl2, which is below the regulatory minimum of 0.6 mg/L as Cl2. Therefore, bacterial proliferation is more likely to be controlled in distribution systems as long as residual regulatory requirements are met. An empirical modeling effort showed that residual, pipe material, and temperature were the most important parameters in controlling HPC levels in distribution systems, residual being the only parameter that can practically be used by utilities to control biological stability in their distribution systems. Use of less reactive pipes (i.e., with less chlorine demand) is recommended in order to prevent residual depletion and subsequent bacterial proliferation. This study investigated biofilm growth simultaneously with suspended growth under a wide range of water quality scenarios and pipe materials. It was found that increasing the degree of treatment led to reduced biofilm density, except for reverse-osmosis-treated groundwater, which exerted the highest biofilm density of all waters. Biofilm densities on corrodible, highly reactive materials (e.g., unlined cast iron and galvanized steel) were significantly greater than on PVC and lined cast iron. Biofilm modeling showed that attached bacteria were most affected by temperature and much less by HRT, bulk HPC, and residual. The model predicts biofilms will always be active in environments common to drinking water distribution systems. As American utilities do not control biofilms with extensive and costly AOC reduction, they must maintain a strong residual to maintain biological integrity and stability in drinking water distribution systems. Nitrite and nitrate were considered the most suitable indicators for utilities to predict the onset of a nitrification episode in the distribution system bulk liquid. DO and ammonia were correlated with production of nitrite and nitrate and therefore could be related to nitrification. However, since ammonia and DO consumption can be caused by phenomena other than nitrification (e.g., oxidation by disinfectant to nitrite, and reduction at the pipe wall, respectively), these parameters are not considered indicators of nitrification. Ammonia-Oxidizing Bacteria (AOB) densities in the bulk phase correlated well with nitrite and nitrate production, reinforcing the fact that nitrite and nitrate are good monitoring tools to predict nitrification.
Chloramine residual proved to be helpful in reducing nitrification in the bulk phase but had little effect on biofilm densities. As DO has been related to bacterial proliferation and nitrification, it can be a useful and inexpensive option for utilities in predicting biological instability, if monitored in conjunction with residual, nitrite, and nitrate. Autotrophic (i.e., AOB) and heterotrophic (i.e., HPC) organisms were correlated in the bulk phase and biofilms.
- Date Issued
- 2004
- Identifier
- CFE0000111, ucf:46183
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000111
- Title
- A HOLISTIC USABILITY FRAMEWORK FOR DISTRIBUTED SIMULATION SYSTEMS.
- Creator
- Dawson, Jeffrey; Rabelo, Luis; University of Central Florida
- Abstract / Description
- This dissertation develops a holistic usability framework for distributed simulation systems (DSSs). The framework is developed considering relevant research in human-computer interaction, computer science, technical writing, engineering, management, and psychology. The methodology used consists of three steps: (1) framework development, (2) surveys of users to validate and refine the framework and to determine attribute weights, and (3) application of the framework to two real-world systems. The concept of a holistic usability framework for DSSs arose during a project to improve the usability of the Virtual Test Bed, a prototypical DSS, and the framework is partly a result of that project. In addition, DSSs at Ames Research Center were studied for additional insights. The framework has six dimensions: end user needs, end user interface(s), programming, installation, training, and documentation. The categories of participants in this study include managers, researchers, programmers, end users, trainers, and trainees. The first survey was used to obtain qualitative and quantitative data to validate and refine the framework; attributes that failed the validation test were dropped from the framework. A second survey was used to obtain attribute weights. The refined framework was used to evaluate two existing DSSs, measuring their holistic usabilities. Ensuring that the needs of the variety of types of users who interact with the system during design, development, and use are met is important to launching a successful system. Adequate consideration of system usability along the several dimensions in the framework will not only ensure system success but also increase productivity, lower life cycle costs, and result in a more pleasurable working experience for the people who work with the system. (A toy weighted-score sketch follows this record.)
- Date Issued
- 2006
- Identifier
- CFE0001256, ucf:46906
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001256
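One plausible reading of how survey-derived attribute weights could combine into a holistic usability score is a weighted average over the framework's six dimensions. The weights and ratings below are invented for illustration; the dissertation derives its actual weights from user surveys.

```python
# Toy weighted-average usability score over the framework's six dimensions.
weights = {           # survey-derived importance weights (hypothetical)
    "end user needs": 0.25, "end user interface": 0.25, "programming": 0.15,
    "installation": 0.10, "training": 0.15, "documentation": 0.10,
}
ratings = {           # per-dimension scores for one DSS, on a 1-5 scale (invented)
    "end user needs": 4.2, "end user interface": 3.8, "programming": 3.0,
    "installation": 4.5, "training": 3.5, "documentation": 2.9,
}

holistic = sum(weights[d] * ratings[d] for d in weights)   # weights sum to 1.0
print(f"holistic usability: {holistic:.2f} / 5")
```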
- Title
- LIBERTARIAN, LIBERAL, AND SOCIALIST CONCEPTS OF DISTRIBUTIVE JUSTICE.
- Creator
- Kassebaum, Daniel; Marien, Daniel; University of Central Florida
- Abstract / Description
- What makes for a just society constitutes one of the most intensely debated subjects among political philosophers. There are many theorists striving to identify principles of justice, and each believes his or her theory to be the best. The literature on this subject is much too voluminous to be canvassed in its entirety here. I will, however, examine the stances and arguments of three key schools of thought shaping the modern discussion of social justice: libertarianism (particularly Robert Nozick and Milton and Rose Friedman), liberal egalitarianism (John Rawls and Ronald Dworkin), and socialism (Karl Marx and John Roemer). Each of these schools articulates sharply contrasting views. These differences create an intriguing debate about what the most just society would look like.
- Date Issued
- 2014
- Identifier
- CFH0004697, ucf:45235
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004697
- Title
- THE DISTRIBUTION, ABUNDANCE, AND HABITAT USE OF THE BIG CYPRESS FOX SQUIRREL, (SCIURUS NIGER AVICENNIA).
- Creator
- Munim, Danielle; Noss, Reed; University of Central Florida
- Abstract / Description
- Human population growth and development reduce the area and quality of natural communities and lead to a reduction of populations of the species associated with them. Certain species can be useful indicators or "focal species" for determining the quality of ecosystem remnants and the required management practices. Tree squirrels are good models for studies on the effects of fragmentation because they depend on mature forests. The Big Cypress fox squirrel (Sciurus niger avicennia), a state-listed Threatened subspecies endemic to south Florida, appears sensitive to habitat fragmentation and fire regime. This research aims to assess the conservation status of the Big Cypress fox squirrel. I documented the current distribution of the fox squirrel by obtaining and mapping occurrence records and through interviews with biologists and other field personnel of public land-managing agencies and private landowners, including golf course managers. Transect sampling was used to survey and sample natural areas and private lands to evaluate the distribution, abundance, and habitat use of fox squirrels. Natural areas and suburban areas appear to support Big Cypress fox squirrels, but individuals are widely distributed and found only in low numbers throughout southwest Florida. The distribution of fox squirrel populations depends on land use and understory height, but not on the size of trees. Fire suppression has resulted in a dense understory in large portions of parks and preserve lands, which is unsuitable for fox squirrels.
- Date Issued
- 2008
- Identifier
- CFE0002276, ucf:47838
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002276
- Title
- Blockchain-Driven Secure and Transparent Audit Logs.
- Creator
- Ahmad, Ashar; Mohaisen, David; Awad, Amro; Zhang, Wei; Posey, Clay; University of Central Florida
- Abstract / Description
- In enterprise business applications, large volumes of data are generated daily, encoding business logic and transactions. Those applications are governed by various compliance requirements, making it essential to provide audit logs to store, track, and attribute data changes. In traditional audit log systems, logs are collected and stored in a centralized medium, making them prone to various forms of attacks and manipulations, including physical access and remote vulnerability exploitation attacks, and eventually allowing for unauthorized data modification, threatening the guarantees of audit logs. Moreover, such systems, given their centralized nature, are characterized by a single point of failure. To harden the security of audit logs in enterprise business applications, in this work we explore the design space of blockchain-driven secure and transparent audit logs. We highlight the possibility of ensuring stronger security and functional properties with a generic blockchain system for audit logs, realize this generic design through BlockAudit, which addresses both security and functional requirements, optimize BlockAudit through a multi-layered design in BlockTrail, and explore the design space further by assessing the functional and security properties of the consensus algorithms through comprehensive evaluations. The first component of this work is BlockAudit, a design blueprint that enumerates structural, functional, and security requirements for blockchain-based audit logs. BlockAudit uses a consensus-driven approach to replicate audit logs across multiple application peers to avoid a single point of failure. BlockAudit also uses the Practical Byzantine Fault Tolerance (PBFT) protocol to achieve consensus over the state of the audit log data. We evaluate the performance of BlockAudit using event-driven simulations, abstracted from IBM Hyperledger. Through the performance evaluation of BlockAudit, we pinpoint a need for high scalability and high throughput. We achieve those requirements by exploring various design optimizations to the flat structure of BlockAudit, inspired by real-world application characteristics. Namely, enterprise business applications often operate across non-overlapping geographical hierarchies, including cities, counties, states, and federations. Leveraging that, we applied a similar transformation to BlockAudit to fragment the flat blockchain system into layers of codependent hierarchies capable of processing transactions in parallel. Our hierarchical design, called BlockTrail, reduced the storage and search complexity for blockchains substantially while increasing the throughput and scalability of the audit log system. We prototyped BlockTrail on a custom-built blockchain simulator and analyzed its performance under varying transaction and network sizes, demonstrating its advantages over BlockAudit. A recurring limitation in both BlockAudit and BlockTrail is the use of the PBFT consensus protocol, which has high complexity and low scalability. Moreover, the performance of our proposed designs was only evaluated in computer simulations, which sidestepped the complexities of a real-world blockchain system. To address those shortcomings, we created a generic cloud-based blockchain testbed capable of executing five well-known consensus algorithms: Proof-of-Work, Proof-of-Stake, Proof-of-Elapsed-Time, Clique, and PBFT.
For each consensus protocol, we instrumented our auditing system with various benchmarks to measure the latency, throughput, and scalability, highlighting the trade-offs among the different protocols. (A minimal hash-chain sketch follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007773, ucf:52375
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007773
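A minimal sketch of the tamper-evidence property that motivates blockchain audit logs: each entry commits to its predecessor by hash, so any in-place edit breaks the chain. Replication and consensus (PBFT and the rest) are out of scope here; field names and events are hypothetical.

```python
# Append-only hash-chained audit log: tampering with any entry invalidates
# every hash from that point on, which verify() detects.
import hashlib, json, time

def entry_hash(body):
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(log, event):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "event": event, "prev": prev}
    entry["hash"] = entry_hash({k: entry[k] for k in ("ts", "event", "prev")})
    log.append(entry)

def verify(log):
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "event", "prev")}
        if e["prev"] != prev or e["hash"] != entry_hash(body):
            return False
        prev = e["hash"]
    return True

log = []
append(log, "user 42 updated record 7")
append(log, "user 42 deleted record 9")
print(verify(log))                     # True
log[0]["event"] = "nothing happened"   # in-place tampering
print(verify(log))                     # False: the chain is broken
```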
- Title
- Research on Improving Reliability, Energy Efficiency and Scalability in Distributed and Parallel File Systems.
- Creator
- Zhang, Junyao; Wang, Jun; Zhang, Shaojie; Lee, Jooheung; University of Central Florida
- Abstract / Description
- With the increasing popularity of cloud computing and "Big Data" applications, current data centers are often required to manage petabytes or exabytes of data. To store this huge amount of data, thousands or tens of thousands of storage nodes are required at a single site. This imposes three major challenges for storage system designers: (1) Reliability---node failure in these data centers is a normal occurrence rather than a rare situation, which makes data reliability a great concern. (2) Energy efficiency---a data center can consume up to 100 times more energy than a standard office building, and more than 10% of this energy consumption can be attributed to storage systems; thus, reducing the energy consumption of the storage system is key to reducing the overall consumption of the data center. (3) Scalability---with the continuously increasing size of data, maintaining the scalability of the storage systems is essential; that is, the expansion of the storage system should be completed efficiently and without limitations on the total number of storage nodes or performance. This thesis proposes three ways to improve these three key features of current large-scale storage systems. Firstly, we define the problem of "reverse lookup", namely finding the list of objects (blocks) for a failed node. As the first step of failure recovery, this process is directly related to the recovery/reconstruction time. While existing solutions use metadata traversal or data-distribution-reversing methods for reverse lookup, which are either time consuming or expensive, a deterministic block placement can achieve fast and efficient reverse lookup. However, deterministic placement solutions are designed for centralized, small-scale storage architectures such as RAID. Due to their lack of scalability, they cannot be directly applied to large-scale storage systems. We propose Group-Shifted Declustering (G-SD), a deterministic data layout for multi-way replication. G-SD addresses the scalability issue of our previous Shifted Declustering layout and supports fast and efficient reverse lookup. Secondly, we pose the question: how should performance, energy, and recovery be balanced in degradation mode for an energy-efficient storage system? While extensive research has proposed trading performance for energy efficiency under normal mode, the system enters degradation mode when node failure occurs, in which node reconstruction is initiated. This very process requires a number of disks to be spun up and a substantial amount of I/O bandwidth, which compromises not only energy efficiency but also performance. Without considering the I/O bandwidth contention between recovery and performance, current energy-proportional solutions cannot answer this question accurately. This thesis presents PERP, a mathematical model to minimize the energy consumption of a storage system with respect to performance and recovery. PERP answers this problem by providing the accurate number of nodes and the assigned recovery bandwidth at each time frame. Thirdly, current distributed file systems, such as the Google File System (GFS) and the Hadoop Distributed File System (HDFS), employ a pseudo-random method for replica distribution and a centralized lookup table (block map) to record all replica locations. This lookup table requires a large amount of memory and consumes a considerable amount of CPU/network resources on the metadata server.
With the booming size of "Big Data", the metadata server becomes a scalability and performance bottleneck. While current approaches such as HDFS Federation attempt to "horizontally" extend scalability by allowing multiple metadata servers, we believe a more promising optimization option is to "vertically" scale up each metadata server. We propose Deister, a novel block management scheme that builds on top of a deterministic declustering distribution method, Intersected Shifted Declustering (ISD). Thus both replica distribution and location lookup can be achieved without a centralized lookup table. (A lookup-table-free location sketch follows this record.)
- Date Issued
- 2015
- Identifier
- CFE0006238, ucf:51082
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006238
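A sketch of the contrast the abstract draws between a centralized block map and lookup-table-free location: with a deterministic layout, replica locations are recomputed on demand from the block id with O(1) state. The round-robin layout function is a toy stand-in, not the ISD construction behind Deister.

```python
# Deterministic location vs. centralized block map: same answers, very
# different memory footprint. Layout parameters are hypothetical.
N, K = 16, 3                      # nodes, replicas per block

def locate(block_id):
    """Deterministic: derive all replica nodes from the id alone, O(1) state."""
    return [(block_id * K + r) % N for r in range(K)]

# Centralized alternative: an explicit map whose size grows with the data.
block_map = {b: locate(b) for b in range(1_000)}

assert block_map[42] == locate(42)   # identical result, no table required
print("block 42 lives on nodes", locate(42))
```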
- Title
- Exploring new boundaries in team cognition: Integrating knowledge in distributed teams.
- Creator
- Zajac, Stephanie; Salas, Eduardo; Bowers, Clint; Burke, Shawn; University of Central Florida
- Abstract / Description
- Distributed teams continue to emerge in response to the complex organizational environments brought about by globalization, technological advancements, and the shift toward a knowledge-based economy. These teams are comprised of members who hold the disparate knowledge necessary to take on cognitively demanding tasks. However, knowledge coordination between team members who are not co-located is a significant challenge, often resulting in process loss and decrements to the effectiveness of team-level knowledge structures. The current effort explores the configuration dimension of distributed teams, and specifically how subgroup formation based on geographic location may impact the effectiveness of a team's transactive memory system and subsequent team process. In addition, the role of task cohesion as a buffer to negative intergroup interaction is explored.
- Date Issued
- 2014
- Identifier
- CFE0005449, ucf:50393
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005449
- Title
- EXPLORING CONFIDENCE INTERVALS IN THE CASE OF BINOMIAL AND HYPERGEOMETRIC DISTRIBUTIONS.
- Creator
- Mojica, Irene; Pensky, Marianna; University of Central Florida
- Abstract / Description
- The objective of this thesis is to examine one of the most fundamental and yet important methodologies used in statistical practice: interval estimation of the probability of success in a binomial distribution. The textbook confidence interval for this problem is known as the Wald interval, as it comes from the Wald large-sample test for the binomial case. It is generally acknowledged that the actual coverage probability of the standard interval is poor for values of p near 0 or 1. Moreover, it has recently been documented that the coverage properties of the standard interval can be inconsistent even if p is not near the boundaries. For this reason, one would like to study the variety of methods for constructing confidence intervals for the unknown probability p in the binomial case. The present thesis accomplishes this task by presenting several methods for constructing confidence intervals for the unknown binomial probability p. It is well known that the hypergeometric distribution is related to the binomial distribution. In particular, if the size of the population, N, is large and the number of items of interest k is such that k/N tends to p as N grows, then the hypergeometric distribution can be approximated by the binomial distribution. Therefore, in this case, one can use the confidence intervals constructed for p in the case of the binomial distribution as a basis for constructing confidence intervals for the unknown value k = pN. The goal of this thesis is to study this approximation and to point out several confidence intervals which are designed specifically for the hypergeometric distribution. In particular, this thesis considers several confidence intervals which are based on estimation of a binomial proportion as well as Bayesian credible sets based on various priors. (A short interval-comparison sketch follows this record.)
- Date Issued
- 2011
- Identifier
- CFE0003919, ucf:48740
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003919
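A hedged illustration of two of the interval families discussed above: the Wald interval and the exact (Clopper-Pearson) interval, the latter via the standard beta-quantile formulation. The inputs are arbitrary examples, not data from the thesis.

```python
# Wald vs. Clopper-Pearson intervals for a binomial proportion.
from scipy.stats import norm, beta

def wald(k, n, conf=0.95):
    p = k / n
    z = norm.ppf(1 - (1 - conf) / 2)
    half = z * (p * (1 - p) / n) ** 0.5
    return max(0.0, p - half), min(1.0, p + half)

def clopper_pearson(k, n, conf=0.95):
    a = 1 - conf
    lo = 0.0 if k == 0 else beta.ppf(a / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - a / 2, k + 1, n - k)
    return lo, hi

# Near the boundary (small k) the Wald interval degenerates, which is the
# poor coverage the abstract refers to; the exact interval holds up.
for k, n in [(1, 20), (10, 20)]:
    w, cp = wald(k, n), clopper_pearson(k, n)
    print(f"k={k}, n={n}: Wald=({w[0]:.3f}, {w[1]:.3f}), "
          f"Clopper-Pearson=({cp[0]:.3f}, {cp[1]:.3f})")
```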
- Title
- A Multiagent Q-learning-based Restoration Algorithm for Resilient Distribution System Operation.
- Creator
- Hong, Jungseok; Sun, Wei; Zhou, Qun; Zheng, Qipeng; University of Central Florida
- Abstract / Description
- Natural disasters, human errors, and technical issues have caused disastrous blackouts in power systems and resulted in enormous economic losses. Moreover, distributed energy resources have been integrated into distribution systems, which brings extra uncertainty and challenges to system restoration. Therefore, the restoration of power distribution systems requires more efficient and effective methods to provide resilient operation. In the literature, approaches using Q-learning and multiagent systems (MAS) to restore power systems have limited applicability to real systems because they do not consider power system operation constraints. In order to adapt to system condition changes quickly, a restoration algorithm using Q-learning and MAS, together with a combination method and a battery algorithm, is proposed in this study. The developed algorithm considers voltage and current constraints while finding the system switching configuration that maximizes load pick-up after faults happen in the given system. The algorithm consists of three parts. First, it finds switching configurations using Q-learning. Second, the combination algorithm works as a back-up plan in case the solution from Q-learning violates system constraints. Third, the battery algorithm is applied to determine the charging or discharging schedule of battery systems. The obtained switching configuration provides restoration solutions without violating system constraints. Furthermore, the algorithm can adjust switching configurations after the restoration; for example, when renewable output changes, the algorithm provides an adjusted solution to avoid violating system constraints. The proposed algorithm has been tested on the modified IEEE 9-bus system using a real-time digital simulator. Simulation results demonstrate that the algorithm offers an efficient and effective restoration strategy for resilient distribution system operation. (A generic Q-learning skeleton follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0006746, ucf:51856
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006746
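A generic tabular Q-learning skeleton of the kind such a restoration algorithm builds on: states are switch configurations, actions toggle one switch, and the reward favors restored load while penalizing a constraint violation. The 3-switch toy feeder and reward numbers are invented; the thesis's MAS structure, combination method, and battery algorithm are not modeled here.

```python
# Tabular Q-learning over switch configurations. Toggling all three switches
# closed is treated as a (pretend) constraint violation the agent must avoid.
import random
from collections import defaultdict

N_SWITCHES, EPISODES = 3, 2000
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = defaultdict(float)                        # Q[(state, action)]

def reward(state):
    restored = sum(state)                     # toy proxy: closed switches pick up load
    violation = 5 if state == (1, 1, 1) else 0  # pretend closing all trips a limit
    return restored - violation

def step(state, a):
    s = list(state)
    s[a] ^= 1                                 # toggle one switch
    return tuple(s)

for _ in range(EPISODES):
    state = (0, 0, 0)                         # post-fault: everything open
    for _ in range(6):
        if random.random() < eps:             # epsilon-greedy exploration
            a = random.randrange(N_SWITCHES)
        else:
            a = max(range(N_SWITCHES), key=lambda x: Q[(state, x)])
        nxt = step(state, a)
        best_next = max(Q[(nxt, x)] for x in range(N_SWITCHES))
        Q[(state, a)] += alpha * (reward(nxt) + gamma * best_next - Q[(state, a)])
        state = nxt

# Greedy rollout of the learned policy (two switching operations):
state = (0, 0, 0)
for _ in range(2):
    state = step(state, max(range(N_SWITCHES), key=lambda x: Q[(state, x)]))
print("restored configuration:", state)       # closes two switches, avoids (1,1,1)
```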
- Title
- A FRAMEWORK TO MODEL COMPLEX SYSTEMS VIA DISTRIBUTED SIMULATION: A CASE STUDY OF THE VIRTUAL TEST BED SIMULATION SYSTEM USING THE HIGH LEVEL ARCHITECTURE.
- Creator
- Park, Jaebok; Sepulveda, Jose; University of Central Florida
- Abstract / Description
- As the size, complexity, and functionality of the systems we need to model and simulate continue to increase, benefits such as interoperability and reusability enabled by distributed discrete-event simulation are becoming extremely important in many disciplines, not only military but also many engineering disciplines such as distributed manufacturing, supply chain management, and enterprise engineering. In this dissertation we propose a distributed simulation framework for the development of modeling and simulation of complex systems. The framework is based on the interoperability of a simulation system enabled by distributed simulation and on gateways that enable Commercial Off-the-Shelf (COTS) simulation packages to interconnect to the distributed simulation engine. In the case study of modeling the Virtual Test Bed (VTB), the framework has been designed as a distributed simulation to facilitate the integrated execution of different simulations (shuttle process model, Monte Carlo model, delay and scrub model), each of which addresses different mission components, as well as other non-simulation applications (Weather Expert System and Virtual Range). Although these models were developed independently and at various times, their original purposes have been seamlessly integrated, and they interact with each other through the Run-time Infrastructure (RTI) to simulate shuttle launch related processes. This study found that with the framework the defining properties of complex systems, interaction and emergence, are realized, and that software life cycle models (including the spiral model and prototyping) can be used as metaphors to manage the complexity of modeling and simulation of the system. The system of systems (a complex system is intrinsically a "system of systems") continuously evolves to accomplish its goals; during the evolution, subsystems coordinate with one another and adapt to environmental factors such as policies, requirements, and objectives. In the case study we first demonstrate how legacy models developed in COTS simulation languages/packages and non-simulation tools can be integrated to address a complicated system of systems. We then describe the techniques that can be used to display the state of remote federates in a local federate in the High Level Architecture (HLA) based distributed simulation using COTS simulation packages. (A toy publish/subscribe sketch follows this record.)
- Date Issued
- 2005
- Identifier
- CFE0000534, ucf:46416
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000534
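A toy publish/subscribe sketch of the remote-state reflection described above: a local federate mirrors attributes published by a remote federate through a broker standing in for the RTI. Class and attribute names are hypothetical; this is not the HLA/RTI API.

```python
# Minimal pub/sub stand-in for RTI-mediated state reflection between federates.
from collections import defaultdict

class Broker:                                # stand-in for the RTI
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)
    def publish(self, topic, attrs):
        for cb in self.subs[topic]:
            cb(attrs)

class Federate:
    def __init__(self, name, broker):
        self.name, self.broker = name, broker
        self.mirror = {}                     # local view of remote state
    def reflect(self, attrs):                # invoked when remote state changes
        self.mirror.update(attrs)
    def update(self, topic, attrs):
        self.broker.publish(topic, attrs)

rti = Broker()
viewer = Federate("VirtualRange", rti)
rti.subscribe("shuttle.process", viewer.reflect)

sim = Federate("ShuttleProcessModel", rti)
sim.update("shuttle.process", {"milestone": "rollout", "sim_time": 412.0})
print(viewer.mirror)    # {'milestone': 'rollout', 'sim_time': 412.0}
```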
- Title
- EXTENDING DISTRIBUTED TEMPORAL PROTOCOL LOGIC TO A PROOF BASED FRAMEWORK FOR AUTHENTICATION PROTOCOLS.
- Creator
- Muhammad, Shahabuddin; Guha, Ratan; University of Central Florida
- Abstract / Description
- Running critical applications, such as e-commerce, in a distributed environment requires assurance of the identities of the participants communicating with each other. Providing such assurance in a distributed environment is a difficult task. The goal of a security protocol is to overcome the vulnerabilities of a distributed environment by providing a secure way to disseminate critical information into the network. However, designing a security protocol is itself an error-prone process. In addition to employing an authentication protocol, one also needs to make sure that the protocol successfully achieves its authentication goals. The Distributed Temporal Protocol Logic (DTPL) provides a language for formalizing both local and global properties of distributed communicating processes, and it can be effectively applied to security protocol analysis as a model checker. Although a model checker can determine flaws in a security protocol, it cannot provide proof of the security properties of a protocol. In this research, we extend the DTPL language and construct a set of axioms by transforming the unified framework of SVO logic into DTPL. This results in a deductive-style, proof-based framework for the verification of authentication protocols. The proposed framework represents authentication protocols and concisely proves their security properties. We formalize various features essential for achieving authentication, such as message freshness, key association, and source association, in our framework. Since analyzing security protocols greatly depends upon associating a received message with its source, we separately analyze the source association axioms, translate them into our framework, and extend the idea to public-key protocols. Developing a proof-based framework in temporal logic gives us another verification tool in addition to the existing model checker: a security property of a protocol can be verified using our approach, or a design flaw can be identified using the model checker. In this way, we can analyze a security protocol from both perspectives while benefiting from the representation of distributed temporal protocol logic. A challenge-response strategy provides a higher level of abstraction for authentication protocols. Here, we also develop a set of formulae using the challenge-response strategy to analyze a protocol at an abstract level. This abstraction has been adapted from the authentication tests of the graph-theoretic strand space method. First, we represent a protocol in logic, and then we use the challenge-response strategy to develop authentication tests. These tests help us find possible attacks on authentication protocols by investigating the originator of received messages: identifying an unintended originator of a received message indicates the existence of possible flaws in a protocol. We have applied our strategy to several well-known protocols and have successfully identified the attacks. (A minimal challenge-response sketch follows this record.)
- Date Issued
- 2007
- Identifier
- CFE0001799, ucf:47281
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001799
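A minimal challenge-response sketch in the spirit of the authentication tests described above: the verifier issues a fresh nonce (freshness), and only a principal holding the shared key can produce the matching response (source association). This is a generic illustration, not the DTPL formalism itself.

```python
# Shared-key challenge-response: fresh nonce out, keyed MAC of the nonce back.
import hmac, hashlib, secrets

KEY = secrets.token_bytes(32)                 # shared between principals A and B

def challenge():
    return secrets.token_bytes(16)            # fresh, unpredictable nonce

def respond(key, nonce):
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key, nonce, response):
    return hmac.compare_digest(respond(key, nonce), response)

nonce = challenge()                           # A -> B: nonce
resp = respond(KEY, nonce)                    # B -> A: HMAC(key, nonce)
print(verify(KEY, nonce, resp))               # True: only a key holder responds
print(verify(KEY, challenge(), resp))         # False: a stale response fails freshness
```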