-
-
Title
-
Machine Learning from Casual Conversation.
-
Creator
-
Mohammed Ali, Awrad, Sukthankar, Gita, Wu, Annie, Boloni, Ladislau, University of Central Florida
-
Abstract / Description
-
Human social learning is an effective process that has inspired many existing machine learning techniques, such as learning from observation and learning by demonstration. In this dissertation, we introduce another form of social learning, Learning from a Casual Conversation (LCC). LCC is an open-ended machine learning system in which an artificially intelligent agent learns from an extended dialog with a human. Our system enables the agent to incorporate changes into its knowledge base based on the human's conversational text input. The system emulates how humans learn from each other through dialog. LCC closes a gap in current research, which has focused on teaching specific tasks to computer agents. Furthermore, LCC aims to provide an easy way to enhance the knowledge of the system without requiring the involvement of a programmer. The system does not require the user to enter specific information; instead, the user can chat naturally with the agent, and LCC identifies the inputs that contain information relevant to its knowledge base during learning. LCC's architecture consists of multiple sub-systems combined to perform the task. Its learning component can add new knowledge to the knowledge base, confirm existing information, and/or update existing information found to be related to the user input. The LCC system's functionality was assessed using several evaluation methods, including tests performed by the developer as well as by 130 human test subjects. Thirty of those subjects interacted directly with the system and completed a 13-item survey about their experience using LCC. A second group of 100 subjects evaluated the dialogue logs of a subset of the first group.
The collected results were all found to be acceptable and within the range of our expectations.
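The add/confirm/update behavior of the learning component described above can be sketched as a toy key-value knowledge base. This is an illustrative sketch only; the class and method names are hypothetical and not taken from the LCC implementation.

```python
# Illustrative sketch: a minimal add/confirm/update cycle over a toy
# key-value knowledge base. All names here are hypothetical, not LCC's.

class KnowledgeBase:
    def __init__(self):
        self.facts = {}  # topic -> statement

    def learn_from_input(self, topic, statement):
        """Return which learning action the conversational input triggered."""
        known = self.facts.get(topic)
        if known is None:
            self.facts[topic] = statement   # add new knowledge
            return "added"
        if known == statement:
            return "confirmed"              # existing information confirmed
        self.facts[topic] = statement       # related information updated
        return "updated"

kb = KnowledgeBase()
print(kb.learn_from_input("capital_of_france", "Paris"))          # added
print(kb.learn_from_input("capital_of_france", "Paris"))          # confirmed
print(kb.learn_from_input("capital_of_france", "Paris, France"))  # updated
```

A real system would first classify whether a free-text utterance carries knowledge-base-relevant content at all; this sketch assumes that step has already produced a (topic, statement) pair.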
-
Date Issued
-
2019
-
Identifier
-
CFE0007503, ucf:52634
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007503
-
-
Title
-
Sensemaking In Honors Scheduling.
-
Creator
-
Rowland, James, Musambira, George, Hastings, Sally, Katt, James, University of Central Florida
-
Abstract / Description
-
Honors courses offer students unique opportunities such as smaller class sizes, applied use of knowledge, and closer mentorship with faculty members. In some observed cases, courses have regularly been cancelled every term due to low enrollment. When courses are frequently canceled, the honors program's ability to continue offering courses to students is affected. Using Weick's work on sensemaking and principles of analyzing organizational culture, the study addressed how honors students are impacted by course cancellations and how they communicate about that impact. Through two focus groups with a total of eleven participants, information was gathered on how the students constructed and communicated their identity as honors students; their individual campus environments, and how those environments helped shape the communication culture they were part of; how they make scheduling decisions by extracting plausible cues from the communication they receive about course scheduling; and the impact of course cancellations on their honors experience. In defining honors and its incorporation into their identity, the students described being in honors as a challenge to make themselves the best they can be, which included being part of an engaging community of scholars and of use to the community around them. The two focus groups noted differences in how each campus provided a slightly different organizational culture: one more familiar and inviting, the other massive and resource-filled, with diversity in the type of students encountered. Course scheduling messages were often extracted from the course scheduling website, with little communication about what would be offered beyond the immediate term. Students had to gather additional data from fellow students, faculty, and the honors office.
Students often searched for cues regarding the time and location of a class, its impact on the degree program, and whether the class would push the student in new and innovative ways to provide deeper engagement with the material. Students were often impacted by course cancellations and the added stress of having to find replacement courses to avoid extending the time to complete the degree or risking financial repercussions from the loss of financial aid. These stressors provide cues that can influence the degree of challenge a student is willing to accept, or even degree completion.
-
Date Issued
-
2017
-
Identifier
-
CFE0006652, ucf:51249
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006652
-
-
Title
-
Research on High-performance and Scalable Data Access in Parallel Big Data Computing.
-
Creator
-
Yin, Jiangling, Wang, Jun, Jin, Yier, Lin, Mingjie, Qi, GuoJun, Wang, Chung-Ching, University of Central Florida
-
Abstract / Description
-
To facilitate big data processing, many dedicated data-intensive storage systems, such as the Google File System (GFS), the Hadoop Distributed File System (HDFS), and the Quantcast File System (QFS), have been developed. Currently, HDFS [20] is the state-of-the-art and most popular open-source distributed file system for big data processing. It is widely deployed as the bedrock for many big data processing systems/frameworks, such as the script-based Pig system, MPI-based parallel programs, graph processing systems, and Scala/Java-based Spark frameworks. These systems/applications employ parallel processes/executors to speed up data processing within scale-out clusters. Job or task schedulers in parallel big data applications such as mpiBLAST and ParaView can maximize the usage of computing resources such as memory and CPU by tracking resource consumption/availability for task assignment. However, since these schedulers do not take the distributed I/O resources and global data distribution into consideration, the data requests from parallel processes/executors will unfortunately be served in an imbalanced fashion on the distributed storage servers. These imbalanced access patterns among storage nodes arise because (a) unlike conventional parallel file systems, which use striping policies to distribute data evenly among storage nodes, data-intensive file systems such as HDFS store each data unit, referred to as a chunk or block file, in several copies under a relatively random placement policy, which can result in an uneven data distribution among storage nodes; and (b) under the data retrieval policy in HDFS, the more data a storage node contains, the higher the probability that the node will be selected to serve data. Therefore, on nodes serving multiple chunk files, the data requests from different processes/executors will compete for shared resources such as hard disk heads and network bandwidth.
Because of this, the makespan of the entire program can be significantly prolonged and the overall I/O performance degrades. The first part of my dissertation seeks to address aspects of these problems by creating an I/O middleware system and designing matching-based algorithms to optimize data access in parallel big data processing. To address the problem of remote data movement, we develop an I/O middleware system, called SLAM, which allows MPI-based analysis and visualization programs to benefit from locality reads, i.e., each MPI process can access its required data from a local or nearby storage node. This can greatly improve execution performance by reducing the amount of data movement over the network. Furthermore, to address the problem of imbalanced data access, we propose a method called Opass, which models the data read requests issued by parallel applications to cluster nodes as a graph in which edge weights encode the demands on load capacity. We then employ matching-based algorithms to map processes to data so that data access is balanced. The final part of my dissertation focuses on optimizing sub-dataset analyses in parallel big data processing. Our proposed methods can benefit different analysis applications with various computational requirements, and experiments on different cluster testbeds show their applicability and scalability.
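The balanced-mapping idea behind Opass can be illustrated with a simplified greedy stand-in for its matching-based algorithms: each process's read request is assigned to the currently least-loaded node holding a replica of its chunk, so requests spread out instead of piling onto data-rich nodes. All process and node names below are invented.

```python
# Simplified greedy stand-in for matching-based balanced data access:
# assign each read request to the least-loaded replica holder so far.
# (Opass itself uses graph matching; this only conveys the intuition.)

def balance_reads(requests):
    """requests: list of (process, [replica_nodes]) -> {process: node}."""
    load = {}
    assignment = {}
    for proc, replicas in requests:
        # pick the replica node with the fewest requests assigned so far
        node = min(replicas, key=lambda n: load.get(n, 0))
        assignment[proc] = node
        load[node] = load.get(node, 0) + 1
    return assignment

# invented cluster: n1 holds many replicas, as HDFS's random policy allows
requests = [
    ("p0", ["n1", "n2"]),
    ("p1", ["n1", "n3"]),
    ("p2", ["n1", "n2"]),
    ("p3", ["n1", "n3"]),
]
print(balance_reads(requests))
```

With a naive "always read the first replica" policy, all four requests would hit n1; here the maximum per-node load stays at two.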
-
Date Issued
-
2015
-
Identifier
-
CFE0006021, ucf:51008
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006021
-
-
Title
-
A framework to generate smart manufacturing system configurations using agents and optimization.
-
Creator
-
Nagadi, Khalid, Rabelo, Luis, Lee, Gene, Elshennawy, Ahmad, Ahmad, Ali, University of Central Florida
-
Abstract / Description
-
Manufacturing is a crucial element in the global economy. During the last decade, the national manufacturing sector lost nearly 30% of its workforce and investment. Consequently, the quality of domestic goods, global market share, and manufacturing capabilities have declined. Therefore, innovative ways to optimize the use of Smart Manufacturing Systems (SMS) are required to form a new manufacturing era. This research presents a framework to optimize the design of an SMS. This includes determining the machines that can perform the job efficiently, the quantity of those machines, and the messaging system required for sharing information. Multiple methods are combined to form the framework. An expert machine selection matrix identifies the required machines, and a machine parameter matrix defines the specifications of those machines, while business process modeling and notation (BPMN) captures the process plan in an object-oriented fashion. In addition, agent unified modeling language (AUML) guides the application of message sequence diagrams and statecharts. Finally, the configuration is obtained from a hybrid simulation model: agent-based modeling captures the behavior of the machines, while discrete event simulation mimics the process flow. A case study of a manufacturing system is used to verify the framework. The framework shows positive outcomes in supporting upper management in the planning phase of establishing an SMS or evaluating an existing one.
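The hybrid simulation idea, machine agents carrying their own state while a discrete-event loop advances time through completion events, can be sketched minimally as follows. The machines, speeds, and jobs are invented; this is not the dissertation's actual model.

```python
# Toy hybrid sketch: machine "agents" are entries in an event queue keyed by
# the time they become free; the discrete-event loop jumps between those
# job-completion events rather than ticking fixed time steps.
import heapq

def simulate(jobs, machine_speeds):
    """Assign each job to the next machine to free up; return finish times."""
    # event queue of (time_machine_becomes_free, machine_id)
    free_at = [(0.0, m) for m in range(len(machine_speeds))]
    heapq.heapify(free_at)
    done = {}
    for job, work in jobs:
        t, m = heapq.heappop(free_at)            # next machine to become idle
        finish = t + work / machine_speeds[m]    # agent's own speed attribute
        done[job] = (m, finish)
        heapq.heappush(free_at, (finish, m))
    return done

# invented scenario: machine 1 is twice as fast as machine 0
print(simulate([("j1", 4.0), ("j2", 2.0), ("j3", 4.0)], [1.0, 2.0]))
```

A configuration study of the kind described would sweep machine counts and parameters and compare makespans; this sketch only shows the simulation core.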
-
Date Issued
-
2016
-
Identifier
-
CFE0006540, ucf:51311
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006540
-
-
Title
-
Traversing the Terrain: A Least Cost Analysis on Intersite Causeways in the Maya Region.
-
Creator
-
Rivas, Alexander, Chase, Arlen, Chase, Diane, Walker, John, University of Central Florida
-
Abstract / Description
-
The study of ancient Maya causeways is crucial for understanding Maya social and spatial organization. Archaeologists have been interested in Maya causeways for decades, specifically in documenting their locations. More recently, Geographic Information Systems (GIS) have been used to understand the spatial organization of archaeological sites. GIS analyses of ancient Maya causeways, however, have been very limited. This thesis evaluates ancient Maya causeways through GIS analysis. Specifically, five intersite causeway systems are examined: the Mirador Basin, Yaxuna-Coba-Ixil, Uxmal-Nohpat-Kabah, Ake-Izamal-Kantunil, and Uci-Kancab-Ukana-Cansahcab. These causeway systems were evaluated using least-cost paths based on the terrain. In this thesis, I argue that the intersite causeways do not follow a least-cost path based on terrain and that the purpose of these roads varies between sites and regions.
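A least-cost path of the kind computed in such GIS analyses can be sketched with Dijkstra's algorithm over a small cost raster, where each cell's value stands in for terrain difficulty. The 4x4 grid below is invented for illustration; real analyses run on digital elevation models.

```python
# Minimal least-cost-path computation: Dijkstra over a cost raster, with
# the cost of a move taken as the value of the cell being entered.
import heapq

def least_cost_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}   # include the start cell's cost
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]          # cost of entering the neighbor
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

terrain = [
    [1, 1, 9, 1],
    [9, 1, 9, 1],
    [9, 1, 1, 1],
    [9, 9, 9, 1],
]
print(least_cost_path(terrain, (0, 0), (3, 3)))  # threads through the cheap cells
```

Comparing such a computed path against the surveyed causeway alignment is the core of the argument that the roads do not simply minimize terrain cost.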
-
Date Issued
-
2014
-
Identifier
-
CFE0005404, ucf:50426
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005404
-
-
Title
-
The Development and Testing of a Measurement System to Assess Intensive Care Unit Team Performance.
-
Creator
-
Dietz, Aaron, Salas, Eduardo, Jentsch, Florian, Sims, Valerie, Rosen, Michael, Burke, Shawn, University of Central Florida
-
Abstract / Description
-
Teamwork is essential for ensuring the quality and safety of healthcare delivery in the intensive care unit (ICU). Complex procedures are conducted by a diverse team of clinicians with unique roles and responsibilities. Information about care plans and goals must also be developed, communicated, and coordinated across multiple disciplines and transferred effectively between shifts and personnel. The intricacies of routine care are compounded during emergency events, which require ICU teams to adapt to rapidly changing patient conditions while facing intense time pressure and conditional stress. Realities such as these emphasize the need for teamwork skills in the ICU. The measurement of teamwork serves a number of different purposes, including routine assessment, directing feedback, and evaluating the impact of improvement initiatives. Yet no behavioral marker system exists in critical care for quantifying teamwork across multiple task types. This study contributes to the state of science and practice in critical care by taking a (1) theory-driven, (2) context-driven, and (3) psychometrically-driven approach to the development of a teamwork measure. The development of the marker system for the current study considered the state of science and practice surrounding teamwork in critical care, the application of behavioral marker systems across the healthcare community, and interviews with front-line clinicians. The ICU behavioral marker system covers four core teamwork dimensions especially relevant to critical care teams: Communication, Leadership, Backup and Supportive Behavior, and Team Decision Making, with each dimension subsuming other relevant subdimensions. This study provided an initial assessment of the reliability and validity of the marker system by focusing on a subset of teamwork competencies relevant to a subset of team tasks. Two raters scored the performance of 50 teams along six subdimensions during rounds (n=25) and handoffs (n=25).
In addition to calculating traditional forms of reliability evidence [intraclass correlations (ICCs) and percent agreement], this study modeled the systematic variance in ratings associated with raters, instances of teamwork, subdimensions, and tasks by applying generalizability (G) theory. G theory was also employed to provide evidence that the marker system adequately distinguishes the teamwork competencies targeted for measurement. The marker system differentiated teamwork subdimensions when the data for rounds and handoffs were combined and when the data were examined separately by task (G coefficient greater than 0.80). Additionally, variance associated with instances of teamwork, subdimensions, and their interaction constituted the greatest proportion of variance in scores, while variance associated with rater and task effects was minimal. That said, a large percentage of residual error remained across analyses. Single-measures ICCs were fair to good when the data for rounds and handoffs were combined, depending on the competency assessed (0.52 to 0.74). The ICCs ranged from fair to good when examining only handoffs (0.47 to 0.69) and fair to excellent when considering only rounds (0.53 to 0.79). Average-measures ICCs were always greater than single measures for each analysis, ranging from good to excellent (overall: 0.69 to 0.85; handoffs: 0.64 to 0.81; rounds: 0.70 to 0.89). In general, the percentage of overall agreement was substandard, ranging from 0.44 to 0.80 across the task analyses. The percentage of scores within a single point, however, was nearly perfect, ranging from 0.80 to 1.00 for the combined data, handoffs, and rounds. The confluence of evidence supported the expectation that the marker system differentiates among teamwork subdimensions. Yet different reliability indices suggested varying levels of confidence in rater consistency depending on the teamwork competency measured.
Because this study applied a psychometric approach, areas for future development and testing to redress these issues were identified. There is also a need to assess the viability of this tool in other research contexts to evaluate its generalizability in places with different norms and organizational policies, as well as for tasks that emphasize different teamwork skills. Further, it is important to increase the number of users able to make assessments through low-cost, easily accessible rater training and guidance materials. Particular emphasis should be given to areas where rater reliability was less than ideal. This would allow future researchers to evaluate team performance, provide developmental feedback, and determine the impact of future teamwork improvement initiatives.
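The two simplest agreement indices reported in the abstract, percent exact agreement and percent of ratings within one scale point, can be sketched for a two-rater design. The rating data below are invented, not the study's.

```python
# Two basic inter-rater agreement indices for paired ratings on an
# ordinal scale: exact agreement and agreement within one scale point.
# (ICCs and G-theory variance components need a stats package; this
# sketch covers only the simple percent-agreement measures.)

def agreement(r1, r2):
    n = len(r1)
    exact = sum(a == b for a, b in zip(r1, r2)) / n
    within_one = sum(abs(a - b) <= 1 for a, b in zip(r1, r2)) / n
    return exact, within_one

# invented scores from two raters on an eight-team sample, 1-5 scale
rater1 = [3, 4, 2, 5, 4, 3, 2, 4]
rater2 = [3, 5, 2, 4, 4, 2, 1, 4]
exact, within_one = agreement(rater1, rater2)
print(exact, within_one)  # 0.5 exact, 1.0 within one point
```

The pattern mirrors the study's finding: exact agreement can look substandard while within-one-point agreement is nearly perfect, because most disagreements are off by a single scale point.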
-
Date Issued
-
2014
-
Identifier
-
CFE0005482, ucf:50356
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005482
-
-
Title
-
Optimization and design of photovoltaic micro-inverter.
-
Creator
-
Zhang, Qian, Batarseh, Issa, Shen, Zheng, Wu, Xinzhang, Lotfifard, Saeed, Kutkut, Nasser, University of Central Florida
-
Abstract / Description
-
To relieve energy shortage and environmental pollution issues, renewable energy, especially PV energy, has developed rapidly in the last decade. Micro-inverter systems, with advantages in dedicated PV power harvesting, flexible system size, simple installation, and enhanced safety, are the future development trend of PV power generation systems. The double-stage structure, which can realize high efficiency with well-regulated sinusoidal waveforms, is the mainstream for the micro-inverter. This thesis studied a double-stage micro-inverter system. Considering the intermittent nature of PV power, a PFC converter was analyzed to provide additional electrical power to the system: when the solar power is less than the load requires, the PFC can draw power from the utility grid. In the double-stage micro-inverter, the DC/DC stage was realized by an LLC converter, which can achieve soft switching automatically under frequency modulation. However, the LLC converter has a complicated relationship between voltage gain and load, so conventional variable-step P&O (perturb and observe) MPPT techniques for PWM converters are no longer suitable. To solve this problem, a novel MPPT method was proposed to track the MPP efficiently. Simulation and experimental results verified its effectiveness. The DC/AC stage of the micro-inverter was realized by a BCM inverter. With duty cycle and frequency modulation, ZVS was achieved by keeping the inductor current bidirectional in every switching cycle. This technique requires no additional resonant components and can be employed in low-power applications on conventional full-bridge and half-bridge inverter topologies. Three different current-mode control schemes were derived from the basic theory of the proposed technique.
They are referred to as Boundary Current Mode (BCM), Variable Hysteresis Current Mode (VHCM), and Constant Hysteresis Current Mode (CHCM), and their advantages and disadvantages are analyzed in detail. Simulation and experimental results demonstrated the feasibility of the proposed soft-switching technique with the digital control schemes. The PFC converter used a single-stage bi-flyback topology, which combines the advantages of single-stage PFC and the flyback topology, with further advantages in low intermediate bus voltage and current stresses. A digital controller without a current-sampling requirement was proposed based on this topology. To reduce the voltage spike caused by the leakage inductance, a novel snubber cell combining soft-switching and snubber techniques was proposed. Simulation and experimental waveforms showed the same characteristics as the theoretical analysis. In summary, the dissertation analyzed each power stage of the photovoltaic micro-inverter system from efficiency and effectiveness optimization perspectives. Moreover, their advantages were compared carefully with existing topologies and control techniques. Simulation and experimental results were provided to support the theoretical analysis.
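The conventional perturb-and-observe (P&O) MPPT loop that the thesis contrasts with its proposed method can be sketched as follows: perturb the operating voltage, observe the power change, and keep stepping in whichever direction increases power. The PV power curve here is a toy model, not measured panel data.

```python
# Minimal fixed-step perturb-and-observe MPPT loop against a toy PV curve.
# A real tracker perturbs a converter's operating point and measures power.

def pv_power(v):
    """Toy PV power curve with a maximum near v = 30 (invented model)."""
    return max(0.0, v * (60 - v)) / 10.0

def perturb_and_observe(v=20.0, step=1.0, iterations=50):
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step        # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(v_mpp)  # settles into oscillation around the maximum power point
```

The fixed step makes the operating point oscillate around the MPP, which is exactly the behavior variable-step schemes (and the thesis's LLC-specific method) aim to improve on.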
-
Date Issued
-
2013
-
Identifier
-
CFE0005286, ucf:50540
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005286
-
-
Title
-
Differential Games for Multi-Agent Systems under Distributed Information.
-
Creator
-
Lin, Wei, Qu, Zhihua, Simaan, Marwan, Haralambous, Michael, Das, Tuhin, Yong, Jiongmin, University of Central Florida
-
Abstract / Description
-
In this dissertation, we consider differential games for multi-agent systems under distributed information, where every agent is only able to acquire information about the others according to a directed information graph of local communication/sensor networks. Such games arise naturally in many applications, including mobile robot coordination, power system optimization, and multi-player pursuit-evasion games. Since the admissible strategy of each agent has to conform to the information graph constraint, the conventional game strategy design approaches based on Riccati equations are not applicable, because they require all agents to have information about the entire system. Accordingly, game strategy design under distributed information is commonly known to be challenging. Toward this end, we propose novel open-loop and feedback game strategy design approaches for Nash equilibrium and noninferior solutions, with a focus on linear quadratic differential games. For the open-loop design, approximate Nash/noninferior game strategies are proposed by integrating distributed state estimation into the open-loop global-information Nash/noninferior strategies so that, without global information, the distributed game strategies can be made arbitrarily close to, and asymptotically converge over time to, the global-information strategies. For the feedback design, we propose an approach based on the best achievable performance indices, under which the distributed strategies form a Nash equilibrium or noninferior solution with respect to a set of performance indices that are the closest to the original indices. This approach overcomes two issues in the classical optimal output feedback approach: simultaneous optimization and initial-state dependence. The proposed open-loop and feedback design approaches are applied to an unmanned aerial vehicle formation control problem and a multi-pursuer single-evader differential game problem, respectively. Simulation results of several scenarios are presented for illustration.
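The global-information baseline the abstract refers to rests on Riccati equations. As a minimal single-agent illustration, the scalar discrete-time Riccati difference equation can be iterated to a fixed point to obtain an optimal state-feedback gain; the system parameters a, b, q, r below are invented, and the coupled multi-player case is substantially more involved.

```python
# Fixed-point iteration of the scalar discrete-time Riccati equation
#   p = q + a^2 p - (a b p)^2 / (r + b^2 p)
# followed by the optimal LQR gain k, with control law u = -k x.

def solve_scalar_riccati(a, b, q, r, tol=1e-12, max_iter=10000):
    p = q
    for _ in range(max_iter):
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            break
        p = p_next
    gain = a * b * p / (r + b * b * p)   # k = (r + b^2 p)^{-1} b p a
    return p, gain

# invented unstable scalar plant x' = 1.1 x + u, unit state/control weights
p, k = solve_scalar_riccati(a=1.1, b=1.0, q=1.0, r=1.0)
print(p, k)  # closed loop a - b*k is stable (magnitude < 1)
```

Under the information-graph constraint studied in the dissertation, no agent can evaluate such a global Riccati solution directly, which is what motivates the distributed estimation and best-achievable-indices constructions.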
-
Date Issued
-
2013
-
Identifier
-
CFE0005025, ucf:49991
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005025
-
-
Title
-
A Systems Approach to Sustainable Energy Portfolio Development.
-
Creator
-
Hadian Niasar, Saeed, Reinhart, Debra, Madani Larijani, Kaveh, Wang, Dingbao, Lee, Woo Hyoung, Pazour, Jennifer, University of Central Florida
-
Abstract / Description
-
Adequate energy supply has become one of the vital components of human development and the economic growth of nations. Major components of the global economy, such as transportation services, communications, industrial processes, and construction activities, depend on adequate energy resources. Even the mining and extraction of energy resources, including harnessing the forces of nature to produce energy, depend on the accessibility of sufficient energy in the appropriate form at the desired location. Therefore, energy resource planning and management to provide appropriate energy in terms of both quantity and quality has become a priority at the global level. The increasing demand for energy due to a growing population, higher living standards, and economic development magnifies the importance of reliable energy plans. In addition, the uneven distribution of traditional fossil fuel energy sources on the Earth, and the resulting political and economic interactions, are other sources of complexity within energy planning. The competition over fossil fuels driven by the gradual depletion of such sources and the tremendous thirst of current global economic operations, as well as the sensitivity of fossil fuel supplies and prices to global conditions, all add to the complexity of effective energy planning. In addition to diversification of fossil fuel supply sources as a means of increasing national energy security, many governments are investing in non-fossil fuels, especially renewable energy sources, to combat the risks associated with adequate energy supply. Moreover, increasing the number of energy sources adds further complication to energy planning.
Global warming, resulting from the concentration of greenhouse gas emissions in the atmosphere, influences energy infrastructure investments and operations management as a result of international treaty obligations and other regulations requiring that emissions be cut to sustainable levels. Burning fossil fuel, as one of the substantial driving factors of global warming and energy insecurity, is most affected by such policies, pushing forward the implementation of renewable energy policies. Thus, modern energy portfolios comprise a mix of renewable energy sources and fossil fuels, with an increasing share of renewables over time. Many governments have been setting renewable energy targets that mandate increasing energy production from such sources over time. Reliance on renewable energy sources certainly helps with the reduction of greenhouse gas emissions while improving national energy security. However, the growing implementation of renewable energy has some limitations. Such energy technologies are not always as cheap as fossil fuel sources, mostly due to the immaturity of these energy sources in most locations as well as the high prices of the materials and equipment needed to harness the forces of nature and transform them into usable energy. In addition, despite the fact that renewable energy sources are traditionally considered environmentally friendly compared to fossil fuels, they sometimes require more natural resources, such as water and land, to operate and produce energy. Hence, massive production of energy from these sources may lead to water shortages, land use change, increasing food prices, and insecurity of water supplies. In other words, energy production from renewables might be a solution for reducing greenhouse gas emissions, but it might become a source of other problems such as scarcity of natural resources. The fact that the future energy mix will rely more on renewable sources is undeniable, mostly due to the depletion of fossil fuel sources over time.
However, the aforementioned limitations pose a challenge to general policies that encourage immediate substitution of fossil fuels with renewables to battle climate change. In fact, such limitations should be taken into account in developing reliable energy policies that seek adequate energy supply with minimal secondary effects. Traditional energy policies have suggested the expansion of least-cost energy options, which were mostly fossil fuels. Such sources used to be considered riskless energy options with low volatility in the absence of competitive energy markets in which various energy technologies compete over larger market shares. The evolution of renewable energy technologies, however, complicated energy planning due to emerging risks that emanated mostly from high price volatility. Hence, energy planning began to be seen as an investment problem in which the costs of the energy portfolio were minimized while attempting to manage the associated price risks. As a result, energy policies continued to rely on risky fossil fuel options and small shares of renewables, with the primary goal of reducing generation costs. With emerging symptoms of climate change and the resulting consequences, newer policies accounted for the costs of carbon emissions control in addition to other costs. Such policies also encouraged the increased use of renewable energy sources. Emissions control cost, however, is not an appropriate measure of damages, because these costs are substantially less than the economic damages resulting from emissions. In addition, the effects of such policies on natural resources such as water and land are not directly taken into account. Sustainable energy policies should be able to capture such complexities, risks, and tradeoffs within energy planning. Therefore, there is a need for an adequate supply of energy while addressing issues such as global warming, energy security, the economy, and the environmental impacts of energy production processes.
This study develops an energy portfolio assessment model to address the aforementioned concerns. The research uses energy performance data gathered from an extensive review of articles and governmental institution reports. The energy performance values, namely carbon footprint, water footprint, land footprint, and cost of energy production, were carefully selected so that all sources are compared on the same basis; where needed, adjustment factors were applied. In addition, the Energy Information Administration (EIA) energy projection scenarios were selected as the basis for estimating the share of the energy sources through 2035. Furthermore, resource availability in different U.S. states was obtained from publicly available governmental statistics: carbon emissions magnitudes (metric tons per capita) were extracted from EIA databases, freshwater withdrawals (cubic meters per capita) from USGS databases, land availability (square kilometers) from the U.S. Census Bureau, and economic resource availability (GDP per capita) from the Bureau of Economic Analysis. In this study, first, the impacts of energy production processes on global freshwater resources are investigated under different energy projection scenarios. Considering the need to invest in energy sources with minimum environmental impact and maximum efficiency, a systems approach is adopted to quantify the resource use efficiency of energy sources under sustainability indicators. The sensitivity and robustness of the resource use efficiency scores are then investigated against existing energy performance uncertainties and varying resource availability conditions.
The resource use efficiency of the energy sources is then regionalized for different resource limitation conditions across U.S. states. Finally, a sustainable energy planning framework is developed based on Modern Portfolio Theory (MPT) and Post-Modern Portfolio Theory (PMPT), incorporating the resource use efficiency measures and the associated efficiency risks. In the energy-water nexus investigation, the energy sources are categorized into 10 major groups with distinct water footprint magnitudes and associated uncertainties. The global water footprint of energy production is then estimated for the different EIA energy mix scenarios over the 2012-2035 period. The outcomes indicate that the water footprint of energy production increases by almost 50%, depending on the scenario. Growing energy production is not the only reason: an increasing share of water-intensive energy sources in the future energy mix is another driver of the growing global water footprint of energy. The results of the water footprint analysis demonstrate the need for policies that reduce the water use of energy generation, and they highlight the importance of considering the secondary impacts of energy production beyond carbon footprint and cost. The results also have policy implications for future energy investments aimed at increasing the water use efficiency of energy sources per unit of production, especially sources with significant water footprints such as hydropower and biofuels. In the next step, substantial effort is dedicated to evaluating the efficiency of different energy sources from a resource use perspective. For this purpose, a system-of-systems approach is adopted to measure the resource use efficiency of energy sources in the presence of trade-offs between independent yet interacting systems (climate, water, land, economy).
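The scenario analysis described above is, at its core, a share-weighted aggregation of per-source water footprints. A minimal sketch of that aggregation, with hypothetical footprint values and mix shares (not the study's data):

```python
# Estimate the water footprint of an energy mix by weighting each
# source's per-unit footprint (m^3 per MWh) by its share of production.
# All numbers below are illustrative placeholders, not the study's data.

water_footprint = {  # m^3 per MWh (hypothetical)
    "hydropower": 17.0,
    "biofuels": 70.0,
    "natural_gas": 0.7,
    "wind": 0.01,
}

def mix_footprint(shares, total_mwh):
    """Total water footprint (m^3) of producing total_mwh under a mix."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return total_mwh * sum(share * water_footprint[src]
                           for src, share in shares.items())

mix_2012 = {"hydropower": 0.3, "biofuels": 0.1, "natural_gas": 0.5, "wind": 0.1}
mix_2035 = {"hydropower": 0.4, "biofuels": 0.2, "natural_gas": 0.3, "wind": 0.1}

# A growing share of water-intensive sources raises the footprint even
# before total production grows (here both mixes produce 1 MWh).
fp_2012 = mix_footprint(mix_2012, total_mwh=1.0)
fp_2035 = mix_footprint(mix_2035, total_mwh=1.0)
```

Running the same function over each projection scenario's shares, with total production growing over 2012-2035, yields the kind of scenario comparison the abstract describes.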
Hence, a stochastic multi-criteria decision making (MCDM) framework is developed to compute resource use efficiency scores for four sustainability assessment criteria, namely carbon footprint, water footprint, land footprint, and cost of energy production, while accounting for existing performance uncertainties. The energy sources' performances under these criteria are represented as ranges because of technological and regional variations. Such uncertainties are captured by Monte Carlo sampling of random values and are translated into stochastic resource use efficiency scores. Because the notion of optimality is not unique, five MCDM methods are employed in the model to counterbalance bias toward any one definition of optimality. This analysis is performed under "no resource limitation" conditions to highlight the quality of different energy sources from a resource use perspective. Resource use efficiency is defined as a dimensionless number on a scale of 0-100, with greater numbers representing higher efficiency. The outcomes indicate that, despite their increasing popularity, not all renewable energy sources are more resource use efficient than non-renewable sources. This is especially true for biofuels and different types of ethanol, which show lower resource use efficiency scores than natural gas and nuclear energy. Geothermal energy and biomass energy from miscanthus are found to be the most and least resource use efficient energy alternatives, respectively, based on the performance data available in the literature. The analysis also shows that no energy source strictly dominates, or is strictly dominated by, the others. Following the resource use efficiency analysis, sensitivity and robustness analyses are performed to determine the impacts of resource limitations and performance uncertainties on resource use efficiency, respectively.
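The stochastic scoring step can be sketched as follows: draw each source's criterion performance from its range, score it against the per-draw best and worst values, and average over draws. A single weighted-sum surrogate stands in for the five MCDM methods, and all performance ranges are hypothetical:

```python
import random

# Each source's performance on a criterion is known only as a range
# (technological/regional variation); Monte Carlo draws turn those
# ranges into stochastic efficiency scores. Values are illustrative
# placeholders, not the study's data.

criteria = ["carbon", "water", "land", "cost"]  # all "lower is better"
performance = {  # (low, high) per criterion, hypothetical units
    "geothermal":  {"carbon": (30, 60),   "water": (1, 3),   "land": (1, 3),   "cost": (4, 8)},
    "natural_gas": {"carbon": (400, 500), "water": (1, 2),   "land": (1, 2),   "cost": (4, 7)},
    "miscanthus":  {"carbon": (80, 120),  "water": (60, 90), "land": (40, 60), "cost": (8, 14)},
}

def efficiency_scores(n_draws=2000, seed=0):
    """Stochastic 0-100 scores: for each draw, sample performances, then
    credit each source by its distance from the per-criterion worst,
    scaled by the per-criterion best (a simple MCDM surrogate)."""
    rng = random.Random(seed)
    totals = {s: 0.0 for s in performance}
    for _ in range(n_draws):
        draw = {s: {c: rng.uniform(*performance[s][c]) for c in criteria}
                for s in performance}
        for c in criteria:
            worst = max(draw[s][c] for s in draw)
            best = min(draw[s][c] for s in draw)
            for s in draw:
                # lower is better -> closeness to the best earns credit
                totals[s] += (worst - draw[s][c]) / (worst - best)
    scale = 100.0 / (n_draws * len(criteria))
    return {s: t * scale for s, t in totals.items()}

scores = efficiency_scores()
```

Averaging over draws gives each source a score distribution rather than a point value, which is what makes the later sensitivity and robustness analyses possible.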
Sensitivity analysis indicates that geothermal energy and ethanol from sugarcane have the lowest and highest resource use efficiency sensitivity, respectively. Also, from a resource use perspective, concentrated solar power (CSP) and hydropower are respectively the most and least robust energy options with respect to the performance uncertainties in the literature. In addition to the resource use efficiency, sensitivity, and robustness analyses of energy sources, this study also investigates the composition of the energy production mix within a specific region with certain characteristics, resource limitations, and availabilities. Different energy sources, especially renewables, vary in their demand for natural resources (such as water and land), environmental impacts, geographic requirements, and the type of infrastructure required for energy production. Because the efficiency of energy sources from a resource use perspective depends on regional specifications, the energy portfolio varies for different regions under varying resource availability conditions. Hence, the resource use efficiency scores of the energy technologies are calculated based on the aforementioned sustainability criteria and regional resource availability and limitation conditions (emissions, water resources, land, and GDP) within different U.S. states, regardless of the feasibility of the energy alternatives in each state. Sustainability measures are given varying weights based on each state's emissions cap, available economic resources, land, and water resources, upon which the resource use efficiency of energy sources is calculated using the system-of-systems framework developed in the previous step. Efficiency scores are illustrated on GIS-based maps for the different states and energy sources.
The results indicate that for some states, fossil fuels such as coal and natural gas are as efficient as renewables like wind and solar energy technologies from a resource use perspective. In other words, a source's resource use efficiency is significantly sensitive to the available resources and limitations of a given location. Moreover, energy portfolio development models have been created to determine the share of total energy production assigned to each source so as to meet energy demand, maintain energy security, and address climate change with the least possible adverse environmental impact. The traditional "least cost" energy portfolios are outdated and should be replaced with "most efficient" ones that are not only cost-effective but also environmentally friendly. Hence, the calculated resource use efficiency scores and the associated statistical analysis outcomes for a range of renewable and nonrenewable energy sources are fed into a portfolio selection framework to choose energy mixes appropriate to the risk attitudes of decision makers. For this purpose, Modern Portfolio Theory (MPT) and Post-Modern Portfolio Theory (PMPT) are both employed to illustrate how different interpretations of "risk of return" yield different energy portfolios. The results indicate that the 2012 energy mix and the projected 2035 world energy portfolio are not sustainable in terms of resource use efficiency and could be replaced with more reliable, more effective portfolios that address energy security and global warming with minimal environmental and economic impacts.
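A mean-variance (MPT-style) selection over efficiency scores can be sketched with a simple grid search. The source names, expected scores, and covariances below are hypothetical placeholders, and the study's actual MPT/PMPT formulation is richer (PMPT, in particular, would penalize only downside deviation rather than total variance):

```python
import itertools

# MPT-style portfolio selection over efficiency scores: choose shares
# that maximize expected resource-use efficiency minus a risk penalty
# on portfolio variance. All values are illustrative placeholders.

sources = ["wind", "natural_gas", "biofuel"]
mean_eff = [70.0, 55.0, 30.0]           # expected efficiency scores
cov = [[90.0, 10.0,   5.0],             # efficiency covariance matrix
       [10.0, 40.0,   8.0],
       [ 5.0,  8.0, 120.0]]

def portfolio(risk_aversion, step=0.05):
    """Grid-search weights summing to 1; utility = mean - a * variance."""
    n = round(1 / step)
    grid = [i * step for i in range(n + 1)]
    best_w, best_u = None, float("-inf")
    for w0, w1 in itertools.product(grid, repeat=2):
        w2 = 1.0 - w0 - w1
        if w2 < -1e-9:
            continue                      # infeasible: shares exceed 1
        w = (w0, w1, max(w2, 0.0))
        mean = sum(wi * m for wi, m in zip(w, mean_eff))
        var = sum(w[i] * w[j] * cov[i][j]
                  for i in range(3) for j in range(3))
        u = mean - risk_aversion * var
        if u > best_u:
            best_u, best_w = u, w
    return best_w

aggressive = portfolio(risk_aversion=0.01)  # chases high efficiency
cautious = portfolio(risk_aversion=1.0)     # pays for low variance
```

Sweeping `risk_aversion` traces an efficient frontier of mixes, which is how risk attitudes of decision makers map to different portfolios.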
-
Date Issued
-
2013
-
Identifier
-
CFE0005001, ucf:50020
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005001
-
-
Title
-
Strategic Improvement: A Systems Approach using the Balanced Scorecard Methodology to Increase Federally Financed Research at the University of Central Florida.
-
Creator
-
Walters, Joseph, Rabelo, Luis, Ajayi, Richard, Calabrese, Mark, University of Central Florida
-
Abstract / Description
-
The University of Central Florida has many successful measures to reflect on as it celebrates its 50th year in 2013. It has the second-largest student population in the U.S., and its overall ranking in the U.S. News & World Report has improved four years in a row. However, federally funded research and development at the University of Central Florida (UCF) has remained flat, and compared to other schools, its portion of those federal research dollars is small. This thesis lays the groundwork for a model for improving federally financed academic research and development. A systems approach using the balanced scorecard methodology was used to develop causal loop relationships between the many factors that influence the federal funding process. Measures are proposed that link back to the objectives and mission of the university. One particular measure found in the literature was refined to improve its integration into this model. The resulting work provides a framework with specific measures that the university can adopt to improve its share of federally financed research and development. Although developed for UCF, this work could be applied to any university that wants to improve its standing in the federally financed academic research and development market.
-
Date Issued
-
2013
-
Identifier
-
CFE0005069, ucf:49955
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005069
-
-
Title
-
Suction Detection and Feedback Control for the Rotary Left Ventricular Assist Device.
-
Creator
-
Wang, Yu, Simaan, Marwan, Qu, Zhihua, Haralambous, Michael, Kassab, Alain, Divo, Eduardo, University of Central Florida
-
Abstract / Description
-
The Left Ventricular Assist Device (LVAD) is a rotary mechanical pump implanted in patients with congestive heart failure to help the left ventricle pump blood through the circulatory system. Rotary pumps are controlled by varying the pump motor current to adjust the amount of blood flowing through the LVAD. One important challenge in using such a device is the desire to provide the patient with as close to a normal lifestyle as possible until a donor heart becomes available. The development of an appropriate feedback controller that automatically adjusts the pump current is therefore a crucial step in meeting this challenge. In addition to adapting to changes in the patient's daily activities, the controller must prevent excessive pumping of blood from the left ventricle (a phenomenon known as ventricular suction) that may collapse the left ventricle and damage the heart muscle and tissues. In this dissertation, we present a new suction detection system that can precisely classify pump flow patterns, based on a Lagrangian Support Vector Machine (LSVM) model that combines six suction indices extracted from the pump flow signal to decide whether the pump is not in suction, approaching suction, or in suction. The proposed method has been tested using in vivo experimental data from two different LVAD pumps. The results show that the system delivers superior classification accuracy, stability, learning speed, and robustness compared to three existing suction detection methods and the original SVM-based algorithm.
The ability of the proposed algorithm to detect suction provides a reliable platform for developing a feedback control system that controls the pump current (input variable) while ensuring that suction is avoided. Based on the proposed suction detector, a new control system for the rotary LVAD was developed to automatically regulate the pump current of the device to avoid ventricular suction. The control system consists of an LSVM suction detector and a feedback controller. The LSVM suction detector is activated first to classify the pump status as No Suction (NS) or Suction (S). When the detection is "No Suction", the feedback controller automatically adjusts the pump current so that the blood flow requirements of the patient's body are met across different physiological states and activity levels. When the detection is "Suction", the pump current is immediately decreased to drive the pump back to a normal No Suction operating condition. The performance of the control system was tested in simulations over a wide range of physiological conditions.
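The detector-plus-controller loop described above can be sketched as follows. The indices here are simplified stand-ins for the six suction indices, and the thresholds are made-up placeholders for the trained LSVM decision function, so this shows only the structure of the pipeline:

```python
# Structure of a suction detector: extract indices from a window of
# pump-flow samples, then map them to a state. The thresholds below are
# made-up placeholders standing in for the trained LSVM classifier.

def flow_indices(window):
    """A few simple indices of a pump-flow window (L/min samples)."""
    n = len(window)
    return {
        "min_flow": min(window),
        "pulsatility": max(window) - min(window),
        "steepest_drop": min(window[i + 1] - window[i] for i in range(n - 1)),
    }

def classify(window):
    """Return 'NS' (no suction), 'AS' (approaching), or 'S' (suction)."""
    ix = flow_indices(window)
    if ix["min_flow"] < 0.5 and ix["steepest_drop"] < -1.5:
        return "S"
    if ix["min_flow"] < 2.0 or ix["pulsatility"] > 6.0:
        return "AS"
    return "NS"

def control_step(window, current):
    """Feedback rule from the abstract: back off the pump current on
    suction; otherwise the demand-tracking controller (omitted here)
    would adjust it to the patient's activity level."""
    return current - 0.2 if classify(window) == "S" else current

normal = [4.0, 4.5, 5.0, 4.8, 4.2]
suction = [3.0, 1.0, -0.8, 0.3, 0.1]   # sharp negative dip in flow
```

In the real system the hand-written `classify` would be replaced by the LSVM trained on labeled in vivo pump-flow data.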
-
Date Issued
-
2013
-
Identifier
-
CFE0005070, ucf:49956
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005070
-
-
Title
-
Novel Immunogens of Cellular Immunity Revealed using in vitro Human Cell-Based Approach.
-
Creator
-
Schanen, Brian, Self, William, Warren, William, Khaled, Annette, Seal, Sudipta, Zervos, Antonis, University of Central Florida
-
Abstract / Description
-
Nanotechnology has undergone rapid expansion largely as a result of its enormous potential for applications in biomaterials, drug delivery vehicles, cancer therapeutics, and immunopotentiators. Despite this wave of interest and the broad appeal of nanoparticles, evidence of their effect on the human immune system remains scarce. Concerns rise as studies on nanoparticle toxicology continue to emerge indicating that nanomaterials can be acutely toxic and can have long-term inflammatory effects, as seen in animal models. Given these findings and the rise of nanoparticle technologies targeting in vivo applications, the urgency to characterize nanomaterial immunogenicity is paramount. Nanoparticles harbor great potential because they possess unique physicochemical properties compared to their larger counterparts, a result of quantum-size effects and their inherently large surface-area-to-volume ratio. These physicochemical properties govern how a nanoparticle behaves in its environment, but researchers have only just begun to catalogue the biological effects these properties elicit. We investigated nanoparticle size-induced effects using TiO2, one of the most widely manufactured nanoparticles, as a model. We studied these effects in dendritic cells across a human donor pool, because dendritic cells play an inimitable functional role bridging the innate and adaptive arms of immunity. From this work we found that TiO2 nanoparticles can activate human dendritic cells to become pro-inflammatory in a size-dependent manner compared to their micron-sized counterpart, revealing novel immune cell recognition and activation by a crystalline nanomaterial. Having identified nanomaterial size as a contributing feature of nanoparticle-induced immunopotentiation, we asked whether additional physicochemical properties such as surface reactivity or catalytic behavior could also be immunostimulatory.
Moreover, because we witnessed a stimulatory effect on dendritic cells following nanoparticle treatment, we were curious how these nanoparticle-touched dendritic cells would impact adaptive immunity. Since TiO2 acts as an oxidant, we chose an antioxidant nanoparticle, CeO2, as a counterpart to explore how divergent nanoparticle surface reactivity impacts innate and adaptive immunity. We focused on the effect these nanoparticles had on human dendritic cells and TH cells as a strategy for defining their impact on cellular immunity. Combined, we report that TiO2 nanoparticles potentiate DC maturation, inducing the secretion of IL-12p70 and IL-1β, while treatment with CeO2 nanoparticles induced IL-10, a hallmark of suppression. When delivered to T cells alone, TiO2 nanoparticles induced stronger proliferation than CeO2, which instead stimulated TReg differentiation. When co-cultured in allogeneic T cell assays, the materials directed alternate TH polarization, whereby TiO2 drove a largely TH1-dominant response, whereas CeO2 was largely TH2-biased. Combined, we report a novel immunomodulatory capacity of nanomaterials with catalytic activity. While unintentional exposure to these nanomaterials could pose a serious health risk, the development and targeted use of such immunomodulatory nanoparticles could provide researchers with new tools for novel adjuvant strategies or therapeutics.
-
Date Issued
-
2012
-
Identifier
-
CFE0004629, ucf:49927
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004629
-
-
Title
-
A Framework for the Development of a Model for Successful, Sustained Lean Implementation and Improvement.
-
Creator
-
Sisson, Julie, Elshennawy, Ahmad, Rabelo, Luis, Xanthopoulos, Petros, Porter, Robert, University of Central Florida
-
Abstract / Description
-
Lean is a business philosophy focused on shortening lead times by removing waste and concentrating on value-added processes. When implemented successfully, it not only allows for cost reduction while improving quality, but can also position a company to achieve tremendous growth. The problem is that though many companies attempt to implement lean, it is estimated that only 2-3% achieve the desired level of success. The purpose of this research is to identify the key interrelated components of successful lean transformation. To this end, a thorough literature review was conducted; its findings indicate six key constructs that can act as enablers or inhibitors to implementing and sustaining lean. A theoretical framework was developed that integrates these constructs and develops research propositions for each. A multiple-case study analysis was then used to test the framework on four companies that have achieved successful, sustained results from their lean implementations in order to validate the model. The resulting model provides companies planning to implement lean with tangible actions they can take to make their lean transformations more successful.
-
Date Issued
-
2014
-
Identifier
-
CFE0005262, ucf:50608
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005262
-
-
Title
-
Lyapunov-Based Robust and Adaptive Control Design for nonlinear Uncertain Systems.
-
Creator
-
Zhang, Kun, Behal, Aman, Haralambous, Michael, Xu, Yunjun, Boloni, Ladislau, Marzocca, Piergiovanni, University of Central Florida
-
Abstract / Description
-
The control of systems with uncertain nonlinear dynamics is an important field of control science, attracting decades of focus. In this dissertation, four different control strategies are presented, using sliding mode control, adaptive control, dynamic compensation, and neural networks, for a nonlinear aeroelastic system with bounded uncertainties and external disturbance. In Chapter 2, partial state feedback adaptive control designs are proposed for two different aeroelastic systems operating in unsteady flow. In Chapter 3, a continuous robust control design is proposed for a class of single-input, single-output systems with uncertainties; an aeroelastic system with a trailing-edge flap as its control input is considered as the plant to demonstrate the effectiveness of the controller, which is proved robust by both a mathematical proof and simulation results. In Chapter 4, a robust output feedback control strategy is discussed for the vibration suppression of an aeroelastic system operating in an unsteady incompressible flowfield. The aeroelastic system is actuated using a combination of leading-edge (LE) and trailing-edge (TE) flaps in the presence of different kinds of gust disturbances. In Chapter 5, a neural-network-based model-free controller is designed for an aeroelastic system operating at supersonic speed. The controller is shown to asymptotically stabilize the system via both a Lyapunov-based stability proof and numerical simulation results.
-
Date Issued
-
2015
-
Identifier
-
CFE0005748, ucf:50110
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005748
-
-
Title
-
Identifying Influential Agents in Social Systems.
-
Creator
-
Maghami, Mahsa, Sukthankar, Gita, Turgut, Damla, Wu, Annie, Boloni, Ladislau, Garibay, Ivan, University of Central Florida
-
Abstract / Description
-
This dissertation addresses the problem of influence maximization in social networks. Influence maximization is applicable to many types of real-world problems, including modeling contagion, technology adoption, and viral marketing. Here we examine an advertisement domain in which the overarching goal is to find the influential nodes in a social network, based on the network structure and the interactions, as targets of advertisement. The assumption is that advertisement budget limits prevent us from sending the advertisement to everybody in the network; therefore, a wise selection of people can increase product adoption. To model these social systems, we use agent-based modeling, a powerful tool for studying phenomena that are difficult to observe within the confines of the laboratory. To analyze marketing scenarios, this dissertation proposes a new method for propagating information through a social system and demonstrates how it can be used to develop a product advertisement strategy in a simulated market. We consider the desire of agents toward purchasing an item as a random variable and solve the influence maximization problem in steady state using an optimization method to assign the advertisement of available products to appropriate messenger agents. Our market simulation (1) accounts for the effects of group membership on agent attitudes, (2) has a network structure similar to realistic human systems, and (3) models inter-product preference correlations that can be learned from market data. The results on synthetic data show that this method is significantly better than network analysis methods based on centrality measures. The optimized influence maximization (OIM) described above has some limitations: it relies on a global estimation of the interaction among agents in the network, rendering it incapable of handling large networks.
Although OIM can find and target the influential nodes in the social network in an optimized way, in large networks the matrix operations required to find the optimized solution are intractable. To overcome this limitation, we propose a hierarchical influence maximization (HIM) algorithm for scaling influence maximization to larger networks. In the hierarchical method, the network is partitioned into multiple smaller networks that can be solved exactly with optimization techniques, assuming a generalized IC model, to identify a candidate set of seed nodes. The candidate nodes are used to create a distance-preserving abstract version of the network that maintains an aggregate influence model between partitions. The budget limitation for the advertising dictates the algorithm's stopping point. On synthetic datasets, we show that our method comes close to the optimal node selection, at substantially lower runtime costs. We present results from applying the HIM algorithm to real-world datasets collected from social media sites with large numbers of users (Epinions, SlashDot, and WikiVote) and compare it with two benchmarks, PMIA and DegreeDiscount, to examine scalability and performance. Our experimental results reveal that HIM scales to larger networks but is outperformed by degree-based algorithms in highly connected networks. However, HIM performs well in modular networks where the communities are clearly separable with a small number of cross-community edges. This finding suggests that for practical applications it is useful to account for network properties when selecting an influence maximization method.
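The hierarchical scheme can be sketched as: solve each partition locally to obtain candidate seeds, then choose the budgeted seed set from the candidates. In this sketch, weighted degree stands in for both the exact per-partition optimization and the aggregate influence model between partitions, and the small network and its partitioning are hypothetical:

```python
# Skeleton of a hierarchical influence-maximization pass: per-partition
# candidate selection followed by a budgeted global pick. "Influence"
# here is just (within-partition or total) degree, a stand-in for the
# exact optimization described in the abstract.

graph = {  # adjacency lists for a small modular network (hypothetical)
    "a1": ["a2", "a3"], "a2": ["a1", "a3"], "a3": ["a1", "a2", "b1"],
    "b1": ["b2", "b3", "a3"], "b2": ["b1"], "b3": ["b1"],
    "c1": ["c2"], "c2": ["c1", "c3"], "c3": ["c2"],
}
partitions = [["a1", "a2", "a3"], ["b1", "b2", "b3"], ["c1", "c2", "c3"]]

def local_candidates(graph, partitions, per_part=1):
    """Top-k nodes per partition by within-partition degree."""
    cands = []
    for part in partitions:
        inside = set(part)
        score = {v: sum(1 for u in graph[v] if u in inside) for v in part}
        cands += sorted(part, key=lambda v: -score[v])[:per_part]
    return cands

def him_seeds(graph, partitions, budget):
    """Pick `budget` seeds among the candidates by total degree (a much
    simplified version of the aggregate influence model)."""
    cands = local_candidates(graph, partitions)
    return sorted(cands, key=lambda v: -len(graph[v]))[:budget]

seeds = him_seeds(graph, partitions, budget=2)
```

Because each partition is solved on its own small subgraph, the expensive global computation never touches the full network, which is the source of the runtime savings the abstract reports.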
-
Date Issued
-
2014
-
Identifier
-
CFE0005205, ucf:50647
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005205
-
-
Title
-
Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems.
-
Creator
-
Groves, Curtis, Kassab, Alain, Das, Tuhin, Kauffman, Jeffrey, Moore, Brian, University of Central Florida
-
Abstract / Description
-
Spacecraft thermal protection systems are at risk of being damaged by airflow produced by Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantifying the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product, and the method could provide an alternative to the traditional "validation by test only" mentality. It could also be extended to other disciplines and has the potential to provide uncertainty for any numerical simulation, lowering the cost of performing these verifications while increasing confidence in the predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three generic, non-proprietary Environmental Control System and spacecraft configurations. Several commercially available and open source solvers can model the turbulent, highly three-dimensional, incompressible flow regime; the proposed method uses FLUENT, STAR-CCM+, and OpenFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations".
This method requires three separate grids and solutions, which quantify the error bars around the Computational Fluid Dynamics predictions; it accounts for all uncertainty terms from both numerical and input variables. Objective three is to compile a table of uncertainty parameters that could be used to estimate the error in a Computational Fluid Dynamics model of the Environmental Control System/spacecraft system. Previous studies have looked at the uncertainty in a Computational Fluid Dynamics model for a single output variable at a single point, for example the reattachment length of a backward-facing step. For the flow regime being analyzed (turbulent, three-dimensional, incompressible), the error at a single point can propagate into the solution both via flow physics and via numerical methods. Calculating the uncertainty in using Computational Fluid Dynamics to accurately predict airflow speeds around encapsulated spacecraft is imperative to the success of future missions.
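One standard three-grid estimate in this spirit is Richardson extrapolation plus Roache's Grid Convergence Index (GCI); whether the cited methodology uses exactly this form is an assumption here, and the sample solution values are illustrative:

```python
import math

# Three-grid discretization-uncertainty estimate: Richardson
# extrapolation gives the observed order of accuracy, and the Grid
# Convergence Index (GCI) turns it into an error band on the fine grid.
# The sample values below are illustrative, not results from the study.

def observed_order(f1, f2, f3, r):
    """Observed order p from fine (f1), medium (f2), and coarse (f3)
    solutions on grids refined by a constant ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def gci_fine(f1, f2, r, p, fs=1.25):
    """GCI on the fine grid (fractional error band); fs = 1.25 is the
    usual safety factor for a three-grid study."""
    return fs * abs((f2 - f1) / f1) / (r ** p - 1)

# Hypothetical airflow speeds (m/s) at one probe, on three grids, r = 2:
f1, f2, f3, r = 5.00, 5.10, 5.50, 2.0
p = observed_order(f1, f2, f3, r)    # observed order of accuracy
band = gci_fine(f1, f2, r, p)        # fractional uncertainty
lo, hi = f1 * (1 - band), f1 * (1 + band)  # error bar on fine grid
```

The same calculation, repeated at each probe location of interest, is the kind of per-point error bar the three-grid method produces.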
-
Date Issued
-
2014
-
Identifier
-
CFE0005174, ucf:50662
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005174
-
-
Title
-
Factors Affecting Systems Engineering Rigor in Launch Vehicle Organizations.
-
Creator
-
Gibson, Denton, Karwowski, Waldemar, Rabelo, Luis, Kotnour, Timothy, Kern, David, University of Central Florida
-
Abstract / Description
-
Systems engineering is a methodical multi-disciplinary approach to design, build, and operate complex systems. Launch vehicles are considered by many to be extremely complex systems that have greatly shaped where the systems engineering industry is today. Launch vehicles are used to transport payloads from the ground to a location in space. Satellites launched by launch vehicles can range from commercial communications to national security payloads, and satellite costs can range from a few million dollars to billions of dollars. Prior research suggests that a lack of systems engineering rigor is one of the leading contributors to launch vehicle failures. A launch vehicle failure can have economic, societal, scientific, and national security impacts, which is why it is critical to understand the factors that affect systems engineering rigor in U.S. launch vehicle organizations. The current research examined organizational factors that influence systems engineering rigor in launch vehicle organizations. This study examined the effects of systems engineering culture and systems engineering support on systems engineering rigor. In particular, the effects of top management support, organizational commitment, systems engineering support, and the value placed on systems engineering were examined. The study also analyzed the mediating role of systems engineering support between top management support and systems engineering rigor, as well as between organizational commitment and systems engineering rigor. A quantitative approach was used: data were collected via a survey instrument, and a total of 203 people in various systems engineering roles in launch vehicle organizations throughout the United States voluntarily participated. Each latent construct of the study was validated using confirmatory factor analysis (CFA), and structural equation modeling (SEM) was used to examine the relationships between the variables of the study. The IBM SPSS Amos 25 software was used to conduct the CFA and SEM analyses.
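The mediation logic described above (top management support acting on rigor partly through systems engineering support) can be sketched in miniature. This is only an illustration on synthetic data with ordinary least squares standing in for the study's SEM; the variable names, path coefficients, and noise levels are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 203  # matches the study's sample size

# Hypothetical construct scores (the real study used survey-derived latents)
top_mgmt = rng.normal(size=n)                      # X: top management support
se_support = 0.6 * top_mgmt + rng.normal(size=n)   # M: SE support (mediator)
se_rigor = 0.5 * se_support + 0.2 * top_mgmt + rng.normal(size=n)  # Y: rigor

def ols(y, *cols):
    """Least-squares coefficients [intercept, b1, b2, ...] of cols -> y."""
    X = np.column_stack((np.ones(len(y)),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(se_support, top_mgmt)[1]                 # path X -> M
_, b, c_prime = ols(se_rigor, se_support, top_mgmt)  # paths M -> Y, X -> Y
indirect = a * b          # mediated (indirect) effect of X on Y through M
total = c_prime + indirect
```

A nonzero `indirect` alongside a reduced direct path `c_prime` is the signature of partial mediation that SEM tests more formally.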
-
Date Issued
-
2019
-
Identifier
-
CFE0007806, ucf:52348
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007806
-
-
Title
-
Security of Autonomous Systems under Physical Attacks: With application to Self-Driving Cars.
-
Creator
-
Dutta, Raj, Jin, Yier, Sundaram, Kalpathy, DeMara, Ronald, Zhang, Shaojie, Zhang, Teng, University of Central Florida
-
Abstract / Description
-
The drive to achieve trustworthy autonomous cyber-physical systems (CPS), which can attain goals independently in the presence of significant uncertainties and for long periods of time without any human intervention, has always been enticing. Significant progress has been made in both software and hardware toward fulfilling these objectives. However, technological challenges still exist, particularly in decision making under uncertainty. In an autonomous system, uncertainties can arise from the operating environment, from adversarial attacks, and from within the system itself. As a result of these concerns, human beings lack trust in these systems and hesitate to use them day to day. In this dissertation, we develop algorithms to enhance trust by mitigating physical attacks targeting the integrity and security of the sensing units of autonomous CPS. The sensors of these systems are responsible for gathering data about the physical processes; a lack of measures for securing their information can enable malicious attackers to cause life-threatening situations. This serves as the motivation for developing attack-resilient solutions. Among various security solutions, attention has recently been paid to developing system-level countermeasures for CPS whose sensor measurements are corrupted by an attacker. Our methods follow this direction: we develop one active and multiple passive algorithms to detect attacks and minimize their effect on the internal state estimates of the system. In the active approach, we leverage a challenge authentication technique to detect two types of attacks on the active sensors of the system: Denial of Service (DoS) and delay injection. Furthermore, we develop a recursive least-squares estimator to recover the system from attacks. The majority of the dissertation focuses on designing passive approaches against sensor attacks.
In the first method, we focus on a linear stochastic system with multiple sensors, where measurements are fused in a central unit to estimate the state of the CPS. By leveraging a Bayesian interpretation of the Kalman filter and combining it with a chi-squared detector, we recursively estimate states within an error bound and detect DoS and False Data Injection attacks. We also analyze the asymptotic performance of the estimator and provide conditions for resilience of the state estimate. Next, we propose a novel distributed estimator based on l1-norm optimization, which can recursively estimate states within an error bound without restricting the number of agents of the distributed system that can be compromised. We also extend this estimator to a vehicle platoon scenario subjected to sparse attacks, and we analyze the resiliency and asymptotic properties of both estimators. Finally, we make an initial effort to formally verify the control system of the autonomous CPS using statistical model checking, to ensure that a real-time and resource-constrained system such as a self-driving car, with its controllers and security solutions, adheres to strict timing constraints.
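The Kalman-filter-plus-chi-squared-detector idea can be sketched on a scalar toy system. This is an illustration of the general technique, not the dissertation's estimator; the dynamics, noise levels, attack magnitude, and threshold are all invented:

```python
import numpy as np

# Toy scalar CPS: x_{k+1} = a*x_k + w_k, sensor z_k = x_k + v_k. A
# false-data-injection attack corrupts the sensor during steps 100-119.
a, q, r = 0.9, 0.1, 0.2          # dynamics, process and measurement noise
THRESH = 6.63                    # ~99th percentile of chi-squared, 1 dof

rng = np.random.default_rng(1)
x, x_hat, P = 0.0, 0.0, 1.0
alarms = []

for k in range(200):
    x = a * x + rng.normal(scale=np.sqrt(q))    # true state evolution
    z = x + rng.normal(scale=np.sqrt(r))        # sensor measurement
    if 100 <= k < 120:
        z += 5.0                                # injected false data
    # Kalman predict step
    x_hat = a * x_hat
    P = a * a * P + q
    # Chi-squared test on the innovation before trusting the measurement
    S = P + r
    nu = z - x_hat
    if nu * nu / S > THRESH:
        alarms.append(k)         # flag the sample and skip the update
        continue
    # Kalman update step
    K = P / S
    x_hat += K * nu
    P *= (1.0 - K)
```

Flagged samples cluster in the attack window, while between attacks the filter resumes normal updates; the state estimate stays within the error bound set by the prediction covariance.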
-
Date Issued
-
2018
-
Identifier
-
CFE0007174, ucf:52253
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007174
-
-
Title
-
On RADAR DECEPTION, AS MOTIVATION FOR CONTROL OF CONSTRAINED SYSTEMS.
-
Creator
-
Hajieghrary, Hadi, Jayasuriya, Suhada, Xu, Yunjun, Das, Tuhin, University of Central Florida
-
Abstract / Description
-
This thesis studies the control algorithms used by a team of ECAVs (Electronic Combat Air Vehicles) to deceive a network of radars into detecting a phantom track. Each ECAV has the electronic capability to intercept a radar wave and introduce an appropriate time delay before transmitting it back, deceiving the radar into seeing a spurious target beyond the ECAV's actual position. To avoid errors, increase reliability, maintain complete coverage in various atmospheric conditions, and counter belligerent intruders attempting to delude the sentinel and enter the area, a network of radars is usually deployed to guard a region. However, a team of cooperating ECAVs can exploit this arrangement and plan their trajectories so that all the radars in the network vouch for seeing a single, coherent spurious track of a phantom. Since each station in the network confirms the others, the phantom track is considered valid. This problem serves as a motivating example in trajectory planning for multi-agent systems under highly constrained operating conditions. The control command given to each agent must be feasible within the agent's limited capabilities, and must also drive the team's cumulative action to keep the formation. In this thesis, three different approaches to devising a trajectory for each agent are studied, and the difficulties of deploying each one are addressed. In the first, a command center has all information about the states of the agents and, at every step, decides on the control each agent should apply. This method is very effective and robust, but it requires reliable communication. In the second method, each agent decides on its own control, and the members of the group communicate only to agree on the range of control they wish to apply to the phantom.
Although in this method much less data needs to be communicated between the agents, it is very sensitive to disturbances and miscalculations, and the formation can easily fall apart or reach a state with no feasible solution from which to continue. In the third method, a differential-geometric approach to the problem is studied. This method has a very strong theoretical backbone and minimizes the communication needed to a binary signal. However, the less data the agents have about the system, the more sensitive and fragile the system becomes when faced with imperfections. In this thesis, an object-oriented program is developed in MATLAB to simulate all three control strategies in a scalable fashion. Object-oriented programming is a natural method for simulating a multi-agent system: it gives the flexibility to make the code closer to a real scenario by defining each agent as a separate and independent entity. The main objective is to understand the nature of constrained dynamic problems and to examine various solutions in different situations. Using the flexibility of this code, we can simulate several scenarios, incorporate various conditions on the system, and take a close look at each agent to observe its behavior in these situations. In this way we gain good insight into the system, which can be used in designing agents for specific missions.
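The deception mechanism itself rests on the radar range equation R = c·t/2: a radar infers range from round-trip time, so an extra retransmission delay Δt pushes the apparent target back by c·Δt/2 along the radar's bearing. A minimal sketch with invented ranges (the thesis's simulations are in MATLAB; this stand-alone illustration is in Python):

```python
C = 299_792_458.0  # speed of light, m/s

def deception_delay(true_range_m, phantom_range_m):
    """Extra delay an ECAV must add to the intercepted pulse so the radar
    computes the phantom's range instead of the ECAV's true range."""
    # R = c*t/2  =>  extra delay dt = 2*(R_phantom - R_true)/c
    return 2.0 * (phantom_range_m - true_range_m) / C

# ECAV at 50 km on a radar's bearing, phantom projected at 80 km:
# the required extra delay is on the order of 200 microseconds.
dt = deception_delay(50e3, 80e3)
```

Because each radar only measures along its own bearing, several ECAVs with different delays can place their apparent targets on one common phantom trajectory, which is what makes the cooperative track consistent.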
-
Date Issued
-
2013
-
Identifier
-
CFE0004857, ucf:49683
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004857
-
-
Title
-
Remediation of Polychlorinated Biphenyl (PCB) Contaminated Building Materials Using Non-metal and Activated Metal Treatment Systems.
-
Creator
-
Legron-Rodriguez, Tamra, Yestrebsky, Cherie, Clausen, Christian, Elsheimer, Seth, Sigman, Michael, Chopra, Manoj, Quinn, Jacqueline, University of Central Florida
-
Abstract / Description
-
PCBs are recalcitrant compounds of no known natural origin that persist in the environment despite their 1979 ban by the United States Environmental Protection Agency due to negative health effects. Transport of PCBs from elastic sealants into concrete, brick, and granite structures has created the need for a technology capable of removing these PCBs from the materials. This research investigated the use of a non-metal treatment system (NMTS) and an activated metal treatment system (AMTS) for the remediation and degradation of PCBs in concrete, brick, and granite affixed with PCB-laden caulking. The adsorption of PCBs onto the components of concrete and the feasibility of ethanol washing were also investigated. NMTS is a sorbent paste containing ethanol, acetic acid, and fillers that was developed at the University of Central Florida Environmental Chemistry Laboratory for the in situ remediation of PCBs. Combining NMTS with magnesium yields an activated treatment system used for reductive dechlorination of PCBs. NMTS was applied to laboratory-prepared concrete as well as field samples, both by direct contact and by a novel sock-type delivery. The remediation of PCBs from field samples using NMTS and AMTS resulted in a 33-98% reduction in PCB concentration for concrete, a 65-70% reduction for brick, and an 89% reduction for granite. The absorption capacity of NMTS for Aroclor 1254 was found to be at least roughly 22,000 mg of Aroclor 1254 per kg of treatment system. The activated treatment system resulted in 94% or greater degradation of PCBs after seven days, with the majority of the degradation occurring in the first 24 hours. The adsorption of PCBs to individual concrete components (hydrated cement, sand, crushed limestone, and crushed granite) was found to follow the Freundlich isotherm model, with greater adsorption to crushed limestone and crushed granite than to hydrated cement and sand.
Ethanol washing was shown to decrease the PCB concentration of laboratory-prepared concrete by 68%, and the concentration of PCBs in the ethanol wash was reduced by 77% via degradation with an activated magnesium system.
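The Freundlich isotherm mentioned above is q = K_F · C^(1/n), where q is the sorbed amount and C the equilibrium concentration; taking logarithms linearizes it, so K_F and 1/n come from a straight-line fit. A minimal sketch on synthetic data (K_F = 3, 1/n = 0.7 are invented, not the dissertation's measured parameters):

```python
import numpy as np

# Freundlich model: q = K_F * C**(1/n)
# Linearized form:  log10(q) = log10(K_F) + (1/n) * log10(C)
C_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # equilibrium concentrations
q = 3.0 * C_eq ** 0.7                          # synthetic sorbed amounts

# Straight-line fit in log-log space recovers the two parameters
one_over_n, log_kf = np.polyfit(np.log10(C_eq), np.log10(q), 1)
K_F = 10 ** log_kf
```

Comparing fitted K_F values across materials is what lets the study rank crushed limestone and granite above hydrated cement and sand in adsorption affinity.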
-
Date Issued
-
2013
-
Identifier
-
CFE0005197, ucf:50625
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005197