Current Search: Reilly, Charles
- Title
- SIMULATION OF RANDOM SET COVERING PROBLEMS WITH KNOWN OPTIMAL SOLUTIONS AND EXPLICITLY INDUCED CORRELATIONS AMONG COEFFICIENTS.
- Creator
-
Sapkota, Nabin, Reilly, Charles, University of Central Florida
- Abstract / Description
-
The objective of this research is to devise a procedure to generate random Set Covering Problem (SCP) instances with known optimal solutions and correlated coefficients. The procedure presented in this work can generate a virtually unlimited number of SCP instances with known optimal solutions and realistic characteristics, thereby facilitating testing of the performance of SCP heuristics and algorithms. A four-phase procedure based on the Karush-Kuhn-Tucker (KKT) conditions is proposed to generate SCP instances with known optimal solutions and correlated coefficients. Given randomly generated values for the objective function coefficients and the sum of the binary constraint coefficients for each variable and a randomly selected optimal solution, the procedure: (1) calculates the range for the number of possible constraints, (2) generates constraint coefficients for the variables with value one in the optimal solution, (3) assigns values to the dual variables, and (4) generates constraint coefficients for variables with value 0 in the optimal solution so that the KKT conditions are satisfied. A computational demonstration of the procedure is provided. A total of 525 SCP instances are simulated under seven correlation levels and three levels for the number of constraints. Each of these instances is solved using three simple heuristic procedures. The performance of the heuristics on the SCP instances generated is summarized and analyzed. The performance of the heuristics generally worsens as the expected correlation between the coefficients increases and as the number of constraints increases. The results provide strong evidence of the benefits of the procedure for generating SCP instances with correlated coefficients, and in particular SCP instances with known optimal solutions.
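For readers who want to see what the generated instances must satisfy, a generic statement of the SCP and of the KKT (complementary-slackness) conditions of its LP relaxation is sketched below; the symbols c_j, a_ij, u_i, and x* are illustrative notation, not the thesis's own.

```latex
% Generic SCP with costs c_j > 0 and binary coverage coefficients a_{ij}:
\begin{align*}
\min_{x}\ \sum_{j=1}^{n} c_j x_j
\quad \text{s.t.} \quad
\sum_{j=1}^{n} a_{ij} x_j \ge 1,\ \ i = 1,\dots,m, \qquad x_j \in \{0,1\}.
\end{align*}
% KKT conditions of the LP relaxation at a chosen solution x^* with duals u_i >= 0:
\begin{align*}
& c_j - \sum_{i=1}^{m} a_{ij} u_i \ge 0 \ \ \text{for all } j,
\qquad x_j^*\Bigl(c_j - \sum_{i=1}^{m} a_{ij} u_i\Bigr) = 0, \\
& u_i \Bigl(\sum_{j=1}^{n} a_{ij} x_j^* - 1\Bigr) = 0 \ \ \text{for all } i.
\end{align*}
```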
- Date Issued
- 2006
- Identifier
- CFE0001416, ucf:47037
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001416
- Title
- EVALUATING THE PERFORMANCE OF ANIMAL SHELTERS: AN APPLICATION OF DATA ENVELOPMENT ANALYSIS.
- Creator
-
Heyde, Brandy, Reilly, Charles, University of Central Florida
- Abstract / Description
-
The focus of this thesis is the application of data envelopment analysis to understand and evaluate the performance of diverse animal welfare organizations across the United States. The results include identification of the most efficient animal welfare organizations, at least among those that post statistics on their operations, and a discussion of various partnerships that may improve the performance of the more inefficient organizations. The Humane Society of the United States estimates that there are 4,000-6,000 independently run animal shelters across the United States, with an estimated 6-8 million companion animals entering them each year. Unfortunately, more than half of these animals are euthanized. The methods shared in this research illustrate how data envelopment analysis may help shelters improve these statistics through evaluation and cooperation. Data envelopment analysis (DEA) is based on the principle that the efficiency of an organization depends on its ability to transform its inputs into the desired outputs. The result of a DEA model is a single measure that summarizes the relative efficiency of each decision making unit (DMU) when compared with similar organizations. The DEA linear program defines an efficiency frontier, formed by the most efficient animal shelters in the model, that "envelops" the other DMUs. Individual efficiency scores are calculated by determining how close each DMU is to reaching the frontier. The results shared in this research focus on the performance of 15 animal shelters. Lack of standardized data regarding individual animal shelter performance limited the ability to review a larger number of shelters and provide more robust results. Various programs are in place within the United States to improve the collection and availability of individual shelter performance data. Specifically, the Asilomar Accords provide a strong framework for doing this and could significantly reduce euthanasia of companion animals if more shelters would adopt the practice of collecting and reporting their data in this format. It is demonstrated in this research that combining performance data with financial data within the data envelopment analysis technique can be powerful in helping shelters identify how to better deliver results. The addition of data from other organizations will make the results even more robust and useful for each shelter involved.
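For background, the DEA linear program referred to above is typically the input-oriented CCR envelopment model; the sketch below uses generic notation (x_{ij} for inputs, y_{rj} for outputs, theta for the efficiency score) rather than the thesis's data.

```latex
% Input-oriented CCR envelopment model for DMU k among DMUs j = 1,...,n:
\begin{align*}
\min_{\theta,\,\lambda}\ \theta \quad \text{s.t.} \quad
& \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{ik}, && i = 1,\dots,m \ \text{(inputs)},\\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{rk}, && r = 1,\dots,s \ \text{(outputs)},\\
& \lambda_j \ge 0, && j = 1,\dots,n.
\end{align*}
% theta = 1 places DMU k on the efficiency frontier; theta < 1 measures how far
% the unit is from the frontier "enveloping" it.
```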
- Date Issued
- 2008
- Identifier
- CFE0002101, ucf:47557
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002101
- Title
- THE APPLICATION OF "CRASHING" A PROJECT NETWORK TO SOLVE THE TIME/COST TRADEOFF IN RECAPITALIZATION OF THE UH-60A HELICOPTER.
- Creator
-
Fortier, Gregory, Reilly, Charles, University of Central Florida
- Abstract / Description
-
Since the beginning of project management, people have been asked to perform "more with less" in expeditious time while attempting to balance the inevitable challenge of the time/cost tradeoff. This is especially true within the Department of Defense today in prosecuting the Global War on Terrorism both in Afghanistan and Iraq. An unprecedented and consistent level of Operational Tempo has generated heavy demands on current equipment and has subsequently forced the need to recapitalize several legacy systems until suitable replacements can be implemented. This paper targets the UH-60A:A Recapitalization Program based at the Corpus Christi Army Depot in Corpus Christi, Texas. More specifically, we examine one of the nine existing project sub-networks within the UH-60A:A program, the structural/electrical upgrade phase. In crashing (i.e., adding manpower or labor hours) the network, we determine the minimal cost required to reduce the total completion time of the 68 activities in the network to a target completion time. A linear programming model is formulated and then solved for alternative scenarios. The first scenario is prescribed by the program manager and consists of simply hiring additional contractors to augment the existing personnel. The second and third scenarios consist of examining the effects of overtime, both in an aggressive situation (with limited longevity) and a more moderate situation (displaying greater sustainability over time). The initial linear programming model (Scenario 1) is crashed using estimates given by the program scheduler. The overtime models are crashed using reduced-time crash estimates. For Scenarios 2 and 3, the crashable times themselves are reduced by 50% and 75%, respectively. Initial results indicate that a completion time of 79.5 days is possible without crashing any activities in the network. The five-year historical average completion time is 156 days for this network. We continue to crash the network in each of the three scenarios and determine the absolute shortest feasible completion times: 73 days for Scenario 1, 76 days for Scenario 2, and 77.5 days for Scenario 3. We further examine the models to observe similarities and differences in which activities get targeted for crashing and how that reduction affects the critical path of the network. These results suggest that an in-depth study of applying linear programming to project networks could grant project managers more critical insight that may help them better achieve their respective objectives. This work may also be useful as the groundwork for further refinement and application for maintenance managers conducting day-to-day unit level maintenance operations.
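The underlying model is the classical project-crashing linear program; the generic form below is illustrative only (the activity set A, normal durations d_{ij}, crash limits M_{ij}, unit crash costs k_{ij}, and deadline T are placeholder symbols, not the thesis's data).

```latex
% Event times t_i, crash amounts y_{ij}, target completion time T:
\begin{align*}
\min\ & \sum_{(i,j)\in A} k_{ij}\, y_{ij} \\
\text{s.t.}\ & t_j \ge t_i + d_{ij} - y_{ij}, && (i,j)\in A \ \text{(precedence)},\\
& 0 \le y_{ij} \le M_{ij}, && (i,j)\in A \ \text{(crash limits)},\\
& t_{\text{finish}} - t_{\text{start}} \le T, \qquad t_i \ge 0.
\end{align*}
```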
- Date Issued
- 2006
- Identifier
- CFE0001381, ucf:47008
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001381
- Title
- MODELS TO ESTIMATE ARRIVAL COUNTS AND STAFFING REQUIREMENTS IN NONSTATIONARY QUEUEING SYSTEMS APPLIED TO LONG DISTANCE ROAD RACES.
- Creator
-
Fairweather, Lindon, Reilly, Charles, University of Central Florida
- Abstract / Description
-
We examine the problem of staffing refreshment stations at a long-distance road race. A race is modeled as a mixed queueing network in which the required number of servers at each service station has to be estimated. Two models to represent the progress of runners along a long-distance road race course are developed. One model is a single-class model that allows a road race manager to staff service stations assuming the runners are identical to those in some historical dataset. Another model is a multi-class simulation model that allows a road race manager to simulate a race of any number of runners, classified based on their running pace into different runner classes. Both the single-class model and the multi-class model include estimates for the rates at which the runners arrive at specified locations along the course. The arrival rates, combined with assumed service rates, allow us to base staffing decisions on the Erlang loss formula or a lesser-known staffing rule that gives a lower bound for the required number of servers. We develop a staffing strategy that we call the Peak Arrival Staffing Bound (PASB), which is based on this staffing bound. The PASB and the Erlang loss formula are implemented in the single-class model and the multi-class simulation model. By way of numerical experiments, we find that the PASB is numerically stable and can be used to get staffing results regardless of the traffic intensity. This finding is in contrast to the Erlang loss formula, which is known to become numerically unstable and to overflow when the traffic intensity exceeds 171. We compare numerical results of the PASB and the Erlang loss formula with a blocking probability level of 5% and find that when the traffic intensity is high, staffing results based on the PASB are more conservative than staffing results based on the Erlang loss formula. As the traffic intensity gets lower, we find that staffing results based on the PASB are similar to staffing results based on the Erlang loss formula. These findings suggest that the PASB can be a valuable tool to aid race directors in making staffing decisions for races of all traffic intensities.
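The numerical point about the Erlang loss formula is easy to see in code: a direct evaluation uses the terms a^c and c!, which exceed double-precision range once the traffic intensity and server count reach the low hundreds, consistent with the overflow above traffic intensity 171 noted in the abstract, while the standard one-line recursion stays stable. The sketch below is illustrative Python, not code from the thesis.

```python
import math

def erlang_b_naive(a: float, c: int) -> float:
    """Direct Erlang B: B = (a**c / c!) / sum_{k=0..c} a**k / k!.
    In fixed-precision floating point the terms a**c and c! leave the
    double range (~1.8e308) once a and c reach the low hundreds."""
    num = a ** c / math.factorial(c)
    den = sum(a ** k / math.factorial(k) for k in range(c + 1))
    return num / den

def erlang_b_stable(a: float, c: int) -> float:
    """Numerically stable recursion: B_0 = 1, B_k = a*B_{k-1} / (k + a*B_{k-1})."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

if __name__ == "__main__":
    print(erlang_b_naive(50.0, 60), erlang_b_stable(50.0, 60))  # both agree
    print(erlang_b_stable(200.0, 210))  # recursion still fine at high traffic
    # erlang_b_naive(200.0, 210) raises OverflowError: 200.0**210 > 1.8e308
```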
- Date Issued
- 2011
- Identifier
- CFE0004055, ucf:49154
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004055
- Title
- TESTING THE IMPACT OF TRAINING WITH SIMULATED SCENARIOS FOR INFORMATION SECURITY AWARENESS ON VIRTUAL COMMUNITY OF PRACTICE MEMBERS.
- Creator
-
Tidwell, Craig, Reilly, Charles, University of Central Florida
- Abstract / Description
-
Information security has become a major challenge for all private and public organizations. The protection of proprietary and secret data and the proper awareness of what is entailed in protecting this data are necessary in all organizations. This treatise examines how simulation and training would influence information security awareness over time in virtual communities of practice under a variety of security threats. The hypothesis of the study was that security-trained members of a virtual community of practice would respond significantly better to routine security processes and attempts to breach security or to violate the security policy of their organization or of their virtual community of practice. Deterrence theory was used as the grounded theory and integrated into the information security awareness training with simulated scenarios. The study provided training with simulated scenarios and then tested the users of a virtual community of practice over an approximately twelve-week period to see if the planned security awareness training with simulated security problem scenarios would be effective in improving their responses to the follow-up tests. The research subjects were divided into four groups: the experimental group and three control groups. The experimental group received all of the training and testing events throughout the twelve-week period. The three control groups received various portions of the training and testing. The data from all of the tests were analyzed using the Kruskal-Wallis ranked-order test, and it was determined that there was no significant difference between the groups at the end of the data collection. Even though the null hypothesis, which stated that there would be no difference between the groups' scores on the information security awareness tests, was not rejected, the groups that received the initial training with the simulated scenarios did perform slightly better from the pre-training test to the post-training test when compared with the control group that did not receive the initial training. More research is suggested to determine how information security awareness training with simulated scenarios and follow-up testing can be used to improve and sustain the security practices of members of virtual communities of practice. Specifically, additional research could include: comparing the effect of training with the simulated scenarios against training that does not use them; the potential benefits of using adaptive and intelligent training to focus on the individual subjects' weaknesses and strengths; the length of the simulated-scenario training events, the time between each training event, and the overall length of the training; the demographics of the groups used in the training, and how different user characteristics impact the efficacy of the training with simulated scenarios and testing; and, lastly, examining how increasing the fidelity of the simulated scenarios might impact the results of the follow-up tests.
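For readers reproducing this style of analysis, the Kruskal-Wallis test is available in SciPy; the snippet below uses made-up scores purely for illustration and is not data or code from the study.

```python
from scipy import stats

# Hypothetical post-training scores for the experimental group and the three
# control groups (illustrative numbers only, not the study's data).
experimental = [82, 78, 90, 85, 77, 88]
control_1 = [75, 80, 72, 83, 79, 74]
control_2 = [70, 68, 81, 73, 76, 71]
control_3 = [69, 74, 77, 70, 72, 75]

h_stat, p_value = stats.kruskal(experimental, control_1, control_2, control_3)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")
# A p-value above the chosen significance level (e.g. 0.05) indicates no
# significant difference among the groups, mirroring the study's conclusion.
```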
- Date Issued
- 2011
- Identifier
- CFE0003566, ucf:48923
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003566
- Title
- MULTIOBJECTIVE DESIGN OPTIMIZATION OF GAS TURBINE BLADE WITH EMPHASIS ON INTERNAL COOLING.
- Creator
-
Nagaiah, Narasimha, Geiger, Christopher, Nazzal, Dima, Reilly, Charles, Kapat, Jayanta, University of Central Florida
- Abstract / Description
-
In the design of mechanical components, numerical simulations and experimental methods are commonly used for design creation (or modification) and design optimization. However, a major challenge of using simulation and experimental methods is that they are time-consuming and often cost-prohibitive for the designer. In addition, the simultaneous interactions between aerodynamic, thermodynamic and mechanical integrity objectives for a particular component or set of components are difficult to accurately characterize, even with the existing simulation tools and experimental methods. The current research and practice of using numerical simulations and experimental methods do little to address the simultaneous "satisficing" of multiple and often conflicting design objectives that influence the performance and geometry of a component. This is particularly the case for gas turbine systems that involve a large number of complex components with complicated geometries. Numerous experimental and numerical studies have demonstrated success in generating effective designs for mechanical components; however, their focus has been primarily on optimizing a single design objective based on a limited set of design variables and associated values. In this research, a multiobjective design optimization framework to solve a set of user-specified design objective functions for mechanical components is proposed. The framework integrates a numerical simulation and a nature-inspired optimization procedure that iteratively perturbs a set of design variables, eventually converging to a set of tradeoff design solutions. In this research, a gas turbine engine system is used as the test application for the proposed framework. More specifically, the optimization of the gas turbine blade internal cooling channel configuration is performed. This test application is quite relevant as gas turbine engines serve a critical role in the design of the next-generation power generation facilities around the world. Furthermore, turbine blades require better cooling techniques that increase their cooling effectiveness to cope with rising engine operating temperatures and extend the useful life of the blades. The performance of the proposed framework is evaluated via a computational study, where a set of common, real-world design objectives and a set of design variables that directly influence the set of objectives are considered. Specifically, three objectives are considered in this study: (1) the cooling channel heat transfer coefficient, which measures the rate of heat transfer, where the goal is to maximize this value; (2) the cooling channel air pressure drop, where the goal is to minimize this value; and (3) the cooling channel geometry, specifically the cooling channel cavity area, where the goal is to maximize this value. These objectives, which are conflicting, directly influence the cooling effectiveness of a gas turbine blade and the material usage in its design. The computational results show the proposed optimization framework is able to generate, evaluate and identify thousands of competitive tradeoff designs in a fraction of the time that it would take designers using the traditional simulation tools and experimental methods commonly used for mechanical component design generation. This is a significant step beyond the current research and applications of design optimization to gas turbine blades, specifically, and to mechanical components, in general.
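The three objectives can be summarized as a single vector optimization problem; the symbols h(x), Δp(x), and A(x) below are illustrative shorthand, not the dissertation's notation.

```latex
% x = vector of cooling-channel design variables over the feasible set X:
%   h(x): heat transfer coefficient (maximize)
%   \Delta p(x): air pressure drop (minimize)
%   A(x): channel cavity area (maximize)
\begin{equation*}
\max_{x \in X}\ \bigl(\, h(x),\ -\Delta p(x),\ A(x) \,\bigr)
\end{equation*}
% A design x^* is a tradeoff (Pareto-optimal) solution if no feasible x improves
% one objective without worsening another; the nature-inspired search described
% above approximates this Pareto set.
```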
- Date Issued
- 2012
- Identifier
- CFE0004495, ucf:49282
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004495
- Title
- Modeling and Analysis of Automated Storage and Retrieval System with Multiple in-the-aisle Pick Positions.
- Creator
-
Ramtin, Faraz, Pazour, Jennifer, Reilly, Charles, Xanthopoulos, Petros, Goodman, Stephen, University of Central Florida
- Abstract / Description
-
This dissertation focuses on developing analytical models for an automated storage and retrieval system with multiple in-the-aisle pick positions (MIAPP-AS/RS). Specifically, our first contribution develops an expected travel time model for different pick positions and different physical configurations for a random storage policy. This contribution has been accepted for publication in IIE Transactions (Ramtin & Pazour, 2014) and was the featured article in the IE Magazine (Askin & Nussbaum, 2014). The second contribution addresses an important design question associated with MIAPP-AS/RS, which is the assignment of items to pick positions in an MIAPP-AS/RS. This contribution has been accepted for publication in IIE Transactions (Ramtin & Pazour, 2015). Finally, the third contribution is to develop travel time models and to determine the optimal assignment of SKUs to storage locations under different storage assignment policies, such as dedicated and class-based storage policies, for MIAPP-AS/RS. An MIAPP-AS/RS is a case-level order-fulfillment technology that enables order picking via multiple pick positions (outputs) located in the aisle. We develop expected travel time models for different operating policies and different physical configurations. These models can be used to analyze MIAPP-AS/RS throughput performance during peak and non-peak hours. Moreover, closed-form approximations are derived for the case of an infinite number of pick positions, which enable us to derive the optimal shape configuration that minimizes expected travel times. We compare our expected travel time models with a simulation model of a discrete rack, and the results validate that our models provide good estimates. Finally, we conduct a numerical experiment to illustrate the trade-offs between performance of operating policies and design configurations. We find that MIAPP-AS/RS with a dual picking floor and input point is a robust configuration because a single command operating policy has comparable throughput performance to a dual command operating policy. As a second contribution, we study the impact of selecting different pick position assignments on system throughput, as well as system design trade-offs that occur when MIAPP-AS/RS is running under different operating policies and different demand profiles. We study the impact of product-to-pick-position assignments on the expected throughput for different operating policies, demand profiles, and shape factors. We develop efficient algorithms of complexity O(n log n) that provide the assignment that minimizes the expected travel time. Also, for different operating policies, shape configurations, and demand curves, we explore the structure of the optimal assignment of products to pick positions and quantify the difference between using a simple, practical assignment policy versus the optimal assignment. Finally, we derive closed-form analytical travel time models by approximating the optimal assignment's expected travel time using continuous demand curves and assuming an infinite number of pick positions in the aisle. We illustrate that these continuous models work well in estimating the travel time of a discrete rack and use them to find optimal design configurations. As the third and final contribution, we study the impact of dedicated and class-based storage policies on the performance of MIAPP-AS/RS. We develop mathematical optimization models to minimize the travel time of the crane by changing the assignment of the SKUs to pick positions and storage locations simultaneously. We develop a more tractable solution approach by applying Benders decomposition, as well as an accelerated procedure for the Benders algorithm. We observe high degeneracy for the optimal solution when we use the Chebyshev metric to calculate the distances. As a result of this degeneracy, we realize that the assignment of SKUs to pick positions does not impact the optimal solution. We also develop closed-form travel time models for MIAPP-AS/RS under a class-based storage policy.
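In AS/RS travel-time modeling, a crane that moves horizontally and vertically at the same time has a Chebyshev-type travel time; the helper below is a generic illustration with assumed rack dimensions and speeds, not code or data from the dissertation.

```python
def crane_travel_time(dx_ft: float, dy_ft: float, vx_ft_s: float, vy_ft_s: float) -> float:
    """One-way travel time of an AS/RS crane that moves horizontally and
    vertically simultaneously: the Chebyshev-style maximum of the two
    axis travel times (illustrative helper, not the dissertation's model)."""
    return max(dx_ft / vx_ft_s, dy_ft / vy_ft_s)

# Example: travel to a location 120 ft down the aisle and 30 ft up, with
# assumed speeds of 10 ft/s horizontal and 3 ft/s vertical.
print(crane_travel_time(120, 30, 10, 3))  # max(12.0, 10.0) = 12.0 seconds
```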
- Date Issued
- 2015
- Identifier
- CFE0005695, ucf:50142
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005695
- Title
- Semiconductor Design and Manufacturing Interplay to Achieve Higher Yields at Reduced Costs using SMART Techniques.
- Creator
-
Oberai, Ankush Bharati, Yuan, Jiann-Shiun, Abdolvand, Reza, Georgiopoulos, Michael, Sundaram, Kalpathy, Reilly, Charles, University of Central Florida
- Abstract / Description
-
Since the outset of the IC semiconductor market there has been a gap between its design and manufacturing communities. This gap continued to grow as device geometries started to shrink and the manufacturing processes and tools got more complex. This gap lowered manufacturing yield, leading to higher IC costs and delays in their time to market. It also impacted the performance of the ICs, affecting the overall functionality of the systems they were integrated in. However, in recent years there have been major efforts to bridge the gap between design and manufacturing with software solutions that provide closer collaboration techniques between the design and manufacturing communities. The root cause of this gap is the difference in the knowledge and skills required by the two communities. The IC design community is more microelectronics, electrical engineering and software driven, whereas the IC manufacturing community is more driven by material science, mechanical engineering, physics and robotics. Cross-training between the two is almost nonexistent and not even mandated. This gap is expected to widen with the demand for more complex designs and the miniaturization of electronic products. The growing need for MEMS, 3-D NAND, and IoT devices is another driver that could widen the gap between design and manufacturing. To bridge this gap, it is critical to have closed-loop solutions between design and manufacturing. This could be achieved through SMART automation on both sides using Artificial Intelligence, Machine Learning, and Big Data algorithms. The lack of automation and predictive capabilities has made the situation worse for yield and total turnaround times. With the growing fabless and foundry business model, bridging the gap has become even more critical. A Smart Manufacturing philosophy must be adopted to make this bridge possible. We need to understand the fab-fabless collaboration requirements and the mechanism to bring design to the manufacturing floor for yield improvement. Additionally, the design community must be educated in manufacturing process and tool knowledge, so they can design for improved manufacturability. This study requires an understanding of the elements impacting manufacturing on both ends of the design and manufacturing process. Additionally, we need to understand the process rules that must be followed closely in the design phase. The SMART automation techniques best suited to bridge the gap need to be studied and analyzed for their effectiveness.
- Date Issued
- 2018
- Identifier
- CFE0007351, ucf:52096
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007351
- Title
- Design of the layout of a manufacturing facility with a closed loop conveyor with shortcuts using queueing theory and genetic algorithms.
- Creator
-
Lasrado, Vernet, Nazzal, Dima, Mollaghasemi, Mansooreh, Reilly, Charles, Garibay, Ivan, Sivo, Stephen, Armacost, Robert, University of Central Florida
- Abstract / Description
-
With the ongoing technology battles and price wars in today's competitive economy, every company is looking for an advantage over its peers. A particular choice of facility layout can have a significant impact on the ability of a company to maintain lower operational expenses under uncertain economic conditions. It is known that systems with less congestion have lower operational costs. Traditionally, manufacturing facility layout problem methods aim at minimizing the total distance traveled, the material handling cost, or the time in the system (based on distance traveled at a specific speed). The proposed methodology solves the looped layout design problem for a looped layout manufacturing facility with a looped conveyor material handling system with shortcuts. It uses a system performance metric, i.e., the work in process (WIP) on the conveyor and at the input stations to the conveyor, as a factor in the minimizing function for the facility layout optimization problem, which is solved heuristically using a permutation genetic algorithm. The proposed methodology also presents the case for determining the shortcut locations across the conveyor simultaneously (while determining the layout of the stations around the loop) versus the traditional method, which determines the shortcuts sequentially (after the layout of the stations has been determined). The proposed methodology also presents an analytical estimate for the work in process at the input stations to the closed-loop conveyor. It is contended that the proposed methodology (using the WIP as a factor in the minimizing function for the facility layout while simultaneously solving for the shortcuts) will yield a facility layout that is less congested than a facility layout generated by the traditional methods (using the total distance traveled as a factor of the minimizing function for the facility layout while sequentially solving for the shortcuts). The proposed methodology is tested on a virtual 300 mm Semiconductor Wafer Fabrication Facility with a looped conveyor material handling system with shortcuts. The results show that the facility layouts generated by the proposed methodology have significantly less congestion than facility layouts generated by traditional methods. The validation of the developed analytical estimate of the work in process at the input stations reveals that the proposed methodology works extremely well for systems with Markovian Arrival Processes.
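The layout search is carried out by a permutation genetic algorithm; the skeleton below illustrates the general mechanics (truncation selection, order crossover, swap mutation) under a placeholder fitness function, whereas the dissertation's fitness is the analytical WIP estimate. All names and parameter values here are illustrative only.

```python
import random

def permutation_ga(n_stations, fitness, pop_size=40, generations=200,
                   mutation_rate=0.2, seed=0):
    """Minimize fitness(layout) over permutations of station indices.
    The fitness argument is a placeholder for the analytical WIP estimate of a
    candidate layout; the dissertation's model would be plugged in here."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_stations), n_stations) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # best (lowest WIP) first
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_stations)     # order crossover
            child = p1[:cut] + [s for s in p2 if s not in p1[:cut]]
            if rng.random() < mutation_rate:       # swap mutation
                i, j = rng.sample(range(n_stations), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Toy usage with a stand-in fitness (not a WIP model): distance of each station
# from its index position, so the identity layout [0, 1, ..., 7] is optimal.
best = permutation_ga(8, fitness=lambda layout: sum(abs(s - i) for i, s in enumerate(layout)))
print(best)
```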
- Date Issued
- 2011
- Identifier
- CFE0004125, ucf:49088
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004125
- Title
- An SoS Conceptual Model, LVC Simulation Framework, and a Prototypical Implementation of Unmanned System Interventions for Nuclear Power Plant Disaster Preparedness, Response, and Mitigation.
- Creator
-
Davis, Matthew, Proctor, Michael, O'Neal, Thomas, Reilly, Charles, Sulfredge, C., Smith, Roger, University of Central Florida
- Abstract / Description
-
Nuclear power plant disasters can have severe and far-reaching consequences; thus, emergency managers and first responders from utility owners to the DoD must be prepared to respond to and mitigate their effects, protecting the public and the environment from further damage. Rapidly emerging unmanned systems promise significant improvement in response and mitigation of nuclear disasters. Models and simulations (M&S) may play a significant role in improving readiness and reducing risks through their use in planning, analysis, preparation training, and mitigation rehearsal for a wide spectrum of derivative scenarios. Legacy nuclear reactor M&S lack interoperability between themselves and avatar or agent-based simulations of emergent unmanned systems. Bridging the gap between the past and the evolving future, we propose a conceptual model (CM) using a System of Systems (SoS) approach, a simulation federation framework capable of supporting concurrent and interoperating live, virtual and constructive simulation (LVC), and demonstrate a prototypical implementation of an unmanned system intervention for a nuclear power plant disaster using the constructive simulation component. The SoS CM, LVC simulation framework, and prototypical implementation are generalizable to other preparedness, response, and mitigation scenarios. The SoS CM broadens the current stovepipe reactor-based simulations to a system-of-systems perspective. The framework enables distributed interoperating simulations with a network of legacy and emergent avatar and agent simulations. The unmanned system implementation demonstrates the feasibility of the SoS CM and LVC framework through replication of selected Fukushima events. Further, the system-of-systems approach advances life cycle stages including concept exploration, system design, engineering, training, and mission rehearsal. Live, virtual, and constructive component subsystems of the CM are described along with an explanation of input/output requirements. Finally, applications to analysis and training, an evaluation of the SoS CM based on recently proposed criteria found in the literature, and suggestions for future research are discussed.
- Date Issued
- 2017
- Identifier
- CFE0006732, ucf:51879
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006732
- Title
- Defining a Stakeholder-Relative Model to Measure Academic Department Efficiency at Achieving Quality in Higher Education.
- Creator
-
Robinson, Federica, Sepulveda, Jose, Reilly, Charles, Nazzal, Dima, Armacost, Robert, Feldheim, Mary, University of Central Florida
- Abstract / Description
-
In a time of strained resources and dynamic environments, effective and efficient systems are critical. This dissertation was developed to address the need to use feedback from multiple stakeholder groups to define quality and assess an entity's efficiency at achieving such quality. A decision support model with applicability to diverse domains was introduced to outline the approach. Three phases, (1) quality model development, (2) input-output selection, and (3) relative efficiency assessment, captured the essence of the process and delineated the approach per tool applied. This decision support model was adapted to higher education to assess academic departmental efficiency at achieving stakeholder-relative quality. Phase 1 was accomplished through a three-round, Delphi-like study that involved user group refinement. Those results were compared to the criteria of an engineering accreditation body (ABET) to support the model's validity for capturing quality in the College of Engineering & Computer Science, its departments, and its programs. In Phase 2, the Analytic Hierarchy Process (AHP) was applied to the validated model to quantify the perspectives of students, administrators, faculty, and employers (SAFE). Using the composite preferences for the collective group (n=74), the model was limited to the top 7 attributes, which accounted for about 55% of total preferences. Data corresponding to the resulting variables, referred to as key performance indicators, was collected using various information sources and infused into the data envelopment analysis (DEA) methodology (Phase 3). This process revealed both efficient and inefficient departments while offering transparency about opportunities to maximize quality outputs. Findings validate the potential of the Delphi-like, analytic hierarchical, data envelopment analysis approach for administrative decision-making in higher education. However, the availability of more meaningful metrics and data is required to adapt the model for decision-making purposes. Several recommendations were included to improve the usability of the decision support model, and future research opportunities were identified to extend the analyses and apply the model to other areas.
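In Phase 2, AHP priorities are conventionally obtained as the normalized principal eigenvector of a pairwise-comparison matrix; the snippet below demonstrates that calculation on a small hypothetical matrix and is not taken from the dissertation.

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix for three quality attributes
# (illustrative values only; the study's comparisons came from SAFE stakeholders).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()   # normalized priority weights
print(weights)                          # approximately [0.65, 0.23, 0.12]

# Consistency ratio check (CR < 0.10 is the usual acceptance threshold):
n = A.shape[0]
ci = (eigvals.real.max() - n) / (n - 1)
cr = ci / 0.58                          # 0.58 = Saaty's random index for n = 3
print(f"CR = {cr:.3f}")
```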
- Date Issued
- 2013
- Identifier
- CFE0004921, ucf:49636
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004921