-
-
Title
-
A BURKEAN METHOD FOR ANALYZING ENVIRONMENTAL RHETORIC.
-
Creator
-
Stewart, John, Dombrowski, Paul, University of Central Florida
-
Abstract / Description
-
The work of Kenneth Burke provides a method of rhetorical analysis that is useful for bringing to the surface features of texts that are not readily apparent, such as how they produce identification in their audiences, and for revealing rhetorical factors related to but outside the text, for example the authors' motives. Burke's work is wide-ranging and open to many interpretations, so it can be difficult to apply. This study condenses some of his more important concepts into a simplified method with several practical applications; it focuses on how Burke's theories can be applied to analyzing environmental texts and helps reveal how those texts are rhetorically effective. This method is also shown to be useful for rhetoricians and other students of language in analyzing the motives and meanings behind complicated texts. An example analysis is developed in detail to demonstrate the utility of this approach for analyzing environmental rhetoric and to help clarify how to apply it to other texts. A publication by the Center for Ecoliteracy (CEL), a nonprofit organization engaged in environmental education, provides the basis for a concrete example of applying this method to a current work of environmental rhetoric. The CEL serves as an example of current environmental organizations and their rhetoric, and a Burkean analysis of its publications begins by revealing some of the principles operating in the texts that make them rhetorically effective. This analysis also goes beyond basic dialectics to question how the texts function as "symbolic action" and how they fit into Burke's hierarchic system of language. The method developed in this study determines not only how the text produces identification in an audience, but also the motives behind producing the text.
The CEL's publications are good representative examples of current environmental writing, so the conclusions drawn from an analysis of the CEL's texts can be applied to other environmental rhetoric.
-
Date Issued
-
2009
-
Identifier
-
CFE0002594, ucf:48248
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002594
-
-
Title
-
DEVELOPMENT OF NOVEL DART TOFMS ANALYTICAL TECHNIQUES FOR THE IDENTIFICATION OF ORGANIC CONTAMINATION ON SPACEFLIGHT-RELATED SUBSTRATES AND AQUEOUS MEDIA.
-
Creator
-
Loftin, Kathleen, Clausen, Christian, University of Central Florida
-
Abstract / Description
-
Organic contamination on spaceflight hardware is an ongoing concern for spaceflight safety. In addition, for the goal of analyzing for possible evidence of extraterrestrial life, it is necessary to consider the presence of terrestrial contamination. This paper introduces and evaluates a new method using a direct analysis in real time (DART) ionization source paired with a high-resolution time-of-flight mass spectrometer (TOFMS) for the determination of organic contamination on spaceflight hardware and ground support materials. This novel analytical technique has significant advantages over current methodologies. Materials analyzed in this study were historically considered probable contaminants on spaceflight-related substrates. A user-defined library was generated because of the non-traditional mass spectra produced by the DART. Continual improvement of analytical methods for the detection of trace levels of contaminants in potential drinking water sources is of extreme importance to both regulatory communities and concerned citizens. This paper also evaluates a novel analytical method using stir bar sorptive extraction (SBSE) techniques combined with DART-TOFMS analysis. Compounds of interest include several representative pharmaceutical contaminants of emerging concern listed in EPA Method 1694. Optimal SBSE and DART experimental parameters are investigated along with accuracy, precision, limits of detection, and calibration linearity.
-
Date Issued
-
2009
-
Identifier
-
CFE0002714, ucf:48187
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002714
-
-
Title
-
INFLUENCE OF THE CSI EFFECT ON EDUCATION AND MASS MEDIA.
-
Creator
-
McManus, Sarah, Schultz, John, University of Central Florida
-
Abstract / Description
-
Forensic science television shows, especially CSI: Crime Scene Investigation, have been said to influence the public's perception of how forensic science is used and to create interest in studying forensic science and pursuing jobs in the field. This study investigates this claim through a variety of methods. First, definitions of the CSI effect are discussed, including how it was first used and mentioned in the media. Second, survey data from students in a forensic anthropology course regarding interest in forensic science media and educational and career choices are analyzed. Third, the number and debut dates of forensic science non-fiction books, novels, non-fiction television shows, and television dramas are investigated. Finally, a content analysis of the television show Bones is undertaken in order to understand how the forensic anthropology presented in this show differs from the actual practice of forensic anthropology. Results of this study indicate that, overall, students who wanted to pursue forensic science careers and graduate study did not watch more forensic science television shows or read more forensic science novels than those who did not want to pursue forensic science careers and graduate study. Also, based on the decreased interest in a number of forensic careers, it appears that respondents may have started the course with false perceptions regarding the actual job descriptions of these careers. Regarding the number and debut dates of forensic science media, this study found that the majority of non-fiction forensic anthropology books, non-fiction television shows, and television dramas debuted after CSI appeared, corroborating the claim that CSI led to an increase in interest in forensic anthropology. In addition, this study found that while much of Bones is fictionalized for entertainment purposes, many of the techniques and analyses presented on the show have a peripheral basis in scientific methods.
-
Date Issued
-
2010
-
Identifier
-
CFE0003184, ucf:48596
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003184
-
-
Title
-
Leadership and Subordinate Engagement: A Meta-Analytic Examination of its Mechanisms using Self-Determination Theory.
-
Creator
-
Young, Henry, Wang, Wei, Joseph, Dana, Fritzsche, Barbara, University of Central Florida
-
Abstract / Description
-
Although past research has suggested ineffective leadership to be the most common reason for low levels of employee engagement, little is known about the mediating mechanisms underlying this relationship. To address this gap in the research, I tested a theoretical model based on Self-Determination Theory (SDT; Deci & Ryan, 2000) in which two focal mechanisms, leader-member exchange (LMX) and empowerment, functioned in sequential order to predict the relationship between Full Range Leadership and subordinate engagement. Results showed that transactional leadership had both positive and negative indirect effects on engagement, suggesting that transactional leadership comprises a "double-edged sword" as a predictor of subordinate engagement. In contrast, the indirect effects between transformational leadership and engagement were consistently positive. As such, current mediation models used in leadership research can benefit by drawing from SDT to investigate the unfolding process of leadership through sequential mediation.
-
Date Issued
-
2017
-
Identifier
-
CFE0006675, ucf:51250
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006675
-
-
Title
-
Expansive Learning in FYC: Using Linguistic Discourse Analysis to Measure the Effects of Threshold Concepts in Facilitating Generalization.
-
Creator
-
Morrow, Allison, Wardle, Elizabeth, Roozen, Kevin, Hall, Mark, University of Central Florida
-
Abstract / Description
-
This study examines how and whether threshold concepts enable expansive learning and generalization. Expansive learning and generalization are part of the highly contested conceptions of transfer, and these specific conceptions offer a more complex view of transfer that deals with knowledge transformation (Tuomi-Gröhn and Engeström; Beach). One way that we can see expansive learning and generalization transform knowledge is through the teaching of threshold concepts. In the last decade, there has been a movement toward using threshold concepts in first-year composition (FYC) courses that take up writing studies as their curricula (Wardle and Downs; Dew). Even though using threshold concepts seems to be one promising way of specifically studying expansive learning and generalization, we have no studies examining whether or not teaching threshold concepts encourages expansive learning. The studies we do have do not offer methodologies that would enable us to study threshold concepts and generalization. Past methods, such as case studies, interviews, and surveys, have relied on small sample sizes for data collection (Wardle; Dively and Nelms; Nowacek). Much of the transfer data does not actually focus on the writing or the texts themselves, or on the recurring moves that students use in those texts. Linguistic discourse analysis offers a promising avenue for examining the generalization of threshold concepts. Using research methods like linguistic discourse analysis in combination with the best qualitative methods of transfer research, like case studies or interviews, allows for a larger sample size and lets us see how students use these threshold concepts in their writing. Through linguistic discourse analysis and interviews, this study suggests that students' perceptions of writing change after being introduced to some threshold concepts from the Writing About Writing curriculum.
The threshold concepts that students are presented with in the Writing About Writing curriculum at UCF tackle misconceptions and help students change how they view writing. Once they change this view, they are able to generalize the knowledge they have into their own writing. Even if students do not use the exact terminology from the curriculum, they are able to generalize those threshold concepts by using their own language or even through analogies.
-
Date Issued
-
2015
-
Identifier
-
CFE0005849, ucf:50937
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005849
-
-
Title
-
Investigation of Real Gas Effects on Centrifugal Compressor Analytical Methods for Supercritical CO2 Power Cycles.
-
Creator
-
Blanchette, Lauren, Kapat, Jayanta, Kassab, Alain, Vasu Sumathi, Subith, University of Central Florida
-
Abstract / Description
-
As supercritical carbon dioxide (sCO2) power cycles have shown potential to be the next generation of power cycles, an immense amount of research has gone into developing this system. One of the setbacks facing development and optimization of this cycle is uncertainty about the ability of current design and analysis methods to accurately model turbomachinery working with sCO2. Because the desired compressor inlet conditions lie in close proximity to the critical point, accurate design and analysis of this power cycle component is one of the main concerns. The present study provides aerodynamic analysis of a centrifugal compressor impeller blade with sCO2 as the working fluid through a comparative study between three-dimensional (3D) computational fluid dynamics (CFD) and one-dimensional (1D) mean line analyses. The main centrifugal compressor of a 100 MW sCO2 closed-loop recuperated recompression Brayton cycle is investigated. Using conventional loss correlations for centrifugal compressors found in the literature, and geometrical parameters developed through a past mean line design, losses were calculated for the specified compressor impeller. The aerodynamic performance is then predicted through the 1D analysis. Furthermore, the boundary conditions for the CFD analysis were derived from the mean line analysis of the centrifugal compressor to carry out the 3D study of the sCO2 impeller blade. As the Span and Wagner equation of state has been proven to be the most accurate when working in the vicinity of the critical point, this real gas equation of state was implemented in both analyses. Consequently, a better understanding was developed of best practices for modeling a real gas sCO2 centrifugal compressor, along with the limitations that currently exist when utilizing commercial CFD solvers.
Furthermore, the resulting performance and aerodynamic behavior from the 1D analysis were compared with the predictions of the CFD analysis. Past literature on sCO2 compressor analysis methodology has focused on small-scale power cycles; this work serves as the first comparison of 1D and 3D analysis methodology for large-scale sCO2 centrifugal compressors. The lack of commercial CFD codes able to model phase change within sCO2 turbomachinery, together with the possible breach of flow properties into the saturation region at the leading edge of the impeller blade, limits the operating conditions that can be simulated within these analysis tools. Further, the rapid expansion rate within this region has been predicted to cause non-equilibrium condensation, leading the fluid to a metastable vapor state. Owing to the complexity of two-phase models, a proposed methodology for modeling sCO2 compressors as single phase is to represent metastable properties through the extrapolation of equilibrium properties into the liquid domain up to the spinodal limit. This equation of state definition with metastable properties was used to model a 3D converging-diverging nozzle because of its flow dynamics, which are similar to those of a compressor blade channel. The equation of state was implemented through a temperature- and pressure-dependent property table amended with metastable properties using the NIST REFPROP database. Modeling was performed for inlet conditions with varied closeness to the fluid's critical point, and the accuracy of utilizing this table to define sCO2 properties with respect to its resolution was investigated. It was determined that the resulting interpolation error was highly influenced by the closeness to the critical point. Additionally, the effect on the operating region that can be modeled when utilizing the metastable real gas property table through single-phase modeling was examined.
-
Date Issued
-
2016
-
Identifier
-
CFE0006442, ucf:51466
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006442
-
-
Title
-
The Nuts and Bolts of Leadership Training: A Meta-Analysis.
-
Creator
-
Lacerenza, Christina, Salas, Eduardo, Joseph, Dana, Burke, Shawn, University of Central Florida
-
Abstract / Description
-
Organizations within the United States spent over $70 billion on corporate training in 2013; 35% of this budget was allocated to management and leadership, making this the leading training area for organizations (O'Leonard, 2014). Despite this spending, only 13% of companies believe that they have done a quality job training their leaders (Schwartz, Bersin, & Pelster, 2014). This calls into question the utility and effectiveness of current initiatives. In response, this study meta-analytically organizes the leadership training literature to identify the conditions under which these programs are most effective. The current meta-analysis provides the following contributions to the field: (1) meta-analytic data across years (1887–2014) and organization types, utilizing only employee personnel data; (2) investigation of training effectiveness across all Kirkpatrick (1959) evaluation levels (i.e., trainee reactions, learning, transfer, and results); (3) meta-analytic data computed using updated procedures identified by Morris and DeShon (2002); and (4) an examination of moderators not previously investigated. Based on data from 335 independent samples, results suggest that leadership training is effective across reactions (d = .63), learning (d = .73), transfer (d = .82), and results (d = .72). The strength of these effect sizes depends on several moderators, but the pattern of results is not consistent across all outcomes. For learning outcomes, programs incorporating information-, demonstration-, and practice-based delivery methods were most effective, while other design and delivery features did not affect results. With regard to transfer, programs that utilized information-, demonstration-, and practice-based methods, feedback, content based on a needs analysis, face-to-face settings, and a voluntary attendance policy produced the largest effect sizes.
For results, longer programs that were mandatory, spanned weekly sessions, incorporated practice-based methods, and were located on-site produced the largest effect sizes.
-
Date Issued
-
2015
-
Identifier
-
CFE0006341, ucf:51578
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006341
-
-
Title
-
Accelerated Life Model with Various Types of Censored Data.
-
Creator
-
Pridemore, Kathryn, Pensky, Marianna, Mikusinski, Piotr, Swanson, Jason, Nickerson, David, University of Central Florida
-
Abstract / Description
-
The Accelerated Life Model is one of the most commonly used tools in the analysis of survival data, which are frequently encountered in medical research and reliability studies. In these types of studies we often deal with complicated data sets for which, in practical situations, we cannot observe the complete data due to censoring. Such difficulties are particularly apparent in the fact that there is little work in the statistical literature on the Accelerated Life Model for complicated types of censored data, such as doubly censored data, interval censored data, and partly interval censored data. In this work, we use the Weighted Empirical Likelihood approach (Ren, 2001) to construct tests, confidence intervals, and goodness-of-fit tests for the Accelerated Life Model in a unified way for various types of censored data. We also provide algorithms for implementation and present relevant simulation results. I began working on this problem with Dr. Jian-Jian Ren. Upon Dr. Ren's departure from the University of Central Florida, I completed this dissertation under the supervision of Dr. Marianna Pensky.
-
Date Issued
-
2013
-
Identifier
-
CFE0004913, ucf:49613
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004913
-
-
Title
-
How Do Teams Become Cohesive? A Meta-Analysis of Cohesion's Antecedents.
-
Creator
-
Grossman, Rebecca, Salas, Eduardo, Burke, Shawn, Carter, Nathan, University of Central Florida
-
Abstract / Description
-
While a wealth of research has deemed cohesion critical for team effectiveness (e.g., Mullen & Copper, 1994; Beal et al., 2003), less emphasis has been placed on understanding how to get it. Multiple studies do examine cohesion antecedents, but these studies have not yet been integrated in either theoretical or empirical terms. The purpose of this study was thus to begin addressing this gap in the literature. I conducted a series of meta-analyses to identify and explore various antecedents of cohesion, as well as moderators of antecedent-cohesion relationships. Findings revealed a variety of cohesion antecedents. Specifically, team behaviors, emergent states, team composition variables, leadership variables, team interventions, and situational variables, as well as specific variables within each of these categories, were all explored as cohesion antecedents. In most cases, significant relationships with cohesion were demonstrated and did not differ across levels of analysis or based on cohesion type (i.e., task cohesion, social cohesion, group pride). Hypotheses pertaining to moderators of antecedent-cohesion relationships (e.g., theoretical match between antecedent and cohesion) generally were not supported. Thus, while most antecedents appeared to be important for cohesion's formation and sustainment, some interesting differences emerged, providing insight as to where attention should be focused when enhanced cohesion is desired. Results provide a foundation for the development of more comprehensive models of team cohesion, as well as insight into the mechanisms through which cohesion can be facilitated in practice. Ultimately, findings suggest that teams can become cohesive through the presence of various processes and emergent states, team interventions, and components of their situational context.
-
Date Issued
-
2014
-
Identifier
-
CFE0005499, ucf:50357
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005499
-
-
Title
-
Characterization of Dynamic Structures Using Parametric and Non-parametric System Identification Methods.
-
Creator
-
Al Rumaithi, Ayad, Yun, Hae-Bum, Catbas, Necati, Mackie, Kevin, University of Central Florida
-
Abstract / Description
-
The effects of soil-foundation-structure (SFS) interaction and extreme loading on structural behaviors are important issues in structural dynamics. System identification is an important technique for characterizing linear and nonlinear dynamic structures. Identification methods are usually classified into parametric and non-parametric approaches based on how they model dynamic systems. The objective of this study is to characterize the dynamic behaviors of two realistic civil engineering structures, one in an SFS configuration and one subjected to impact loading, by comparing different parametric and non-parametric identification results.
First, SFS building models were studied to investigate the effects of foundation type on structural behaviors under seismic excitation. Three foundation types, including fixed, pile, and box foundations, were tested on a hydraulic shake table, and the dynamic responses of the SFS systems were measured with instrumented sensing devices. Parametric modal analysis methods, including NExT-ERA, DSSI, and SSI, were studied as linear identification methods whose governing equations are modeled on linear equations of motion. NExT-ERA, DSSI, and SSI were used to analyze earthquake-induced damage effects on the global behavior of the superstructures for the different foundation types. MRFM was also studied to characterize the nonlinear behavior of the superstructure during the seismic events. MRFM is a nonlinear non-parametric identification method with the advantage of characterizing local nonlinear behaviors using interstory stiffness and damping phase diagrams. The major findings from the SFS study are:
* The investigated modal analysis methods identified the linearized version of the model behavior. The change of global structural behavior induced by the seismic damage could be quantified through the modal parameter identification. The foundation types also affected the identification results due to different SFS interactions. The identification accuracy was reduced as the nonlinear effects due to damage increased.
* MRFM could characterize the nonlinear behavior of the interstory restoring forces. The localized damage could be quantified by measuring the dissipated energy of each floor. The most severe damage in the superstructure was observed with the fixed foundation.
Second, the responses of a full-scale suspension bridge in a ship-bridge collision accident were analyzed to characterize the dynamic properties of the bridge. Three parametric and non-parametric identification methods, NExT-ERA, PCA, and ICA, were used to process the bridge response data to evaluate the mode decomposition performance of these methods for traffic, no-traffic, and collision loading conditions. The PCA and ICA identification results were compared with those of the NExT-ERA method for different excitation types, response types, system damping, and sensor spatial resolutions. The major findings from the ship-bridge collision study include:
* PCA was able to characterize the mode shapes and modal coordinates for velocity and displacement responses. The results using the acceleration were less accurate. The inter-channel correlation and sensor spatial resolution had significant effects on the mode decomposition accuracy.
* ICA showed the lowest performance in this mode decomposition study. It was observed that the excitation type and system characteristics significantly affected the ICA accuracy.
-
Date Issued
-
2014
-
Identifier
-
CFE0005567, ucf:50295
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005567
-
-
Title
-
The Effects of Assumption on Subspace Identification Methods Using Simulation and Experimental Data.
-
Creator
-
Kim, Yoonhwak, Yun, Hae-Bum, Catbas, Fikret, Mackie, Kevin, Nam, Boo Hyun, Behal, Aman, University of Central Florida
-
Abstract / Description
-
In the modern dynamic engineering field, experimental dynamics is an important area of study, encompassing structural dynamics, structural control, and structural health monitoring. In experimental dynamics, methods for obtaining measured data have seen a great influx of research effort aimed at producing accurate and reliable experimental analyses. A technical challenge is the procurement of informative data that exhibits the desired system information. In many cases, the number of sensors is limited by cost and the difficulty of data archiving. Furthermore, measuring input forces is often technically difficult, and even when the desired data can be obtained, the measurements may contain substantial noise. As a result, researchers have developed many analytical tools that work with limited informative data; the subspace identification method is one of them. Subspace identification includes three different approaches: Deterministic Subspace Identification (DSI), Stochastic Subspace Identification (SSI), and Deterministic-Stochastic Subspace Identification (DSSI). The subspace identification method is widely used for its fast computation and accuracy. Depending on the available information (output only, input/output, or input/output with noise), DSI, SSI, and DSSI are applied under different assumptions, which can affect the analytical results. The objective of this study is to observe the effect of these assumptions on subspace identification under various data conditions. First, an analytical simulation study is performed using a six-degree-of-freedom mass-damper-spring system created in MATLAB. Various excitation conditions are applied to the simulation model, and the excitation and response are analyzed using the subspace identification method. For stochastic problems, artificial noise is added to the excitation and the same steps are followed. Through this simulation test, the effects of the assumptions on subspace identification are quantified. Once the effects of the assumptions are studied using the simulation model, the subspace identification method is applied to dynamic response data collected from large-scale 12-story buildings with different foundation types tested at Tongji University, Shanghai, China. Noise effects are verified using three different excitation types. Furthermore, using the DSSI, which gives the most accurate results, the effect of different foundations on the superstructure is analyzed.
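As a much-reduced, hypothetical analogue of the six-degree-of-freedom MATLAB model described in the abstract, the sketch below simulates free vibration of a single-degree-of-freedom mass-damper-spring system in pure Python and recovers its natural frequency from the response. All parameter values are invented for illustration; this is not the study's model or its identification algorithm.

```python
import math

def simulate_free_vibration(m, k, c, x0, dt, steps):
    """Central-difference time stepping of m*x'' + c*x' + k*x = 0."""
    xs = [x0, x0]  # x(-dt) = x(0) = x0 approximates zero initial velocity
    for _ in range(steps):
        x_prev, x_curr = xs[-2], xs[-1]
        # m*(x+ - 2x + x-)/dt^2 + c*(x+ - x-)/(2dt) + k*x = 0, solved for x+
        a = m / dt**2 + c / (2 * dt)
        b = 2 * m / dt**2 - k
        d = m / dt**2 - c / (2 * dt)
        xs.append((b * x_curr - d * x_prev) / a)
    return xs

def estimate_frequency(xs, dt):
    """Estimate frequency (Hz) from upward zero crossings of the response."""
    crossings = [i for i in range(1, len(xs)) if xs[i - 1] < 0 <= xs[i]]
    if len(crossings) < 2:
        return None
    period = (crossings[-1] - crossings[0]) * dt / (len(crossings) - 1)
    return 1.0 / period

m, k, c = 1.0, 39.478, 0.05   # k chosen so f_n = sqrt(k/m)/(2*pi) is about 1 Hz
dt, steps = 0.001, 10000
xs = simulate_free_vibration(m, k, c, 1.0, dt, steps)
f_est = estimate_frequency(xs, dt)
f_exact = math.sqrt(k / m) / (2 * math.pi)
```

With light damping, the estimated frequency from the simulated response should agree closely with the analytical natural frequency, which is the kind of check the simulation study above performs at larger scale.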
-
Date Issued
-
2013
-
Identifier
-
CFE0004703, ucf:49822
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004703
-
-
Title
-
Performing on the Screen: An Exploration of Gender Representation in Technology Video Advertisements.
-
Creator
-
Persaud, Subriena, Grauerholz, Elizabeth, Carter, Shannon, Huff-Corzine, Lin, University of Central Florida
-
Abstract / Description
-
This study investigates the representation of gender in technology-related video advertisements. This thesis quantitatively and qualitatively examined 54 of the most recent commercials by the top nine Fortune 500 technology companies. A total of 407 characters were coded and quantitatively analyzed while the videos themselves were qualitatively assessed with particular attention given to the videos' themes and the interactions between the characters and the technology products and services. Results of the analyses showed that there were more male, Caucasian characters than any other character type based on gender and race/ethnicity. Females were mainly characterized according to traditional stereotypes, such as being linked to the home and expressing emotions. On the other hand, males were most often presented outdoors and conveyed confidence. Overall, the advertisements targeted upper class, Caucasian males while technology itself was associated with power, speed, and progress. These findings have important implications for understanding the persistence of gender inequality in the field of technology and in existing cultural beliefs surrounding gender and technology.
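The quantitative side of such a coding analysis can be sketched with a Pearson chi-square statistic against an equal-representation null hypothesis. The tallies below are hypothetical, not the study's actual counts; with one degree of freedom, the conventional 5% critical value is 3.84.

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical coding tallies (NOT the study's numbers): counts of
# characters coded male vs. female across a sample of 407 characters.
observed = [250, 157]
total = sum(observed)
expected = [total / 2, total / 2]   # null hypothesis: equal representation
stat = chi_square_stat(observed, expected)
# stat well above 3.84 would indicate a statistically significant imbalance
```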
-
Date Issued
-
2014
-
Identifier
-
CFE0005228, ucf:50580
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005228
-
-
Title
-
EXTRACTING QUANTITATIVE INFORMATION FROM NONNUMERIC MARKETING DATA: AN AUGMENTED LATENT SEMANTIC ANALYSIS APPROACH.
-
Creator
-
Arroniz, Inigo, Michaels, Ronald, University of Central Florida
-
Abstract / Description
-
Despite the widespread availability and importance of nonnumeric data, marketers do not have the tools to extract information from large amounts of nonnumeric data. This dissertation attempts to fill this void: I developed a scalable methodology that is capable of extracting information from extremely large volumes of nonnumeric data. The proposed methodology integrates concepts from information retrieval and content analysis to analyze textual information. This approach avoids a pervasive difficulty of traditional content analysis, namely the classification of terms into predetermined categories, by creating a linear composite of all terms in the document and, then, weighting the terms according to their inferred meaning. In the proposed approach, meaning is inferred by the collocation of the term across all the texts in the corpus. It is assumed that there is a lower dimensional space of concepts that underlies word usage. The semantics of each word are inferred by identifying its various contexts in a document and across documents (i.e., in the corpus). After the semantic similarity space is inferred from the corpus, the words in each document are weighted to obtain their representation on the lower dimensional semantic similarity space, effectively mapping the terms to the concept space and ultimately creating a score that measures the concept of interest. I propose an empirical application of the outlined methodology. For this empirical illustration, I revisit an important marketing problem: the effect of movie critics on the performance of movies. In the extant literature, researchers have used an overall numerical rating of the review to capture the content of movie reviews. I contend that valuable information present in the textual materials remains uncovered. I use the proposed methodology to extract this information from the nonnumeric text contained in a movie review. The proposed setting is particularly attractive for validating the methodology because it allows a simple test of the text-derived metrics by comparing them to the numeric ratings provided by the reviewers. I empirically show the application of this methodology and traditional computer-aided content analytic methods to study an important marketing topic: the effect of movie critics on movie performance. In the empirical application of the proposed methodology, I use two datasets that combined contain more than 9,000 movie reviews nested in more than 250 movies. I restudy this marketing problem in light of directly obtaining information from the reviews instead of following the usual practice of using an overall rating or a classification of the review as either positive or negative. I find that the addition of the direct content and structure of the review adds a significant amount of explanatory power as a determinant of movie performance, even in the presence of actual reviewer overall ratings (stars) and other controls. This effect is robust across distinct operationalizations of both the review content and the movie performance metrics. In fact, my findings suggest that as we move from sales to profitability to financial return measures, the role of the content of the review, and therefore the critic's role, becomes increasingly important.
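The idea that a term's meaning can be inferred from its collocations across a corpus can be sketched with a toy, hypothetical example: build co-occurrence context vectors and compare them with cosine similarity. This is a bare-bones stand-in for the latent semantic space; the dissertation's dimensionality reduction step is omitted, and the corpus below is invented.

```python
import math
from collections import Counter

def context_vectors(docs, window=2):
    """Build term vectors from co-occurrence within a sliding word window."""
    vecs = {}
    for doc in docs:
        words = doc.lower().split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)   # Counter returns 0 for missing keys
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "great movie with a great cast",
    "terrible movie with a terrible plot",
    "great film with a great cast",
]
vecs = context_vectors(docs)
# 'movie' and 'film' occur in similar contexts, so they score higher
# than the opposite-sentiment pair 'great' vs 'terrible'.
sim_movie_film = cosine(vecs["movie"], vecs["film"])
sim_great_terrible = cosine(vecs["great"], vecs["terrible"])
```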
-
Date Issued
-
2007
-
Identifier
-
CFE0001617, ucf:47164
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001617
-
-
Title
-
MODELING CRASH FREQUENCIES AT SIGNALIZED INTERSECTIONS IN CENTRAL FLORIDA.
-
Creator
-
Kowdla, Smitha, Abdel-Aty, Mohamed, University of Central Florida
-
Abstract / Description
-
A high percentage of highway crashes in the United States occur at intersections. These crashes result in property damage, lost productivity, injury, and even death. Identifying intersections associated with high crash rates is very important to minimize future crashes. The purpose of this study is to develop efficient means to evaluate intersections which may require safety improvements. The area covered by the analysis in this thesis includes Orange and Seminole Counties and the City of Orlando, which together represent Central Florida. Each county/city provided data consisting of signalized intersection drawings in either electronic or hard-copy form, the county's extensive crash database, and a list of intersections that underwent modifications during the study period. A total of 786 intersections were used in the analysis, and the crash database comprised 4271 crashes. From the signalized intersection drawings obtained from the county's traffic engineering department, a geometry database was created to classify all intersections by the number of through lanes, number of left-turning lanes, Annual Average Daily Traffic (AADT), and posted speed limits on the major road of the intersection. In this research, crashes and their types, e.g., rear-end, left-turn, and angle, as well as total crashes, were investigated. Numerous models were developed, first using Poisson regression and then the Negative Binomial approach, as the data showed overdispersion. The modeling process aimed to relate geometric and traffic factors to the frequency of crashes at intersections. Expected value analysis tables were also developed to determine if an intersection had an abnormally high number of crashes. These tables can assist traffic engineers in identifying serious safety problems at intersections. 
The general models illustrated that rear-end crashes were associated with a high natural logarithm of AADT on the major road and the number of lanes (major intersections, e.g., 6x4/6x6), whereas AADT on the major road did not affect left-turn crashes. Intersections with the configuration 4x2/6x2 (2 through lanes at the minor roadway), or T intersections as another category, experienced an increase in left-turn crashes. Angle crashes were most frequent at one-way intersections, especially in the case of 4x4 intersections. Individual models that included interaction terms with one variable at a time concluded that AADT on the major road positively influenced rear-end crashes more than angle and left-turn crashes. As the speed limit on the minor road increases, left-turn crashes are affected the most, followed by angle and then rear-end crashes. As the total number of left-turn lanes at the intersection increased, thereby increasing the size of the intersection, the number of rear-end crashes increased. An overall model containing the natural logarithm of AADT on the major road, the total number of left-turn lanes at the intersection, the number of through lanes on the minor road, and the configuration of the intersection as independent variables, along with interaction terms, further supported the individual models: the number of crashes (rear-end, left-turn, and angle) increased as the AADT on the major road increased and decreased as the total number of left-turn lanes at the intersection increased. Also, crashes increased as the number of through lanes on the minor road increased. 
The variables' interaction effects with dummies representing rear-end and left-turn crashes in the final model showed that as the AADT on the major road increased, the number of rear-end crashes increased compared to left-turn and angle crashes, and that as the total number of left-turn lanes at the intersection increased, the number of left-turn crashes decreased compared to rear-end and angle crashes. The number of rear-end crashes also increased at major four-leg intersections, e.g., 6x4, 6x6, etc. This thesis demonstrated the superiority of Negative Binomial regression in modeling the frequency of crashes at signalized intersections.
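The choice of Negative Binomial over Poisson regression hinges on overdispersion: a Poisson model assumes the variance of the counts equals the mean, and a variance-to-mean ratio well above 1 signals that assumption is violated. A minimal sketch of that diagnostic, using invented crash counts rather than the study's data:

```python
from statistics import mean, pvariance

def dispersion_ratio(counts):
    """Variance-to-mean ratio of count data. A ratio near 1 is consistent
    with a Poisson model; a ratio well above 1 indicates overdispersion,
    for which the Negative Binomial model is the usual remedy."""
    return pvariance(counts) / mean(counts)

# Hypothetical annual crash counts at ten intersections (not the study's data):
crashes = [0, 1, 1, 2, 2, 3, 5, 8, 12, 20]
ratio = dispersion_ratio(crashes)
# ratio > 1 here, so a Negative Binomial model would be preferred
```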
-
Date Issued
-
2004
-
Identifier
-
CFE0000267, ucf:46224
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000267
-
-
Title
-
LEVELS OF LINE GRAPH QUESTION INTERPRETATION WITH INTERMEDIATE ELEMENTARY STUDENTS OF VARYING SCIENTIFIC AND MATHEMATICAL KNOWLEDGE AND ABILITY: A THINK ALOUD STUDY.
-
Creator
-
Keller, Stacy, Biraimah, Karen, University of Central Florida
-
Abstract / Description
-
This study examined how intermediate elementary students' mathematics and science background knowledge affected their interpretation of line graphs and how their interpretations were affected by graph question levels. A purposive sample of 14 sixth-grade students engaged in think-aloud interviews (Ericsson & Simon, 1993) while completing an excerpted Test of Graphing in Science (TOGS) (McKenzie & Padilla, 1986). Hand gestures were video recorded. Student performance on the TOGS was assessed using an assessment rubric created from previously cited factors affecting students' graphing ability. Factors were categorized using Bertin's (1983) three graph question levels. The assessment rubric was validated by Padilla and a veteran mathematics and science teacher. Observational notes were also collected. Data were analyzed using Roth and Bowen's (2001) semiotic process of reading graphs. Key findings from this analysis included differences in the use of heuristics, self-generated questions, science knowledge, and self-motivation. Students with higher prior achievement used a greater number and variety of heuristics and more often chose appropriate heuristics. They also monitored their understanding of the question and the adequacy of their strategy and answer by asking themselves questions. Most used their science knowledge spontaneously to check their understanding of the question and the adequacy of their answers. Students with lower and moderate prior achievement favored one heuristic even when it was not useful for answering the question and rarely asked their own questions. In some cases, if students with lower prior achievement had thought about their answers in the context of their science knowledge, they would have been able to recognize their errors. One student with lower prior achievement motivated herself when she thought the questions were too difficult. In addition, students answered the TOGS in one of three ways: as mathematics word problems, as science data to be analyzed, or by guessing when confused. A second set of findings corroborated how science background knowledge affected graph interpretation: correct science knowledge supported students' reasoning, but it was not necessary to answer any question correctly; correct science knowledge could not compensate for incomplete mathematics knowledge; and incorrect science knowledge often distracted students when they tried to use it while answering a question. Finally, using Roth and Bowen's (2001) two-stage semiotic model of reading graphs, representative vignettes showed emerging patterns from the study. This study added to our understanding of the role of science content knowledge during line graph interpretation, highlighted the importance of heuristics and mathematics procedural knowledge, and documented the importance of perceptual attention, motivation, and students' self-generated questions. Recommendations were made for future research in line graph interpretation in mathematics and science education and for improving instruction in this area.
-
Date Issued
-
2008
-
Identifier
-
CFE0002356, ucf:47810
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002356
-
-
Title
-
Human Group Behavior Modeling for Virtual Worlds.
-
Creator
-
Shah, Syed Fahad Allam, Sukthankar, Gita, Georgiopoulos, Michael, Foroosh, Hassan, Anagnostopoulos, Georgios, University of Central Florida
-
Abstract / Description
-
Virtual worlds and massively-multiplayer online games are rich sources of information about large-scale teams and groups, offering the tantalizing possibility of harvesting data about group formation, social networks, and network evolution. They provide new outlets for human social interaction that differ from both face-to-face interactions and non-physically-embodied social networking tools such as Facebook and Twitter. We aim to study group dynamics in these virtual worlds by collecting and analyzing public conversational patterns of users grouped in close physical proximity. To do this, we created a set of tools for monitoring, partitioning, and analyzing unstructured conversations between changing groups of participants in Second Life, a massively multi-player online user-constructed environment that allows users to construct and inhabit their own 3D world. Although there are some cues in the dialog, determining social interactions from unstructured chat data alone is a difficult problem, since these environments lack many of the cues that facilitate natural language processing in other conversational settings and different types of social media. Public chat data often features players who speak simultaneously, use jargon and emoticons, and only erratically adhere to conversational norms. Humans are adept social animals capable of identifying friendship groups from a combination of linguistic cues and social network patterns. But what is more important, the content of what people say or their history of social interactions? Moreover, is it possible to identify whether people are part of a group with changing membership merely from general network properties, such as measures of centrality and latent communities? These are the questions that we aim to answer in this thesis. 
The contributions of this thesis include: 1) a link prediction algorithm for identifying friendship relationships from unstructured chat data, and 2) a method for identifying social groups based on the results of community detection and topic analysis. The outputs of these two algorithms (links and group membership) are useful for studying a variety of research questions about human behavior in virtual worlds. To demonstrate this, we have performed a longitudinal analysis of human groups in different regions of the Second Life virtual world. We believe that studies performed with our tools in virtual worlds will be a useful stepping stone toward creating a rich computational model of human group dynamics.
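A minimal sketch of the kind of link prediction the first contribution describes, here using a simple Jaccard common-neighbors baseline on an invented co-chat graph. The thesis's actual algorithm also draws on dialog content, which is omitted here; the names and edges below are hypothetical.

```python
def jaccard_scores(graph):
    """Score each non-adjacent node pair by Jaccard similarity of their
    neighbor sets, a standard baseline for predicting missing links."""
    nodes = sorted(graph)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in graph[u]:
                continue  # already linked; nothing to predict
            union = graph[u] | graph[v]
            inter = graph[u] & graph[v]
            scores[(u, v)] = len(inter) / len(union) if union else 0.0
    return scores

# Toy symmetric co-chat graph: who has exchanged public chat with whom.
graph = {
    "ann": {"bob", "cat", "dan"},
    "bob": {"ann", "cat"},
    "cat": {"ann", "bob", "dan"},
    "dan": {"ann", "cat", "eve"},
    "eve": {"dan"},
}
scores = jaccard_scores(graph)
best_pair = max(scores, key=scores.get)   # most likely missing friendship
```

Here "bob" and "dan" share most of their neighbors, so the baseline predicts that pair as the most likely unobserved friendship link.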
-
Date Issued
-
2011
-
Identifier
-
CFE0004164, ucf:49074
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004164
-
-
Title
-
A Posteriori and Interactive Approaches for Decision-Making with Multiple Stochastic Objectives.
-
Creator
-
Bakhsh, Ahmed, Geiger, Christopher, Mollaghasemi, Mansooreh, Xanthopoulos, Petros, Wiegand, Rudolf, University of Central Florida
-
Abstract / Description
-
Computer simulation is a popular method often used as a decision support tool in industry to estimate the performance of systems too complex for analytical solutions. It assists decision-makers in improving organizational performance and achieving performance objectives: simulated conditions can be randomly varied so that critical situations can be investigated without real-world risk. Due to the stochastic nature of many of the input process variables in simulation models, the outputs from simulation model experiments are random. Thus, experimental runs of computer simulations yield only estimates of the values of performance objectives, where these estimates are themselves random variables. Most real-world decisions involve the simultaneous optimization of multiple, and often conflicting, objectives. Researchers and practitioners use various approaches to solve these multiobjective problems. Many approaches that integrate simulation models with stochastic multiple-objective optimization algorithms have been proposed, many of which use Pareto-based approaches that generate a finite set of compromise, or tradeoff, solutions. Nevertheless, identification of the most preferred solution can be a daunting task for the decision-maker and is an order of magnitude harder in the presence of stochastic objectives. However, to the best of this researcher's knowledge, there have been no focused efforts or existing work attempting to reduce the number of tradeoff solutions while considering the stochastic nature of a set of objective functions. 
In this research, two approaches that consider multiple stochastic objectives when reducing the set of tradeoff solutions are designed and proposed. The first proposed approach is an a posteriori approach, which uses a given set of Pareto optima as input. The second is an interactive approach that articulates decision-maker preferences during the optimization process. A detailed description of both approaches is given, and computational studies are conducted to evaluate their efficacy. The computational results show the promise of the proposed approaches, in that each effectively reduces the set of compromise solutions to a reasonably manageable size for the decision-maker. This is a significant step beyond current applications of the decision-making process in the presence of multiple stochastic objectives and should serve as an effective approach to support decision-making under uncertainty.
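The finite set of compromise (Pareto) solutions mentioned above can be sketched as a non-dominated filter over objective estimates. The points below are invented, and the sketch deliberately ignores the stochastic aspect the dissertation addresses: it treats each simulated objective estimate as a fixed number.

```python
def pareto_front(points):
    """Return the non-dominated subset when minimizing all objectives.
    A point q dominates p if q is <= p in every objective and q != p
    (for distinct points this implies strictly better in at least one)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy tradeoff set: (mean cost, mean cycle time) estimated by simulation.
solutions = [(4, 9), (5, 7), (6, 6), (7, 7), (8, 4), (9, 5), (6, 8)]
front = pareto_front(solutions)
```

Even in this toy case the filter shrinks seven candidates to four; the dissertation's contribution is reducing such a set further, under uncertainty, to something a decision-maker can act on.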
-
Date Issued
-
2013
-
Identifier
-
CFE0004973, ucf:49574
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004973
-
-
Title
-
OPTIMAL DUAL FRAMES FOR ERASURES AND DISCRETE GABOR FRAMES.
-
Creator
-
Lopez, Jerry, Han, Deguang, University of Central Florida
-
Abstract / Description
-
Since their discovery in the early 1950's, frames have emerged as an important tool in areas such as signal processing, image processing, data compression and sampling theory, just to name a few. The purpose of this dissertation is to investigate dual frames and the ability to find dual frames which are optimal when coping with the problem of erasures in data transmission. In addition, we study a special class of frames which exhibit algebraic structure: discrete Gabor frames. Much work has been done in the study of discrete Gabor frames in $\mathbb{C}^n$, but very little is known about the $\ell^2(\mathbb{Z})$ case or the $\ell^2(\mathbb{Z}^d)$ case. We establish some basic Gabor frame theory for $\ell^2(\mathbb{Z})$ and then generalize to the $\ell^2(\mathbb{Z}^d)$ case.
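A concrete finite-dimensional illustration of dual-frame reconstruction (in R^2 rather than the sequence spaces studied in this dissertation, and not a Gabor frame): the tight "Mercedes-Benz" frame of three unit vectors 120 degrees apart has frame bound 3/2, so its canonical dual is simply each vector scaled by 2/3, and analysis followed by synthesis recovers any vector exactly.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Three unit vectors in R^2 at 120-degree spacing: a tight frame with
# frame bound 3/2, hence canonical dual g_k = (2/3) * f_k.
frame = [
    (0.0, 1.0),
    (-math.sqrt(3) / 2, -0.5),
    (math.sqrt(3) / 2, -0.5),
]
dual = [(2 / 3 * a, 2 / 3 * b) for a, b in frame]

def reconstruct(x, analysis, synthesis):
    """x = sum_k <x, f_k> g_k: analyze with (f_k), synthesize with (g_k)."""
    out = [0.0, 0.0]
    for f, g in zip(analysis, synthesis):
        c = dot(x, f)                 # frame coefficient
        out[0] += c * g[0]
        out[1] += c * g[1]
    return tuple(out)

x = (1.25, -0.7)
xr = reconstruct(x, frame, dual)      # recovers x exactly
```

The redundancy (three vectors spanning a two-dimensional space) is what makes frames robust to erasures: losing one coefficient still leaves a spanning set, which is the setting in which the dissertation seeks optimal duals.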
-
Date Issued
-
2009
-
Identifier
-
CFE0002614, ucf:48274
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002614
-
-
Title
-
DETERMINING THE PRESENCE OF AN IGNITABLE LIQUID RESIDUE IN FIRE DEBRIS SAMPLES UTILIZING TARGET FACTOR ANALYSIS.
-
Creator
-
McHugh, Kelly, Sigman, Michael, University of Central Florida
-
Abstract / Description
-
Current fire debris analysis procedure involves using the chromatographic patterns of total ion chromatograms, extracted ion chromatograms, and target compound analysis to identify an ignitable liquid according to the American Society for Testing and Materials (ASTM) E 1618 standard method. Classifying the ignitable liquid is accomplished by a visual comparison of chromatographic data obtained from any extracted ignitable liquid residue in the debris to the chromatograms of ignitable liquids in a database, i.e., by visual pattern recognition. Pattern recognition proves time consuming and introduces the potential for human error. One particularly difficult aspect of fire debris analysis is recognizing an ignitable liquid residue when the intensity of its chromatographic pattern is extremely low or masked by pyrolysis products. In this research, a unique approach to fire debris analysis was applied by utilizing the samples' total ion spectrum (TIS) to identify an ignitable liquid, if present. The TIS, created by summing the intensity of each ion across all elution times in a gas chromatography-mass spectrometry (GC-MS) dataset, retains sufficient information content for the identification of complex mixtures. Computer-assisted spectral comparison was then performed on the samples' TIS by target factor analysis (TFA). This approach allowed rapid automated searching against a library of ignitable liquid summed ion spectra. Receiver operating characteristic (ROC) curves measured how well TFA identified ignitable liquids in the database that were of the same ASTM classification as the ignitable liquid in fire debris samples, as reflected in the corresponding area under the ROC curve. This study incorporated statistical analysis to aid in the classification of an ignitable liquid, thereby alleviating the interpretive error inherent in visual pattern recognition. This method could allow an analyst to declare an ignitable liquid present when visual pattern recognition alone is not sufficient.
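The construction of a total ion spectrum and its comparison against a library can be sketched as follows. The m/z values, intensities, and library entry names are entirely invented, and a plain cosine similarity stands in for the full target factor analysis described above.

```python
import math

def total_ion_spectrum(gcms):
    """Sum the intensity of each m/z ion across all elution times,
    collapsing a GC-MS run into a single summed ion spectrum (TIS)."""
    tis = {}
    for scan in gcms:                  # one mass scan per elution time
        for mz, intensity in scan.items():
            tis[mz] = tis.get(mz, 0.0) + intensity
    return tis

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical, heavily simplified data: three scans of a debris extract
# and two single-spectrum library entries (m/z -> intensity).
sample_scans = [{57: 80, 71: 60}, {57: 40, 91: 30}, {71: 20, 91: 50}]
library = {
    "gasoline-like": {57: 60, 71: 40, 91: 45},
    "alcohol-like":  {31: 90, 45: 70},
}
tis = total_ion_spectrum(sample_scans)
best = max(library, key=lambda name: cosine(tis, library[name]))
```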
-
Date Issued
-
2010
-
Identifier
-
CFE0003042, ucf:48337
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003042
-
-
Title
-
A Framework for Measuring Return on Investment for Healthcare Simulation-Based Training.
-
Creator
-
Bukhari, Hatim, Rabelo, Luis, Elshennawy, Ahmad, Goldiez, Brian, Andreatta, Pamela, University of Central Florida
-
Abstract / Description
-
In the healthcare sector, providing high-quality service in a safe environment for both patients and staff is the ultimate objective. Training is an essential component of achieving this important objective. Most organizations acknowledge that employee simulation-based training programs are an important part of the human capital strategy, yet few have effectively succeeded in quantifying the real and precise ROI of this type of investment. Therefore, if training is perceived as a waste of resources and its ROI is not clearly recognized, it will be the first option to cut when budget cuts are needed. The various intangible benefits of healthcare simulation-based training are very difficult to quantify. In addition, there was no unified way to account for the different costs and benefits in order to provide a justifiable ROI. Quantifying the qualitative and intangible benefits of a medical training simulator required a framework that helps identify and convert qualitative and intangible benefits into monetary value so they can be considered in the ROI evaluation. This research is a response to the highlighted importance of developing a comprehensive framework capable of taking into consideration the wide range of benefits that simulation-based training can bring to the healthcare system, accounting for the characteristics of this specific field of investment. These characteristics include uncertainty, the qualitative nature of the major benefits, and the diversity and wide range of applications. This comprehensive framework is an integration of several methodologies and tools, consisting of three parts. The first part is the benefits and cost structure, which pays special attention to the qualitative and intangible benefits by considering the Value Measurement Methodology (VMM) and other previously existing models. 
The second part deals with the uncertainty associated with this type of investment: Monte Carlo simulation considers multiple scenarios of input sets instead of a single set of inputs. The third part performs an advanced value analysis of the investment. It goes beyond discounted cash flow (DCF) methodologies like net present value (NPV), which consider a single scenario for the cash flow, to real options analysis, which considers flexibility over the lifetime of the investment when evaluating its value. This framework has been validated through case studies.
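The Monte Carlo portion of such a framework can be illustrated with a seeded simulation over an uncertain benefit stream, compared against a single-scenario NPV. Every figure below (costs, benefit distribution, discount rate) is invented for illustration, and real options analysis is not modeled.

```python
import random

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simulate_roi(n_scenarios, seed=42):
    """Monte Carlo over uncertain annual benefits of a training program.
    All figures are hypothetical: a 100k up-front simulator cost and a
    normally distributed yearly benefit (e.g., fewer adverse events)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_scenarios):
        benefit = rng.gauss(35_000, 8_000)
        results.append(npv(0.08, [-100_000] + [benefit] * 5))
    return results

results = simulate_roi(10_000)
expected_npv = sum(results) / len(results)
# Single-scenario DCF baseline for comparison:
base_case = npv(0.08, [-100_000] + [35_000] * 5)
```

The simulation yields a distribution of NPVs rather than one number, so the decision-maker can see not just the expected value but the probability that the investment fails to break even.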
-
Date Issued
-
2017
-
Identifier
-
CFE0006859, ucf:51750
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006859