Current Search: data analysis
- Title
- Robust, Scalable, and Provable Approaches to High Dimensional Unsupervised Learning.
- Creator
-
Rahmani, Mostafa, Atia, George, Vosoughi, Azadeh, Mikhael, Wasfy, Nashed, M, Pensky, Marianna, University of Central Florida
- Abstract / Description
-
This doctoral thesis focuses on three popular unsupervised learning problems: subspace clustering, robust PCA, and column sampling. For the subspace clustering problem, a new transformative idea is presented. The proposed approach, termed Innovation Pursuit, is a new geometrical solution to the subspace clustering problem whereby subspaces are identified based on their relative novelties. A detailed mathematical analysis is provided establishing sufficient conditions for the proposed method to correctly cluster the data points. Numerical simulations with both real and synthetic data demonstrate that Innovation Pursuit notably outperforms the state-of-the-art subspace clustering algorithms. For the robust PCA problem, we focus on both the outlier detection and the matrix decomposition problems. For the outlier detection problem, we present a new algorithm, termed Coherence Pursuit, in addition to two scalable randomized frameworks for the implementation of outlier detection algorithms. Coherence Pursuit is the first non-iterative robust PCA method that is provably robust to both unstructured and structured outliers. It is remarkably simple and notably outperforms the existing methods in dealing with structured outliers. In the proposed randomized designs, we leverage the low-dimensional structure of the low-rank component to apply the robust PCA algorithm to a random sketch of the data as opposed to the full-scale data. Importantly, it is analytically shown that the presented randomized designs can make the computational or sample complexity of the low-rank matrix recovery algorithm independent of the size of the data. Finally, we focus on the column sampling problem. A new sampling tool, dubbed Spatial Random Sampling, is presented which performs the random sampling in the spatial domain. The most compelling feature of Spatial Random Sampling is that it is the first unsupervised column sampling method which preserves the spatial distribution of the data. (An illustrative sketch of coherence-based outlier scoring follows this record.)
- Date Issued
- 2018
- Identifier
- CFE0007083, ucf:52010
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007083
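A minimal, illustrative sketch of coherence-based outlier scoring in the spirit of Coherence Pursuit, not the author's implementation: inliers lying in a common low-dimensional subspace have high mutual coherence with the rest of the data, while unstructured outliers do not. The synthetic data and the threshold of "20 lowest scores" are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 180 inliers in a random 5-dimensional subspace of R^50,
# plus 20 unstructured outliers drawn from an isotropic Gaussian.
ambient_dim, subspace_dim = 50, 5
basis = np.linalg.qr(rng.standard_normal((ambient_dim, subspace_dim)))[0]
inliers = basis @ rng.standard_normal((subspace_dim, 180))
outliers = rng.standard_normal((ambient_dim, 20))
X = np.hstack([inliers, outliers])

# Normalize each column (data point) to unit Euclidean norm.
Xn = X / np.linalg.norm(X, axis=0, keepdims=True)

# Coherence score of each column: sum of squared inner products with all other columns.
G = Xn.T @ Xn
np.fill_diagonal(G, 0.0)
coherence = np.sum(G ** 2, axis=0)

# Columns with the lowest coherence scores are flagged as likely outliers.
flagged = np.argsort(coherence)[:20]
print("indices flagged as outliers:", np.sort(flagged))
```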
- Title
- Accelerated Life Model with Various Types of Censored Data.
- Creator
-
Pridemore, Kathryn, Pensky, Marianna, Mikusinski, Piotr, Swanson, Jason, Nickerson, David, University of Central Florida
- Abstract / Description
-
The Accelerated Life Model is one of the most commonly used tools in the analysis of survival data, which are frequently encountered in medical research and reliability studies. In these types of studies we often deal with complicated data sets that cannot be fully observed in practice due to censoring. Such difficulties are particularly apparent in the fact that there is little work in the statistical literature on the Accelerated Life Model for complicated types of censored data sets, such as doubly censored data, interval censored data, and partly interval censored data. In this work, we use the Weighted Empirical Likelihood approach (Ren, 2001) to construct tests, confidence intervals, and goodness-of-fit tests for the Accelerated Life Model in a unified way for various types of censored data. We also provide algorithms for implementation and present relevant simulation results. I began working on this problem with Dr. Jian-Jian Ren. Upon Dr. Ren's departure from the University of Central Florida, I completed this dissertation under the supervision of Dr. Marianna Pensky.
- Date Issued
- 2013
- Identifier
- CFE0004913, ucf:49613
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004913
- Title
- An Integrated Framework for Automated Data Collection and Processing for Discrete Event Simulation Models.
- Creator
-
Rodriguez, Carlos, Kincaid, John, Karwowski, Waldemar, O'Neal, Thomas, Kaup, David, Mouloua, Mustapha, University of Central Florida
- Abstract / Description
-
Discrete Event Simulation (DES) is a powerful modeling and analysis tool used in different disciplines. DES models require data in order to determine the different parameters that drive the simulations. The literature on DES input data management indicates that the preparation of the necessary input data is often a highly manual process, which causes inefficiencies, significant time consumption, and a negative user experience. The focus of this research investigation is addressing the manual data collection and processing (MDCAP) problem prevalent in DES projects. This research investigation presents an integrated framework to solve the MDCAP problem by classifying the data needed for DES projects into three generic classes. Such classification permits automating and streamlining the preparation of the data, allowing DES modelers to collect, update, visualize, fit, validate, tally, and test data in real time by performing intuitive actions. In addition to the proposed theoretical framework, this project introduces an innovative user interface that was programmed based on the ideas of the proposed framework. The interface is called DESI, which stands for Discrete Event Simulation Inputs. The proposed integrated framework to automate DES input data preparation was evaluated against benchmark measures presented in the literature in order to show its positive impact on DES input data management. This research investigation demonstrates that the proposed framework, instantiated by the DESI interface, addresses current gaps in the field, reduces the time devoted to input data management within DES projects, and advances the state-of-the-art in DES input data management automation. (A small distribution-fitting example in this spirit follows this record.)
- Date Issued
- 2015
- Identifier
- CFE0005878, ucf:50861
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005878
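A small, hypothetical example of the "fit" step that an input-data tool such as the one described above might automate: fitting a candidate distribution to observed inter-arrival times and checking the fit with SciPy. The synthetic data and the choice of an exponential model are illustrative assumptions, not part of the DESI framework itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
interarrival_times = rng.exponential(scale=4.0, size=500)  # minutes (synthetic sample)

# Fit an exponential distribution (location fixed at zero) to the sample.
loc, scale = stats.expon.fit(interarrival_times, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
ks_stat, p_value = stats.kstest(interarrival_times, "expon", args=(loc, scale))
print(f"fitted mean inter-arrival time: {scale:.2f} min")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```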
- Title
- CENTRAL FLORIDA HIGH SCHOOL PRINCIPALS' PERCEPTIONS OF THE FLORIDA SCHOOL INDICATORS REPORT.
- Creator
-
Gaught, William, Hahs-Vaughn, Debbie, University of Central Florida
- Abstract / Description
-
The purpose of this study was to identify the perceptions that central Florida public high school principals had regarding the Florida School Indicators Report (FSIR) and its usefulness. The FSIR, published by the Florida Department of Education, was designed to be a comprehensive, single-source document for parents, lawmakers, and school administrators to compare key performance indicators to similar schools or districts statewide. It provided information on 74 different indicators of school or district performance. A total of 70 public high school principals from 13 central Florida school districts responded to a postal survey and provided their perceptions regarding the importance of indicators in the FSIR, how they used the FSIR at their schools, and what barriers they felt affected the ability of their administrative staffs to collect and analyze data on the FSIR indicators. Eighteen of the 70 principals participated in follow-up telephone interviews. Quantitative and qualitative analysis of the postal surveys and interviews revealed that the principals perceived FSIR indicators related to Florida's mandated Florida Comprehensive Assessment Test (FCAT) as the most important indicators in the FSIR. The indicators FCAT Results and FCAT Writes were ranked first and second respectively in priority by the participating principals. This finding demonstrated the importance that principals placed on the state's high-stakes test. Other categories of FSIR indicators were also ranked in the findings reported in this study, along with how the principals used the FSIR at their schools. The data collected from the postal survey revealed a statistically significant relationship between the priority principals assigned to the FSIR indicators and their ability to collect and analyze data related to them. In addition, the survey data allowed development of multiple regression models that could be used to predict the priority principals assigned to several FSIR categories of indicators based on the ability to collect and analyze data. The study findings indicated that principals perceived lack of time for data analysis as the biggest barrier they faced when evaluating the FSIR indicators. After the lack of time, principals rated lack of administrator training in data analysis as the second biggest obstacle to using the FSIR. The findings indicated that principals felt the availability of data and technology were not significant barriers to their staff's ability to conduct data analysis on the FSIR. The conclusions drawn from the study were that central Florida high school principals perceived the results on the state's mandated Florida Comprehensive Assessment Test (FCAT) to be the most important indicators in the FSIR. In addition, the research identified that the lack of time was the single greatest barrier principals encountered when it came to collecting and analyzing data on the FSIR. A lack of training programs in data collection and analysis for administrators was also noted in the findings.
- Date Issued
- 2007
- Identifier
- CFE0001688, ucf:47204
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001688
- Title
- DEVELOPMENT OF A SPANISH VERSION OF THE MAIN CONCEPT ANALYSIS FOR ANALYZING ORAL DISORDERED DISCOURSE.
- Creator
-
Simonet, Karla, Kong, Anthony Pak-Hin, University of Central Florida
- Abstract / Description
-
Aphasia is an acquired language impairment caused by damage in the regions of the brain that support language. The Main Concept Analysis (MCA) is a published formal assessment battery that allows the quantification of the presence, accuracy, completeness, and efficiency of content in spoken discourse produced by persons with aphasia (PWA). It utilizes a sequential picture description task (with four sets of pictures) for language sample elicitation. The MCA results can also be used clinically for targeting appropriate interventions for aphasic output. The purpose of this research is to develop a Spanish adaptation of the MCA by establishing normative data based on native unimpaired speakers of Spanish. In the pilot study, thirty-eight unimpaired Spanish-speaking participants were recruited by previous student researchers. Each participant was asked to complete a demographic questionnaire, and a short form of the Cognitive Linguistic Quick Test was administered to rule out any unidentified language problems. The MCA was then administered to the participants, and their oral descriptions were audio recorded for later orthographic transcription. A total of 81 unimpaired participants of different genders, ages (young, middle-aged, and older groups), levels of education (high versus low), and dialect origins (e.g., Spain, Puerto Rico, Colombia) were recruited in the main study to establish a more balanced set of data. One person with aphasia (PWA) was recruited for this study. Based on the collected normative samples, the essential information was identified for each participant. Dialect-specific scoring criteria, including target main concepts and lexicons for the Spanish-MCA, were developed. The Spanish-MCA was conducted to test the validity of the assessment battery. In the current study, a preliminary set of data using the MCA scoring criteria has been established. Similar to the findings in Kong and Yeh (2015), the results of the Spanish-MCA showed that age and education did impact discourse performance. Results from one-way ANOVA revealed statistical differences between age groups and education levels of the unimpaired participants recruited. The groups of participants with a higher education conveyed more AC concepts compared to the other dialect groups. To compare data for PWA, it is suggested that a larger sample size of PWA be recruited to validate the Spanish-MCA.
- Date Issued
- 2019
- Identifier
- CFH2000553, ucf:45622
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000553
- Title
- Detecting Anomalies from Big Data System Logs.
- Creator
-
Lu, Siyang, Wang, Liqiang, Zhang, Shaojie, Zhang, Wei, Wu, Dazhong, University of Central Florida
- Abstract / Description
-
Nowadays, big data systems (e.g., Hadoop and Spark) are being widely adopted by many domains, such as manufacturing, healthcare, education, and media, for offering effective data solutions. A common problem in big data systems is the anomaly, i.e., a status that deviates from normal execution, which decreases the performance of computation or kills running programs. It is becoming a necessity to detect anomalies and analyze their causes. An effective and economical approach is to analyze system logs. Big data systems produce numerous unstructured logs that contain buried valuable information. However, manually detecting anomalies from system logs is a tedious and daunting task. This dissertation proposes four approaches that can accurately and automatically analyze anomalies from big data system logs without extra monitoring overhead. Moreover, to detect abnormal tasks in Spark logs and analyze root causes, we design a utility to conduct fault injection and collect logs from multiple compute nodes. (1) Our first method is a statistical approach that can locate abnormal tasks and calculate the weights of factors for analyzing the root causes. In the experiment, four potential root causes are considered, i.e., CPU, memory, network, and disk I/O. The experimental results show that the proposed approach is accurate in detecting abnormal tasks as well as finding the root causes. (2) To give a more reasonable probability result and avoid ad hoc calculation of factor weights, we propose a neural network approach to analyze root causes of abnormal tasks. We leverage a General Regression Neural Network (GRNN) to identify root causes for abnormal tasks; the likelihood of the reported root causes is presented to users according to the factor weights produced by the GRNN. (3) To further improve anomaly detection by avoiding feature extraction, we propose a novel approach leveraging Convolutional Neural Networks (CNN). Our proposed model can automatically learn event relationships in system logs and detect anomalies with high accuracy. Our deep neural network consists of logkey2vec embeddings, three 1D convolutional layers, a dropout layer, and max pooling. According to our experiments, our CNN-based approach has better accuracy than approaches using Long Short-Term Memory (LSTM) and Multilayer Perceptron (MLP) for detecting anomalies in Hadoop Distributed File System (HDFS) logs. (4) To analyze system logs more accurately, we extend our CNN-based approach with two attention schemes to detect anomalies in system logs. The proposed two attention schemes focus on different features from the CNN's output. We evaluate our approaches with several benchmarks, and the attention-based CNN model shows the best performance among all state-of-the-art methods. (A rough sketch of the CNN architecture described here follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007673, ucf:52499
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007673
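A rough PyTorch sketch of the kind of architecture the abstract describes (log-key embeddings, three 1D convolutions, dropout, max pooling, binary output). Layer sizes, vocabulary size, and sequence length are illustrative assumptions, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

class LogAnomalyCNN(nn.Module):
    def __init__(self, vocab_size=300, embed_dim=64, num_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # "logkey2vec"-style embedding (assumed sizes)
        self.convs = nn.Sequential(
            nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(num_filters, num_filters, kernel_size=4, padding=2), nn.ReLU(),
            nn.Conv1d(num_filters, num_filters, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)
        self.pool = nn.AdaptiveMaxPool1d(1)                    # global max pooling over the sequence
        self.fc = nn.Linear(num_filters, 2)                    # normal vs. anomalous

    def forward(self, log_key_ids):
        x = self.embed(log_key_ids)          # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                # (batch, embed_dim, seq_len) for Conv1d
        x = self.convs(x)
        x = self.dropout(x)
        x = self.pool(x).squeeze(-1)         # (batch, num_filters)
        return self.fc(x)

# Quick shape check on a dummy batch of 8 log-key sequences of length 50.
model = LogAnomalyCNN()
dummy = torch.randint(0, 300, (8, 50))
print(model(dummy).shape)  # torch.Size([8, 2])
```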
- Title
- Real Estate Investment Trust Performance, Efficiency and Internationalization.
- Creator
-
Harris, Joshua, Anderson, Randy, Schnitzlein, Charles, Turnbull, Geoffrey, Rottke, Nico, University of Central Florida
- Abstract / Description
-
Real Estate Investment Trusts (REITs) are firms that own and manage income-producing commercial real estate for the benefit of their shareholders. The three studies in this dissertation explore topics relating to best practices of REIT management and portfolio composition. Managers and investors can use the findings herein to aid in analyzing a REIT's performance and determining optimal investment policies. Utilizing REIT data from SNL Real Estate and CRSP, the first two studies examine the role of international diversification upon performance, technical efficiency, and scale efficiency. The third study utilizes REIT data to examine technical and scale efficiency over a 21-year window and investigates characteristics of the REITs that affect the levels of efficiency. CHAPTER 1: PROFITABILITY OF REAL ESTATE INVESTMENT TRUST INTERNATIONALIZATION. Real Estate Investment Trusts (REITs) in the United States have grown extremely fast in terms of assets and market capitalization since the early 1990s. As with many industries, U.S. REITs began acquiring foreign properties as their size grew and they needed to seek new investment opportunities. This paper investigates the role of holding foreign assets upon the total return of U.S.-based REITs from 1995 through 2010. We find that holding foreign properties is associated with negative relative performance when risk, size, and other common market factors are controlled for. Interestingly, the source of the negative performance is not related to the two largest areas for foreign investment, Europe and Canada. Instead, the negative performance is detected when a REIT begins acquiring properties in other global regions such as Latin America and Asia/Pacific. This paper has broad ramifications for REIT investors and managers alike. CHAPTER 2: EFFECT OF INTERNATIONAL DIVERSIFICATION BY U.S. REAL ESTATE INVESTMENT TRUSTS ON COST EFFICIENCY AND SCALE. As U.S.-based Real Estate Investment Trusts (REITs) have increased their degree and type of holdings overseas, there has yet to be a study investigating the effect of such activity on REITs' measures of cost efficiency and scale. Using data from 2010, Data Envelopment Analysis techniques are used to estimate measures of technical and scale efficiency, which are then regressed against measures of international diversification and other controls to measure the impact of this global expansion. It is determined that REITs with foreign holdings are significantly larger than domestic REITs, and correspondingly 96% of foreign-investing REITs are operating at decreasing returns to scale. Further, almost every measure of foreign diversification negatively and significantly impacts scale efficiency. However, simply being a REIT with foreign holdings is positively and significantly associated with higher levels of technical efficiency. Thus REITs that expand globally may have some advantages in operational efficiency but lose considerably in terms of scale efficiency by increasing their size as they move cross-border. CHAPTER 3: THE EVOLUTION OF TECHNICAL EFFICIENCY AND ECONOMIES OF SCALE OF REAL ESTATE INVESTMENT TRUSTS. Data Envelopment Analysis (DEA) is used to measure technical and scale efficiency of 21 years of Real Estate Investment Trust (REIT) data. This is the longest, most complete dataset ever analyzed in the REIT efficiency literature and as such makes a significant contribution, as prior efficiency studies' data windows end in the early 2000s at the latest. Overall, REITs appear to continue to operate at decreasing returns to scale despite rapid growth in total assets. Further, there is some evidence of improving technical efficiency over time; however, the finding is not strong. In summation, it appears that REITs have not improved on a relative basis despite the rapid growth, a finding that suggests a high degree of firm competition in the REIT industry. Finally, firm characteristics such as debt utilization, management and advisory structure, and property type specialization are tested for their impact upon technical and scale efficiency.
- Date Issued
- 2012
- Identifier
- CFE0004383, ucf:49399
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004383
- Title
- Research on High-performance and Scalable Data Access in Parallel Big Data Computing.
- Creator
-
Yin, Jiangling, Wang, Jun, Jin, Yier, Lin, Mingjie, Qi, GuoJun, Wang, Chung-Ching, University of Central Florida
- Abstract / Description
-
To facilitate big data processing, many dedicated data-intensive storage systems such as the Google File System (GFS), the Hadoop Distributed File System (HDFS), and the Quantcast File System (QFS) have been developed. Currently, the Hadoop Distributed File System (HDFS) [20] is the state-of-the-art and most popular open-source distributed file system for big data processing. It is widely deployed as the bedrock for many big data processing systems/frameworks, such as the script-based Pig system, MPI-based parallel programs, graph processing systems, and the Scala/Java-based Spark framework. These systems/applications employ parallel processes/executors to speed up data processing within scale-out clusters. Job or task schedulers in parallel big data applications such as mpiBLAST and ParaView can maximize the usage of computing resources such as memory and CPU by tracking resource consumption/availability for task assignment. However, since these schedulers do not take the distributed I/O resources and global data distribution into consideration, the data requests from parallel processes/executors in big data processing will unfortunately be served in an imbalanced fashion on the distributed storage servers. These imbalanced access patterns among storage nodes arise because (a) unlike conventional parallel file systems, which use striping policies to evenly distribute data among storage nodes, data-intensive file systems such as HDFS store each data unit, referred to as a chunk or block file, in several copies based on a relatively random policy, which can result in an uneven data distribution among storage nodes; and (b) based on the data retrieval policy in HDFS, the more data a storage node contains, the higher the probability that the storage node is selected to serve the data. Therefore, on the nodes serving multiple chunk files, the data requests from different processes/executors will compete for shared resources such as the hard disk head and network bandwidth. Because of this, the makespan of the entire program could be significantly prolonged and the overall I/O performance will degrade. The first part of my dissertation seeks to address aspects of these problems by creating an I/O middleware system and designing matching-based algorithms to optimize data access in parallel big data processing. To address the problem of remote data movement, we develop an I/O middleware system, called SLAM, which allows MPI-based analysis and visualization programs to benefit from locality reads, i.e., each MPI process can access its required data from a local or nearby storage node. This can greatly improve execution performance by reducing the amount of data movement over the network. Furthermore, to address the problem of imbalanced data access, we propose a method called Opass, which models the data read requests issued by parallel applications to cluster nodes as a graph data structure whose edge weights encode the demands on load capacity. We then employ matching-based algorithms to map processes to data so that data access is balanced. The final part of my dissertation focuses on optimizing sub-dataset analyses in parallel big data processing. Our proposed methods can benefit different analysis applications with various computational requirements, and the experiments on different cluster testbeds show their applicability and scalability. (A toy matching-based assignment sketch follows this record.)
- Date Issued
- 2015
- Identifier
- CFE0006021, ucf:51008
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006021
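An illustrative sketch (not the dissertation's algorithm) of matching-based balanced data access: each parallel process must read one replica of its chunk, and reads are spread across storage nodes by expanding every node into a small number of "slots" and solving an assignment problem. The replica placement and slot counts below are made-up example data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

num_nodes = 4
slots_per_node = 2                      # how many concurrent reads a node should serve
# For each process, the set of storage nodes that hold a replica of its chunk.
replica_locations = [
    {0, 1}, {0, 1}, {0, 2}, {1, 3}, {2, 3}, {0, 3}, {1, 2}, {2, 3},
]
num_procs = len(replica_locations)

# Cost matrix over (process, node slot): reading a local replica costs 1,
# anything else is effectively forbidden by a large cost.
FORBIDDEN = 1e6
cost = np.full((num_procs, num_nodes * slots_per_node), FORBIDDEN)
for p, nodes in enumerate(replica_locations):
    for n in nodes:
        for s in range(slots_per_node):
            cost[p, n * slots_per_node + s] = 1.0

rows, cols = linear_sum_assignment(cost)
assignment = {p: int(c) // slots_per_node for p, c in zip(rows, cols)}
print("process -> storage node:", assignment)

loads = np.bincount(list(assignment.values()), minlength=num_nodes)
print("reads served per node:", loads)
```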
- Title
- Learning Kernel-based Approximate Isometries.
- Creator
-
Sedghi, Mahlagha, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Atia, George, Liu, Fei, University of Central Florida
- Abstract / Description
-
The increasing availability of public datasets offers an unprecedented opportunity to conduct data-driven studies. Metric Multi-Dimensional Scaling aims to find a low-dimensional embedding of the data, preserving the pairwise dissimilarities amongst the data points in the original space. Along with enabling visualization, this dimensionality reduction plays a pivotal role in analyzing and disclosing the hidden structures in the data. This work introduces a Sparse Kernel-based Least Squares Multi-Dimensional Scaling approach for exploratory data analysis and, when desirable, data visualization. We assume our embedding map belongs to a Reproducing Kernel Hilbert Space of vector-valued functions, which allows for embeddings of previously unseen data. Also, given appropriate positive-definite kernel functions, it extends the applicability of our method to non-numerical data. Furthermore, the framework employs Multiple Kernel Learning for implicitly identifying an effective feature map and, hence, kernel function. Finally, via the use of sparsity-promoting regularizers, the technique is capable of embedding data on a, typically, lower-dimensional manifold by naturally inferring the embedding dimension from the data itself. In the process, key training samples are identified whose participation in the embedding map's kernel expansion is most influential. As we will show, such influence may be given interesting interpretations in the context of the data at hand. The resulting multi-kernel learning, non-convex framework can be effectively trained via a block coordinate descent approach, which alternates between an accelerated proximal average method-based iterative majorization for learning the kernel expansion coefficients and a simple quadratic program, which deduces the multiple-kernel learning coefficients. Experimental results showcase potential uses of the proposed framework on artificial data as well as real-world datasets that underline the merits of our embedding framework. Our method discovers genuine hidden structure in the data that, in the case of network data, matches the results of the well-known Multi-level Modularity Optimization community structure detection algorithm. (A toy kernel-expansion MDS sketch follows this record.)
- Date Issued
- 2017
- Identifier
- CFE0007132, ucf:52315
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007132
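A toy sketch of kernel-based least-squares MDS in the spirit of the abstract, not the dissertation's algorithm (which uses multiple kernels, sparsity regularization, and a block coordinate descent solver): the embedding of a point is a kernel expansion over training samples, and the expansion coefficients are fit by minimizing the classical MDS stress. The data, kernel bandwidth, and optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 5))                  # toy high-dimensional data
D = squareform(pdist(X))                          # target pairwise dissimilarities

def rbf_kernel(A, B, gamma=0.2):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

K = rbf_kernel(X, X)
embed_dim = 2

def stress(coeffs_flat):
    C = coeffs_flat.reshape(len(X), embed_dim)    # kernel expansion coefficients
    Y = K @ C                                     # embedded points
    return np.sum((squareform(pdist(Y)) - D) ** 2)

result = minimize(stress, 0.01 * rng.standard_normal(len(X) * embed_dim), method="L-BFGS-B")
coeffs = result.x.reshape(len(X), embed_dim)
print("final stress:", round(result.fun, 3))

# Out-of-sample embedding of new points reuses the same kernel expansion.
X_new = rng.standard_normal((3, 5))
Y_new = rbf_kernel(X_new, X) @ coeffs
print("embedded new points shape:", Y_new.shape)
```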
- Title
- Data-Driven Modeling and Optimization of Building Energy Consumption.
- Creator
-
Grover, Divas, Pourmohammadi Fallah, Yaser, Vosoughi, Azadeh, Zhou, Qun, University of Central Florida
- Abstract / Description
-
Sustainability and reducing energy consumption are targets for building operations. The installation of smart sensors and Building Automation Systems (BAS) makes it possible to study facility operations under different circumstances. These technologies generate large amounts of data, which can be scraped and used for analysis. In this thesis, we focus on the process of data-driven modeling and decision making, from scraping the data to simulating the building and optimizing its operation. The City of Orlando has similar goals of sustainability and reduction of energy consumption, so it provided us access to its BAS to obtain data and study the operation of its facilities. The data scraped from the City's BAS servers can be used to develop statistical/machine learning methods for decision making. We selected a mid-size pilot building to apply these techniques. The process begins with the collection of data from the BAS. An Application Programming Interface (API) is developed to log in to the servers, scrape data for all data points, and store it on the local machine. The data are then cleaned for analysis and modeling. The dataset contains various data points, ranging from indoor and outdoor temperature to the speed of fans inside the Air Handling Unit (AHU), which are operated by Variable Frequency Drives (VFDs). This whole dataset is a time series and is handled accordingly. The cleaned dataset is analyzed to find different patterns and investigate relations between different data points. The analysis helps us in choosing parameters for the models that are developed in the next step. Different statistical models are developed to simulate building and equipment behavior. Finally, the models, along with the data, are used to optimize building operation subject to equipment constraints, leading to a reduction in energy consumption while maintaining temperature and pressure inside the building. (A heavily simplified, hypothetical sketch of the scraping step follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007810, ucf:52335
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007810
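A heavily simplified, hypothetical sketch of the scraping step described above: log in to a BAS web service, request a trend log for one data point, and store it as a time series. The URL, credentials, endpoint names, field names, and JSON shape are all invented placeholders; a real BAS API will differ.

```python
import requests
import pandas as pd

BASE_URL = "https://bas.example.org/api"          # placeholder, not a real endpoint

session = requests.Session()
# Hypothetical login call; real systems will have their own authentication scheme.
session.post(f"{BASE_URL}/login", json={"user": "analyst", "password": "secret"})

# Hypothetical trend-log request for one data point over one week.
resp = session.get(
    f"{BASE_URL}/trends",
    params={"point": "AHU1_SupplyFanSpeed", "start": "2019-01-01", "end": "2019-01-07"},
)
resp.raise_for_status()

records = resp.json()["samples"]                  # assumed shape: list of {"ts": ..., "value": ...}
df = pd.DataFrame(records)
df["ts"] = pd.to_datetime(df["ts"])
df = df.set_index("ts").sort_index()
df.to_csv("ahu1_supply_fan_speed.csv")            # store locally for later cleaning and modeling
print(df.head())
```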
- Title
- LEVELS OF LINE GRAPH QUESTION INTERPRETATION WITH INTERMEDIATE ELEMENTARY STUDENTS OF VARYING SCIENTIFIC AND MATHEMATICAL KNOWLEDGE AND ABILITY: A THINK ALOUD STUDY.
- Creator
-
Keller, Stacy, Biraimah, Karen, University of Central Florida
- Abstract / Description
-
This study examined how intermediate elementary students' mathematics and science background knowledge affected their interpretation of line graphs and how their interpretations were affected by graph question levels. A purposive sample of 14 sixth-grade students engaged in think aloud interviews (Ericsson & Simon, 1993) while completing an excerpted Test of Graphing in Science (TOGS) (McKenzie & Padilla, 1986). Hand gestures were video recorded. Student performance on the TOGS was assessed using an assessment rubric created from previously cited factors affecting students' graphing ability. Factors were categorized using Bertin's (1983) three graph question levels. The assessment rubric was validated by Padilla and a veteran mathematics and science teacher. Observational notes were also collected. Data were analyzed using Roth and Bowen's (2001) semiotic process of reading graphs. Key findings from this analysis included differences in the use of heuristics, self-generated questions, science knowledge, and self-motivation. Students with higher prior achievement used a greater number and variety of heuristics and more often chose appropriate heuristics. They also monitored their understanding of the question and the adequacy of their strategy and answer by asking themselves questions. Most used their science knowledge spontaneously to check their understanding of the question and the adequacy of their answers. Students with lower and moderate prior achievement favored one heuristic even when it was not useful for answering the question and rarely asked their own questions. In some cases, if students with lower prior achievement had thought about their answers in the context of their science knowledge, they would have been able to recognize their errors. One student with lower prior achievement motivated herself when she thought the questions were too difficult. In addition, students answered the TOGS in one of three ways: as if the items were mathematics word problems, as science data to be analyzed, or by guessing because they were confused. A second set of findings corroborated how science background knowledge affected graph interpretation: correct science knowledge supported students' reasoning, but it was not necessary to answer any question correctly; correct science knowledge could not compensate for incomplete mathematics knowledge; and incorrect science knowledge often distracted students when they tried to use it while answering a question. Finally, using Roth and Bowen's (2001) two-stage semiotic model of reading graphs, representative vignettes showed emerging patterns from the study. This study added to our understanding of the role of science content knowledge during line graph interpretation, highlighted the importance of heuristics and mathematics procedural knowledge, and documented the importance of perceptual attention, motivation, and students' self-generated questions. Recommendations were made for future research in line graph interpretation in mathematics and science education and for improving instruction in this area.
- Date Issued
- 2008
- Identifier
- CFE0002356, ucf:47810
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002356
- Title
- Sustainability Analysis of Intelligent Transportation Systems.
- Creator
-
Ercan, Tolga, Tatari, Mehmet, Al-Deek, Haitham, Oloufa, Amr, University of Central Florida
- Abstract / Description
-
Commuters in urban areas suffer from traffic congestion on a daily basis. The increasing number of vehicles and vehicle miles traveled (VMT) are exacerbating this congested roadway problem for society. Although the literature contains numerous studies that strive to propose solutions to this congestion problem, the problem is still prevalent today. The traffic congestion problem affects society's quality of life socially, economically, and environmentally. In order to alleviate the unsustainable impacts of the congested roadway problem, Intelligent Transportation Systems (ITS) have been utilized to improve sustainable transportation systems in the world. The purpose of this thesis is to analyze the sustainable impacts and performance of the utilization of ITS in the United States. This thesis advances the body of knowledge on the sustainability impacts of ITS-related congestion relief through a triple bottom line (TBL) evaluation in the United States. TBL impacts are analyzed from a holistic perspective, rather than considering only the direct economic benefits. A critical approach in this research was to include both the direct and the indirect environmental and socio-economic impacts associated with the chain of supply paths of traffic congestion relief. To accomplish this aim, the net benefits of ITS implementations are analyzed in 101 cities in the United States. In addition to the state-level results, seven metropolitan cities in Florida are investigated in detail among these 101 cities. For instance, the results of this study indicated that Florida saved 1.38E+05 tons of greenhouse gas emissions (tons of carbon dioxide equivalent), $420 million of annual delay reduction costs, and $17.2 million of net fuel-based costs. Furthermore, to quantify the relative impact and sustainability performance of different ITS technologies, several ITS solutions are analyzed in terms of total costs (initial and operation and maintenance costs) and benefits (value of time, emissions, and safety). To account for the uncertainty in benefit and cost analyses, a fuzzy data envelopment analysis (DEA) methodology is utilized instead of the traditional DEA approach for sustainability performance analysis. The results using the fuzzy-DEA approach indicate that some of the ITS investments are not efficient compared to other investments, whereas all of them are highly effective investments in terms of cost/benefit ratios. The TBL results of this study provide a more comprehensive picture of the socio-economic benefits, which include the negative and indirect indicators, and the environmental benefits of ITS-related congestion relief. In addition, the sustainability performance comparisons and TBL analysis of ITS investments contain encouraging results that support decision makers in pursuing ITS projects in the future.
- Date Issued
- 2013
- Identifier
- CFE0004994, ucf:49549
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004994
- Title
- Ok, Ladies, Now Let's Get Information: Recognizing Moments of Rhetorical Identification in Beyoncé's Digital Activism.
- Creator
-
Arban, Garrett, Jones, Natasha, Vie, Stephanie, Wheeler, Stephanie, University of Central Florida
- Abstract / Description
-
This research seeks to understand how activists are encouraging audiences to identify with their work in digital spaces through a case study of Beyoncé Knowles-Carter's activism. The current scholarship surrounding digital activism is extensive and has offered a detailed look at individual tools used in activist movements, but there is a lack of research that recognizes the complex network of tools that are often used by an activist or activist group. To address this gap in the research, this thesis offers an analysis of three specific activist tools used by Beyoncé to encourage her fans and other audiences to identify with and participate in her activism. This study investigates the methods Beyoncé employs to get her multiple audiences informed and engaged through an analysis of her activist blog, the "Formation" music video, and her live performance during the 2016 Super Bowl halftime show. Specifically, the purpose of this study is to assess, from a rhetorical standpoint, how Beyoncé is inviting her audiences to respond and become engaged. The analysis of these three activist tools utilizes qualitative data analysis, focusing on Burke's (1969) concept of rhetorical identification to understand how her activist messages are presented across mediums. To expand on the findings of this analysis, a reception study on Beyoncé's "Formation" music video and 2016 Super Bowl performance was conducted to gauge the success of her rhetorical methods. The findings of this study recognize the need to continue looking at the multiple tools used by activists to understand the complexity of their rhetorical work online. This study also provides methods for analyzing the intertextual nature of digital activism so that further research can be done. While this study begins to address the gap in the current scholarship, more research needs to be done to study the current rhetorical practices of digital activists.
- Date Issued
- 2017
- Identifier
- CFE0006557, ucf:51344
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006557
- Title
- Arterial-level real-time safety evaluation in the context of proactive traffic management.
- Creator
-
Yuan, Jinghui, Abdel-Aty, Mohamed, Eluru, Naveen, Hasan, Samiul, Cai, Qing, Wang, Liqiang, University of Central Florida
- Abstract / Description
-
In the context of proactive traffic management, real-time safety evaluation is one of the most important components. Previous studies on real-time safety analysis mainly focused on freeways, seldom on arterials. With the advancement of sensing technologies and smart city initiatives, more and more real-time traffic data sources are available on arterials, which enables us to evaluate real-time crash risk on arterials. However, there exist substantial differences between arterials and freeways in terms of traffic flow characteristics, data availability, and even crash mechanisms. Therefore, this study aims to evaluate the real-time crash risk on arterials in depth and from multiple aspects by integrating all available data sources. First, Bayesian conditional logistic models (BCL) were developed to examine the relationship between crash occurrence on arterial segments and real-time traffic and signal timing characteristics by incorporating the Bluetooth, adaptive signal control, and weather data extracted from four urban arterials in Central Florida. Second, real-time intersection-approach-level crash risk was investigated by considering the effects of real-time traffic, signal timing, and weather characteristics based on 23 signalized intersections in Orange County. Third, a deep learning algorithm for real-time crash risk prediction at signalized intersections was proposed based on Long Short-Term Memory (LSTM) and the Synthetic Minority Over-Sampling Technique (SMOTE). Moreover, in-depth cycle-level real-time crash risk at signalized intersections was explored based on high-resolution event-based data (i.e., Automated Traffic Signal Performance Measures (ATSPM)). All possible real-time cycle-level factors were considered, including traffic volume, signal timing, headway and occupancy, traffic variation between upstream and downstream detectors, shockwave characteristics, and weather conditions. Overall, comprehensive real-time safety evaluation algorithms were developed for arterials, which will be key components of future real-time safety applications (e.g., a real-time crash risk prediction and visualization system) in the context of proactive traffic management. (An illustrative SMOTE-plus-LSTM sketch follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007743, ucf:52398
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007743
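A compact, illustrative sketch (not the dissertation's model) of combining SMOTE over-sampling with an LSTM classifier for rare-event crash-risk prediction. The synthetic data, window length, and layer sizes are assumptions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

rng = np.random.default_rng(3)
n_samples, timesteps, n_features = 1000, 6, 4     # e.g., 6 time slices x 4 traffic features per window
X = rng.standard_normal((n_samples, timesteps, n_features))
y = (rng.random(n_samples) < 0.05).astype(int)    # ~5% "crash" cases: highly imbalanced

# SMOTE operates on 2D arrays, so flatten each time window, resample, then reshape back.
X_flat = X.reshape(n_samples, timesteps * n_features)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_flat, y)
X_res = X_res.reshape(-1, timesteps, n_features)

model = keras.Sequential([
    keras.layers.Input(shape=(timesteps, n_features)),
    keras.layers.LSTM(32),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[keras.metrics.AUC()])
model.fit(X_res, y_res, epochs=3, batch_size=64, verbose=0)

print("predicted crash probabilities for 5 windows:", model.predict(X[:5], verbose=0).ravel())
```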
- Title
- POLICE ORGANIZATIONAL PERFORMANCE IN THE STATE OF FLORIDA: CONFIRMATORY ANALYSIS OF THE RELATIONSHIP OF THE ENVIRONMENT AND DESIGN STRUCTURE TO PERFORMANCE.
- Creator
-
Goltz, Jeffrey, Wan, Thomas, University of Central Florida
- Abstract / Description
-
To date, police organizations have not been rigorously analyzed by organizational scholars, and most analysis of these organizations has been captured through a single construct. The purpose of this study is to develop confirmatory police organizational analysis by validating a multi-dimensional conceptual framework that explains the relationships among three constructs: environmental constraints, the design structures of police organizations, and organizational performance indicators. The modeling is deeply rooted in contingency theory, and the influence of isomorphism and institutional theory on the covariance structure model is investigated. One hundred and thirteen local police organizations from the State of Florida are included in this non-experimental, cross-sectional study to determine the direct effect of the environmental constraints on the performance of police organizations, the indirect effect of environmental constraints on performance via organizational design structure, and the direct effect of organizational design structure on the performance of police organizations. For the first time, structural equation modeling and data envelopment analysis are used together to confirm the effects of the environment on police organization structure and performance. The results indicate that environmental socio-economic disparity indicators have a large positive effect on police resources and a medium effect on police efficiency. Propensity-of-crime indicators have a large negative effect on police resources, and population density has a small to medium negative effect on crime clearance. Structure has a much smaller effect on performance than the environment. The results of the efficiency analysis revealed unexpected findings: three of the top five largest police organizations in the study scored maximum efficiency. The cause of this unexpected result is explained and confirmed in the covariance model. The study methodology and results enhance the understanding of the relationship among the constructs while subjecting environmental and police organizational data to two comprehensive analytical techniques. The policy implications and practical contributions of the study provide new knowledge and information for the organizational management of police organizations. Furthermore, the study establishes a new approach to police organizational analysis and police services management research, called Police Services Management Research (PSMR), that encompasses a variety of disciplines with a primary responsibility of theory building and the selection of theoretical frameworks.
- Date Issued
- 2006
- Identifier
- CFE0001363, ucf:47000
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001363
- Title
- PREVENTING CHILDHOOD OBESITY IN SCHOOL-AGED CHILDREN: RELATIONSHIPS BETWEEN READING NUTRITION LABELS AND HEALTHY DIETARY BEHAVIORS.
- Creator
-
Bogers, Kimberly S, Quelly, Susan, University of Central Florida
- Abstract / Description
-
Childhood obesity is a prevalent problem in the United States. Obesity increases the risk for many diseases. Obese children are likely to become obese adults with additional comorbidities. Studies have reported mixed findings regarding associations between reading nutrition labels and improved dietary behaviors/healthy weight status. The purpose of this study is to determine whether the frequency of children reading nutrition labels is related to the frequency of performing 12 dietary behaviors. De-identified baseline data from a previous quasi-experimental pilot study were analyzed. Data were collected from 4th and 5th graders (n = 42) at an after-school program. An adapted paper survey was administered to the children to measure the number of days (0 to 7) they read nutrition labels and performed 12 dietary behaviors over the preceding week. Due to the non-normal distribution of the data, non-parametric Spearman rho correlations were conducted to determine relationships between the frequency of reading nutrition labels and the dietary behaviors. Positive correlations were found between the frequency of reading nutrition labels and eating fruit for breakfast, eating vegetables at lunch/dinner, and eating whole grain/multigrain bread (p < .05), as well as eating fruit for a snack and eating vegetables for a snack (p < .01). Frequency of reading nutrition labels was inversely related to drinking soda/sugar-sweetened beverages (p < .05). Significant relationships were found between the frequency of reading nutrition labels and several dietary behaviors associated with childhood obesity prevention. Findings are promising and support the need for further intervention research to determine potential direct influences of children reading nutrition labels on dietary behaviors. (A tiny Spearman correlation example follows this record.)
- Date Issued
- 2018
- Identifier
- CFH2000281, ucf:45722
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH2000281
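A tiny illustration of the kind of non-parametric correlation analysis reported above: Spearman's rho between days reading nutrition labels and days performing a dietary behavior. The numbers are fabricated example values (0-7 days per week), not the study's data.

```python
from scipy.stats import spearmanr

# Fabricated weekly counts for 12 hypothetical children.
days_reading_labels = [0, 2, 7, 3, 5, 1, 6, 4, 0, 7, 2, 5]
days_eating_fruit   = [1, 3, 6, 2, 5, 0, 7, 4, 1, 6, 2, 4]

rho, p_value = spearmanr(days_reading_labels, days_eating_fruit)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```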
- Title
- INTEGRATING THEORY, PRACTICE AND POLICY: THE TECHNICAL EFFICIENCY AND PRODUCTIVITY OF FLORIDA'S CIRCUIT COURTS.
- Creator
-
Ferrandino, Joseph, Wan, Thomas T.H., University of Central Florida
- Abstract / Description
-
In 1998, Florida voters approved Article V, Revision 7, which changed the funding mechanism of the state circuit court system from a county/state mix to state responsibility. The change was implemented as planned in the 2004/05 fiscal year. Although increased efficiency was a key goal of Revision 7, to date no published studies exist on the impacts of Revision 7 on circuit or system efficiency and/or productivity. This work analyzes Revision 7, integrating the larger debate of increasing judgeships versus improving efficiency. The study is a full performance analysis of the Florida circuit courts from 1993 through 2008 that can benchmark the system's future efficiency and productivity; in that respect, top performers are identified. The study follows the evolution of court studies from their rational origins to the more recent orientation toward open-natural systems. Resource dependency and institutional theory, two open-natural system frameworks, are utilized to predict that Florida's circuit courts have become more efficient over the period since the implementation of Revision 7. The efficiency outcomes are expected to be unequal across circuit sizes. Integrating a Florida debate into a larger one that transcends time and culture, productivity changes are expected to be a function of the number of judges that a circuit adds within a given year, controlling for other factors. The results of the study methodologies (data envelopment analysis, the Malmquist Productivity Index, hierarchical regression analysis, and analysis of covariance) reveal that only 3 of 300 DMUs in Florida are technically efficient; the mean IOTA score is .76. The Florida circuits did not improve efficiency and productivity as expected, in fact becoming significantly less efficient over time as a function of Revision 7. Small and medium-sized circuits lost efficiency, large circuits showed no change, and there was a significant interaction between circuit size and the Revision 7 period. Within the system overall, productivity fell by 2.7%, most noticeably in the small and medium-sized circuits. The number of judges a circuit added explained 32.2% of the variance in total factor productivity change. The largest system productivity losses followed both Revision 7 intervention years and the addition of the most judges in a single year. Analysis of covariance revealed that productivity increased only when no judges were added to a circuit, regardless of circuit size or time period (+2.6%). The addition of a single judge reduced average productivity by 8.6%; adding two judges reduced productivity by 10.5%; and adding three or more judges reduced productivity by 16.2%. As judges were added, productivity declined in circuits of all sizes, but the drop was more pronounced in the small and medium-sized circuits. None of the circuits showed an increase in productivity from 1993 to 2008. Revision 7 has not increased circuit court efficiency or productivity in Florida. It is recommended that efficiency and productivity analyses be included in resource allocation decisions such as adding judgeships. More data on court structures and processes are needed. Efficiency and productivity measures show that the current level of circuit court judgeships is sufficient. (A minimal DEA linear-programming sketch follows this record.)
- Date Issued
- 2010
- Identifier
- CFE0003457, ucf:52888
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003457
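The record above scores circuits with data envelopment analysis. Below is a minimal sketch of an input-oriented, constant-returns-to-scale DEA model (the CCR envelopment form) in Python; the input/output figures and the choice of judges, staff, and case dispositions as variables are hypothetical assumptions for illustration, not data or code from the study.

```python
# Sketch: input-oriented CCR DEA via linear programming (hypothetical data).
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (e.g. circuits); columns = inputs / outputs (hypothetical figures)
X = np.array([[10.0, 4.0], [12.0, 5.0], [8.0, 6.0]])  # inputs: judges, staff
Y = np.array([[900.0], [1000.0], [850.0]])            # output: case dispositions

def dea_efficiency(o, X, Y):
    """Technical efficiency of DMU `o` (1.0 = efficient, lower = inefficient)."""
    n, m = X.shape                 # n DMUs, m inputs
    s = Y.shape[1]                 # s outputs
    c = np.r_[1.0, np.zeros(n)]    # decision vars: [theta, lambda_1..lambda_n]; minimize theta
    # input constraints:  sum_j lambda_j * x_ij <= theta * x_io
    A_in = np.hstack([-X[[o]].T, X.T])
    # output constraints: sum_j lambda_j * y_rj >= y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]])
    return res.fun

print([round(dea_efficiency(o, X, Y), 3) for o in range(len(X))])
```

A score of 1.0 marks a technically efficient DMU; lower scores indicate how far inputs could be contracted proportionally while maintaining the same outputs.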
- Title
- Reliable Spectrum Hole Detection in Spectrum-Heterogeneous Mobile Cognitive Radio Networks via Sequential Bayesian Non-parametric Clustering.
- Creator
-
Zaeemzadeh, Alireza, Rahnavard, Nazanin, Vosoughi, Azadeh, Qi, GuoJun, University of Central Florida
- Abstract / Description
-
In this work, the problem of detecting radio spectrum opportunities in spectrum-heterogeneous cognitive radio networks is addressed. Spectrum opportunities are the frequency channels that are underutilized by the primary licensed users. Thus, by enabling unlicensed users to detect and utilize them, we can improve the efficiency, reliability, and flexibility of radio spectrum usage. The main objective of this work is to discover spectrum opportunities in the time, space, and frequency domains by proposing a low-cost and practical framework. Spectrum-heterogeneous networks are networks in which different sensors experience different spectrum opportunities. Thus, the sensing data from the sensors cannot simply be combined to reach consensus and detect the spectrum opportunities. Moreover, unreliable data, caused by noise or malicious attacks, degrades the performance of the decision-making process. The problem becomes even more challenging when the locations of the sensors are unknown. In this work, a probabilistic model is proposed to cluster the sensors based on their readings, without requiring any knowledge of the sensors' locations. The complexity of the model, which is the number of clusters, is automatically inferred from the sensing data. The processing node, also referred to as the base station or the fusion center, infers the probability distributions of cluster memberships, channel availabilities, and device reliabilities in an online manner. After receiving each chunk of sensing data, the probability distributions are updated without repeating the computations on previous sensing data. All the update rules are derived mathematically, employing Bayesian data analysis techniques and variational inference. Furthermore, the inferred probability distributions are employed to assign unique spectrum opportunities to each of the sensors. To avoid interference among the sensors, physically adjacent devices should not utilize the same channels. However, since the locations of the devices are not known, cluster membership information is used as a measure of adjacency. This is based on the assumption that the measurements of the devices are spatially correlated, so adjacent devices, which experience similar spectrum opportunities, belong to the same cluster. The problem is then mapped into an energy minimization problem and solved via graph cuts. The goal of the proposed graph-theory-based method is to assign each device an available channel while avoiding interference among neighboring devices. Numerical simulations illustrate the effectiveness of the proposed methods compared to existing frameworks.
- Date Issued
- 2017
- Identifier
- CFE0006963, ucf:51639
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006963
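The record above infers channel availabilities online, updating probability distributions after each chunk of sensing data. The sketch below illustrates only that online conjugate-update idea with a simple Beta-Bernoulli model per cluster and channel; the dissertation's actual method is a Bayesian non-parametric mixture fitted with variational inference, and every name and number here is an illustrative assumption.

```python
# Sketch: online Beta-Bernoulli updates of per-cluster channel availability
# (a conjugate-update toy, not the dissertation's variational non-parametric model).
import numpy as np

n_clusters, n_channels = 3, 5
a = np.ones((n_clusters, n_channels))  # Beta pseudo-counts for "channel free"
b = np.ones((n_clusters, n_channels))  # Beta pseudo-counts for "channel busy"

def update(chunk, cluster_of):
    """chunk: sensor_id -> 0/1 vector of length n_channels (1 = sensed free).
    cluster_of: sensor_id -> cluster index (assumed known in this toy)."""
    for sensor, report in chunk.items():
        k = cluster_of[sensor]
        a[k] += report        # accumulate "free" observations
        b[k] += 1 - report    # accumulate "busy" observations

def availability():
    """Posterior mean probability that each channel is free, per cluster."""
    return a / (a + b)

# one hypothetical chunk of sensing reports from two sensors
update({0: np.array([1, 0, 1, 1, 0]), 1: np.array([1, 1, 0, 1, 0])},
       cluster_of={0: 0, 1: 2})
print(availability().round(2))
```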
- Title
- Defining a Stakeholder-Relative Model to Measure Academic Department Efficiency at Achieving Quality in Higher Education.
- Creator
-
Robinson, Federica, Sepulveda, Jose, Reilly, Charles, Nazzal, Dima, Armacost, Robert, Feldheim, Mary, University of Central Florida
- Abstract / Description
-
In a time of strained resources and dynamic environments, effective and efficient systems are critical. This dissertation was developed to address the need to use feedback from multiple stakeholder groups to define quality and assess an entity's efficiency at achieving such quality. A decision support model with applicability to diverse domains was introduced to outline the approach. Three phases, (1) quality model development, (2) input-output selection, and (3) relative efficiency assessment, captured the essence of the process and delineated the approach per tool applied. This decision support model was adapted to higher education to assess academic departmental efficiency at achieving stakeholder-relative quality. Phase 1 was accomplished through a three-round, Delphi-like study involving user group refinement. Those results were compared to the criteria of an engineering accreditation body (ABET) to support the model's validity for capturing quality in the College of Engineering & Computer Science, its departments, and its programs. In Phase 2, the Analytic Hierarchy Process (AHP) was applied to the validated model to quantify the perspectives of students, administrators, faculty, and employers (SAFE). Using the composite preferences of the collective group (n=74), the model was limited to the top seven attributes, which accounted for about 55% of total preferences. Data corresponding to the resulting variables, referred to as key performance indicators, were collected from various information sources and used in the data envelopment analysis (DEA) methodology (Phase 3). This process revealed both efficient and inefficient departments while offering transparency into opportunities to maximize quality outputs. The findings validate the potential of the Delphi-like, analytic hierarchical, data envelopment analysis approach for administrative decision-making in higher education. However, more meaningful metrics and data are required to adapt the model for decision-making purposes. Several recommendations were included to improve the usability of the decision support model, and future research opportunities were identified to extend the inherent analyses and apply the model to alternative areas.
- Date Issued
- 2013
- Identifier
- CFE0004921, ucf:49636
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004921
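The record above uses the Analytic Hierarchy Process to turn stakeholder pairwise comparisons into attribute weights before the DEA step. Below is a minimal sketch of the standard AHP computation (principal-eigenvector priorities plus a consistency check); the 3x3 comparison matrix is a hypothetical example, not the SAFE stakeholder data from the study.

```python
# Sketch: AHP priority weights from a reciprocal pairwise-comparison matrix
# (Saaty 1-9 scale), with a consistency check. Matrix values are hypothetical.
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal eigenvalue of the matrix
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized priority weights (sum to 1)

ci = (eigvals[k].real - len(A)) / (len(A) - 1)   # consistency index
cr = ci / 0.58                                   # random index RI = 0.58 for n = 3
print(w.round(3), round(cr, 3))  # CR below 0.10 is the usual acceptability cut-off
```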
- Title
- Fusing Freight Analysis Framework and Transearch Data: An Econometric Data Fusion Approach.
- Creator
-
Momtaz, Salah Uddin, Eluru, Naveen, Abdel-Aty, Mohamed, Anowar, Sabreena, Zheng, Qipeng, University of Central Florida
- Abstract / Description
-
A major hurdle in freight demand modeling has always been the lack of adequate data on freight movements for different industry sectors for planning applications. The Freight Analysis Framework (FAF) and Transearch (TS) databases contain annualized commodity flow data. The primary motivation for our study is the development of a fused database from FAF and TS to realize transportation network flows at a fine spatial resolution (county level) while accommodating production and consumption behavioral trends (provided by TS). Towards this end, we formulate and estimate a joint econometric model framework grounded in a maximum likelihood approach to estimate county-level commodity flows. The algorithm is implemented on the commodity flow information from the 2012 FAF and 2011 TS databases to generate transportation network flows for the 67 counties in Florida. The data fusion process considers several exogenous variables, including origin-destination indicator variables, socio-demographic and socio-economic indicators, and transportation infrastructure indicators. Subsequently, the algorithm is implemented to develop freight flows for the Florida region, considering inflows and outflows across the US and neighboring countries. The base-year models are then employed to predict future-year data for 2015 through 2040 in 5-year increments at the same spatial level. Furthermore, we disaggregate the county-level flows obtained from the algorithm to a finer resolution: the statewide transportation analysis zones (SWTAZ) defined by FDOT. The disaggregation process allocates truck-based commodity flows from a 79-zone system to an 8835-zone system. A two-stage factor multiplication method is proposed to disaggregate the county flows to SWTAZ flows. The factors are estimated at both the origin and destination levels using a random utility fractional split model approach. Finally, we conduct a sensitivity analysis of the parameterization by evaluating the model structure for different numbers of intermediate stops in a route and/or different numbers of available routes between origin-destination pairs.
- Date Issued
- 2018
- Identifier
- CFE0007763, ucf:52384
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007763
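The record above splits county-level flows across SWTAZs using factors from a random utility fractional split model. The sketch below shows only the share-and-multiply idea with a logit-type fractional split over zonal attributes; the attributes, coefficients, and flow total are assumptions made for illustration, not values estimated in the study.

```python
# Sketch: allocating a county-level flow to zones with logit-type fractional
# split shares built from (hypothetical) zonal attributes and coefficients.
import numpy as np

county_flow = 120_000.0                 # annual tons for one county (hypothetical)
zone_attrs = np.array([[2.1, 0.4],      # e.g. log(employment), highway access index
                       [1.5, 0.9],
                       [3.0, 0.2]])
beta = np.array([0.8, 1.2])             # assumed utility coefficients

utility = zone_attrs @ beta
shares = np.exp(utility) / np.exp(utility).sum()   # shares sum to 1
zone_flows = county_flow * shares                  # flows sum to county_flow

print(shares.round(3), zone_flows.round(0))
```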