Current Search: metrics
-
-
Title
-
A LIFE CYCLE SOFTWARE QUALITY MODEL USING BAYESIAN BELIEF NETWORKS.
-
Creator
-
Beaver, Justin, Schiavone, Guy, University of Central Florida
-
Abstract / Description
-
Software practitioners lack a consistent approach to assessing and predicting quality within their products. This research proposes a software quality model that accounts for the influences of development team skill/experience, process maturity, and problem complexity throughout the software engineering life cycle. The model is structured using Bayesian Belief Networks and, unlike previous efforts, uses widely accepted software engineering standards and in-use industry techniques to quantify the indicators and measures of software quality. Data from 28 software engineering projects were acquired for this study and used for validation and comparison of the presented software quality models. Three Bayesian model structures are explored, and the structure with the highest performance in terms of accuracy of fit and predictive validity is reported. In addition, the Bayesian Belief Networks are compared to both Least Squares Regression and Neural Networks in order to identify the technique best suited to modeling software product quality. The results indicate that Bayesian Belief Networks outperform both Least Squares Regression and Neural Networks in terms of producing modeled software quality variables that fit the distribution of actual software quality values, and in accurately forecasting 25 different indicators of software quality. Among the Bayesian model structures, the simplest structure, which relates software quality variables to their correlated causal factors, was found to be the most effective in modeling software quality. In addition, the results reveal that the collective skill and experience of the development team, more than process maturity or problem complexity, has the most significant impact on the quality of software products.
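The best-performing structure described above relates quality variables directly to their causal factors. As a rough illustration only, the following sketch hand-rolls a tiny network of that shape; the variable names, probabilities, and inference query are hypothetical and are not taken from the study.

# Minimal hand-rolled Bayesian Belief Network sketch: one quality indicator
# conditioned on three binary causal factors. All names and probabilities
# below are hypothetical illustrations, not values from the dissertation.

from itertools import product

# Prior probability that each causal factor is "high" (True).
priors = {"team_skill": 0.6, "process_maturity": 0.5, "problem_complexity": 0.4}

def p_good_quality(skill, maturity, complexity):
    """P(quality is good | team_skill, process_maturity, problem_complexity)."""
    base = 0.2
    base += 0.4 if skill else 0.0        # skill assumed the strongest driver
    base += 0.2 if maturity else 0.0
    base -= 0.1 if complexity else 0.0
    return min(max(base, 0.0), 1.0)

def posterior_skill_given_good_quality():
    """P(team_skill = high | quality = good), by enumerating the joint."""
    num = den = 0.0
    for skill, maturity, complexity in product([True, False], repeat=3):
        p = (priors["team_skill"] if skill else 1 - priors["team_skill"]) \
            * (priors["process_maturity"] if maturity else 1 - priors["process_maturity"]) \
            * (priors["problem_complexity"] if complexity else 1 - priors["problem_complexity"]) \
            * p_good_quality(skill, maturity, complexity)
        den += p
        if skill:
            num += p
    return num / den

print(f"P(high team skill | good quality) = {posterior_skill_given_good_quality():.3f}")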
-
Date Issued
-
2006
-
Identifier
-
CFE0001367, ucf:46993
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001367
-
-
Title
-
FACTORS INFLUENCING USER-LEVEL SUCCESS IN POLICE INFORMATION SHARING: AN EXAMINATION OF FLORIDA'S FINDER SYSTEM.
-
Creator
-
Scott, Jr, Ernest, Reynolds, Kenneth, University of Central Florida
-
Abstract / Description
-
An important post-9/11 objective has been to connect law enforcement agencies so they can share information that is routinely collected by police. This low-level information, gathered from sources such as traffic tickets, calls for service, incident reports and field contacts, is not widely shared but might account for as much as 97% of the data held in police records systems. U.S. policy and law assume that access to this information advances crime control and counterterrorism efforts. The scarcity of functioning systems has limited research opportunities to test this assumption or offer guidance to police leaders considering investments in information sharing. However, this study had access to FINDER, a Florida system that shares low-level data among 121 police agencies. The user-level value of FINDER was empirically examined using Goodhue's (1995) Task-Technology Fit framework. Objective system data from 1,352 users, user-reported "successes," and a survey of 402 active users helped define parameters of user-level success. Of the users surveyed, 68% reported arrests or case clearances, 71% reported improved performance, and 82% reported improved efficiency attributed to FINDER. Regression models identified system use, task-fit, and user characteristic measures that predicted changes in users' individual performance. A key finding was that FINDER affirmed the importance of sharing low-level police data, and successful outcomes were related to its ease of use and access to user-specified datasets. Also, users employed a variety of information-seeking techniques that were related to their task assignments. Improved understanding of user-defined success and system use techniques can inform the design and functionality of information sharing systems. Further, this study contributes to addressing the critical requirement for developing information sharing system metrics.
-
Date Issued
-
2006
-
Identifier
-
CFE0001503, ucf:47139
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001503
-
-
Title
-
ANALYSIS OF COMPLEXITY AND COUPLING METRICS OF SUBSYSTEMS IN LARGE SCALE SOFTWARE SYSTEMS.
-
Creator
-
Ramakrishnan, Harish, Eaglin, Ronald, University of Central Florida
-
Abstract / Description
-
Dealing with the complexity of large-scale systems can be a challenge for even the most experienced software architects and developers. Large-scale software systems can contain millions of elements, which interact to achieve the system functionality. Managing and representing the complexity involved in the interaction of these elements is a difficult task. We propose an approach for analyzing the reusability, maintainability and complexity of such a complex large-scale software system. Reducing the dependencies between subsystems increases reusability and decreases the effort needed to maintain the system, thus reducing its complexity. Coupling is an attribute that summarizes the degree of interdependence or connectivity among subsystems and within subsystems. When used in conjunction with measures of other attributes, coupling can contribute to an assessment or prediction of software quality. As part of this work, we developed a set of metrics for measuring coupling at the subsystem level in a large-scale software system. These metrics do not take into account the complexity internal to a subsystem and treat each subsystem as a single entity. Such a dependency metric makes it possible to predict the cost and effort needed to maintain the system, to predict the reusability of the system parts, and to predict the complexity of the system. The greater the dependency, the higher the cost to maintain and reuse the software; likewise, the complexity and cost of the system will be high if the coupling is high. We built a large-scale system, implemented these research ideas, and analyzed how these measures help in minimizing complexity and system cost. We also showed that these coupling measures help in refactoring the system design.
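As a rough illustration of a subsystem-level coupling count of the kind described above (treating each subsystem as a single entity and ignoring its internal complexity), the sketch below tallies cross-subsystem dependencies; the example graph and counting rule are hypothetical, not the thesis's metrics.

# Illustrative subsystem-level coupling metric: count the distinct other
# subsystems each subsystem depends on, ignoring internal element complexity.

from collections import defaultdict

# element -> subsystem it belongs to (hypothetical example system)
subsystem_of = {"a": "UI", "b": "UI", "c": "Core", "d": "Core", "e": "DB"}

# element-level dependency edges (caller, callee)
dependencies = [("a", "c"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "e")]

def subsystem_coupling(subsystem_of, dependencies):
    """For each subsystem, the number of other subsystems it depends on."""
    out_edges = defaultdict(set)
    for src, dst in dependencies:
        s, t = subsystem_of[src], subsystem_of[dst]
        if s != t:                      # only cross-subsystem edges count
            out_edges[s].add(t)
    return {s: len(targets) for s, targets in out_edges.items()}

print(subsystem_coupling(subsystem_of, dependencies))
# {'UI': 2, 'Core': 1} -- higher counts suggest costlier maintenance and reuse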
-
Date Issued
-
2006
-
Identifier
-
CFE0001031, ucf:46818
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001031
-
-
Title
-
IS ECONOMIC VALUE ADDED (EVA) THE BEST WAY TO ASSEMBLE A PORTFOLIO?.
-
Creator
-
Pataky, Tamas, Gilkeson, James, University of Central Florida
-
Abstract / Description
-
In search of a better investment metric, researchers began to study Economic Value Added, or EVA, which was introduced in 1991 by Stern Stewart & Co in their book, "The Quest for Value" (Turvey, 2000). Stern Stewart & Co devised EVA as a better alternative for evaluating investment projects within the corporate finance field; it was later considered for use as a performance metric by investors. A wide array of multinational corporations, such as Coca-Cola, Briggs and Stratton, and AT&T, adopted the EVA method, which led to EVA's worldwide acclaim. Several points in the study reveal that EVA does not offer less risk, higher returns, or more adaptability for an investor. In fact, EVA underperformed the traditional portfolio performance metrics in key measurements, including mean returns and confidence intervals. EVA is a difficult performance metric to calculate, with several complex components, such as NOPAT, cost of equity, and cost of debt, each of which can be calculated in several different ways. Any information that is inaccurate or lacking can significantly impact the outcomes. Traditional performance metrics such as ROA, ROE, and E/P, on the other hand, are simple to calculate, with few components and only one way to calculate them.
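For reference, the textbook form of EVA charges operating profit for the cost of the capital employed: EVA = NOPAT - WACC x invested capital. The sketch below works one hypothetical example; the figures are made up and are not drawn from the study.

# EVA in its textbook form. All input figures are illustrative.

def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital, with a tax shield on debt."""
    total = equity + debt
    return (equity / total) * cost_of_equity + (debt / total) * cost_of_debt * (1 - tax_rate)

def eva(nopat, invested_capital, wacc_rate):
    """Economic Value Added: operating profit left after charging for capital."""
    return nopat - wacc_rate * invested_capital

rate = wacc(equity=600.0, debt=400.0, cost_of_equity=0.10, cost_of_debt=0.06, tax_rate=0.30)
print(f"WACC = {rate:.4f}")                                                       # 0.0768
print(f"EVA  = {eva(nopat=95.0, invested_capital=1000.0, wacc_rate=rate):.1f}")   # 18.2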
-
Date Issued
-
2012
-
Identifier
-
CFH0004289, ucf:44909
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFH0004289
-
-
Title
-
Content-based Information Retrieval via Nearest Neighbor Search.
-
Creator
-
Huang, Yinjie, Georgiopoulos, Michael, Anagnostopoulos, Georgios, Hu, Haiyan, Sukthankar, Gita, Ni, Liqiang, University of Central Florida
-
Abstract / Description
-
Content-based information retrieval (CBIR) has attracted significant interest in the past few years. When given a search query, the search engine compares the query with all the stored information in the database through nearest neighbor search and returns the most similar items. We contribute the following to CBIR research: first, Distance Metric Learning (DML) is studied to improve the retrieval accuracy of nearest neighbor search; additionally, Hash Function Learning (HFL) is considered to accelerate the retrieval process. On one hand, a new local metric learning framework is proposed: Reduced-Rank Local Metric Learning (R2LML). By considering a conical combination of Mahalanobis metrics, the proposed method is able to better capture information such as the data's similarity and location. A regularization term to suppress noise and avoid over-fitting is also incorporated into the formulation. Based on the different methods used to infer the weights of the local metrics, we consider two frameworks: Transductive Reduced-Rank Local Metric Learning (T-R2LML), which utilizes transductive learning, and Efficient Reduced-Rank Local Metric Learning (E-R2LML), which employs a simpler and faster approximate method. We also study the convergence properties of the proposed block coordinate descent algorithms for both frameworks, and extensive experiments show the superiority of our approaches. On the other hand, *Supervised Hash Learning (*SHL), which can be used in supervised, semi-supervised and unsupervised learning scenarios, is proposed in the dissertation. By considering several codewords which can be learned from the data, the proposed method naturally reduces to several Support Vector Machine (SVM) problems. After providing an efficient training algorithm, we also study the theoretical generalization bound of the new hashing framework. In the final experiments, *SHL outperforms many other popular hash function learning methods. Additionally, in order to cope with large data sets, we also conducted experiments on big data using a parallel computing software package, namely LIBSKYLARK.
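The building block behind such (local) metric learning methods is the Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L positive semi-definite; R2LML combines several such metrics conically. The sketch below shows a single, hand-chosen metric for illustration only.

# Mahalanobis distance under a learned linear transform L (so M = L^T L).
# L and the sample points are arbitrary illustrations.

import numpy as np

def mahalanobis(x, y, L):
    """Distance under the metric M = L^T L, i.e. ||L(x - y)||."""
    diff = L @ (x - y)
    return float(np.sqrt(diff @ diff))

L = np.array([[2.0, 0.0],     # a "learned" transform; here chosen by hand
              [0.0, 0.5]])
x = np.array([1.0, 1.0])
y = np.array([0.0, 0.0])

print(mahalanobis(x, y, L))          # ~2.06: the first axis is weighted up
print(float(np.linalg.norm(x - y)))  # ~1.41: plain Euclidean, for comparison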
-
Date Issued
-
2016
-
Identifier
-
CFE0006327, ucf:51544
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006327
-
-
Title
-
DEGREE OF APPROXIMATION OF HÖLDER CONTINUOUS FUNCTIONS.
-
Creator
-
Landon, Benjamin, Mohapatra, Ram, University of Central Florida
-
Abstract / Description
-
Pratima Sadangi, in a Ph.D. thesis submitted to Utkal University, proved results on the degree of approximation of functions by operators associated with their Fourier series. In this dissertation, we consider the degree of approximation of functions in H_(α,p) by different operators. In Chapter 1 we mention basic definitions needed for our work. In Chapter 2 we discuss different methods of summation. In Chapter 3 we define the H_(α,p) metric and present the degree of approximation problem relating to Fourier series and conjugate series of functions in the H_(α,p) metric using Karamata (K^λ) means. In Chapter 4 we present the degree of approximation of an integral associated with the conjugate series by the Euler, Borel and (e,c) means of a series analogous to the Hardy-Littlewood series in the H_(α,p) metric. In Chapter 5 we propose problems to be solved in the future.
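For context, a commonly used definition of the Hölder class and of an H_(α,p)-type norm is sketched below in LaTeX; the dissertation's exact definition of H_(α,p) may differ in detail.

% f is Holder continuous of order \alpha, 0 < \alpha \le 1, if
% |f(x) - f(y)| \le K |x - y|^{\alpha} for all x, y. Associated norms:
\[
  \|f\|_{H_{\alpha}} \;=\; \|f\|_{\infty}
    \;+\; \sup_{x \neq y} \frac{|f(x)-f(y)|}{|x-y|^{\alpha}},
\]
\[
  \|f\|_{H_{\alpha,p}} \;=\; \|f\|_{p}
    \;+\; \sup_{t \neq 0} \frac{\|f(\cdot + t) - f(\cdot)\|_{p}}{|t|^{\alpha}},
  \qquad 1 \le p \le \infty,
\]
% so that H_{\alpha,\infty} recovers the classical Holder (Lipschitz-\alpha) class.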
-
Date Issued
-
2008
-
Identifier
-
CFE0002414, ucf:47730
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002414
-
-
Title
-
AUTOMATIC ANNOTATION OF DATABASE IMAGES FOR QUERY-BY-CONCEPT.
-
Creator
-
Hiransakolwong, Nualsawat, Hua, Kien A., University of Central Florida
-
Abstract / Description
-
As digital images become ubiquitous in many applications, the need for efficient and effective retrieval techniques is more demanding than ever. Query by Example (QBE) and Query by Concept (QBC) are among the most popular query models. The former model accepts example images as queries and searches for similar ones based on low-level features such as colors and textures. The latter model allows queries to be expressed in the form of high-level semantics or concept words, such as "boat" or "car," and finds images that match the specified concepts. Recent research has focused on the connections between these two models and attempts to close the semantic gap between them. This research involves finding the best method that maps a set of low-level features into high-level concepts. Automatic annotation techniques are investigated in this dissertation to facilitate QBC. In this approach, sets of training images are used to discover the relationship between low-level features and predetermined high-level concepts. The best mapping with respect to the training sets is proposed and used to analyze images, annotating them with the matched concept words. One principal difference between QBE and QBC is that, while similarity matching in QBE must be done at query time, QBC performs concept exploration off-line. This difference allows QBC techniques to shift the time-consuming task of determining similarity away from the query time, thus accommodating the additional processing time required for increasingly accurate matching. Consequently, QBC's primary design objective is to achieve accurate annotation within a reasonable processing time. This objective is the guiding principle in the design of the following proposed methods, which facilitate image annotation:

1. A novel dynamic similarity function. This technique allows users to query with multiple examples: relevant, irrelevant or neutral. It uses the range distance in each group to automatically determine weights in the distance function. Among the advantages of this technique are higher precision and recall rates with fast matching time.

2. Object recognition based on skeletal graphs. The topologies of objects' skeletal graphs are captured and compared at the node level. Such a graph representation preserves the skeletal graph's coherence without sacrificing the flexibility of matching similar portions of graphs across different levels. The technique is robust to translation, scaling, and rotation at the object level, and achieves high precision and recall rates with reasonable matching time and storage space.

3. ASIA (Automatic Sampling-based Image Annotation), a technique based on a new sampling-based matching framework that allows users to identify their area of interest. ASIA eliminates noise, or irrelevant areas of the image, and is robust to translation, scaling, and rotation at the object level. This technique also achieves high precision and recall rates.

While the above techniques may not be the fastest when contrasted with some other recent QBE techniques, they perform image annotation very effectively. The results of applying these processes are accurately annotated database images to which QBC may then be applied. The results of extensive experiments are presented to substantiate the performance advantages of the proposed techniques and to allow them to be compared with other recent high-performance techniques. Additionally, a discussion on merging the proposed techniques into a highly effective annotation system is also provided.
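As an illustration of the range-based weighting idea in item 1 above, the sketch below weights each feature by the inverse of its value range over the user's relevant examples, so that tightly clustered features dominate the distance; this particular weighting rule and the feature values are assumptions, not the dissertation's exact formula.

# Range-weighted similarity sketch: features that vary little across the
# relevant examples receive larger weights. Values are hypothetical.

import numpy as np

def range_weights(relevant, eps=1e-6):
    """Weight each feature by the inverse of its range over relevant examples."""
    spread = relevant.max(axis=0) - relevant.min(axis=0)
    return 1.0 / (spread + eps)

def weighted_distance(query, candidate, weights):
    return float(np.sqrt(np.sum(weights * (query - candidate) ** 2)))

relevant = np.array([[0.20, 0.90],    # hypothetical color/texture features
                     [0.22, 0.10],
                     [0.19, 0.55]])
weights = range_weights(relevant)     # the tight first feature dominates

query = relevant.mean(axis=0)
print(weighted_distance(query, np.array([0.21, 0.40]), weights))  # small
print(weighted_distance(query, np.array([0.60, 0.52]), weights))  # larger: feature 0 differs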
-
Date Issued
-
2004
-
Identifier
-
CFE0000262, ucf:46239
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0000262
-
-
Title
-
Quantifying Trust and Reputation for Defense against Adversaries in Multi-Channel Dynamic Spectrum Access Networks.
-
Creator
-
Bhattacharjee, Shameek, Chatterjee, Mainak, Guha, Ratan, Zou, Changchun, Turgut, Damla, Catbas, Necati, University of Central Florida
-
Abstract / Description
-
Dynamic spectrum access enabled by cognitive radio networks is envisioned to drive the next generation of wireless networks, which can increase spectrum utility by opportunistically accessing unused spectrum. Due to the policy constraint that there can be no interference to the primary (licensed) users, secondary cognitive radios have to continuously sense for primary transmissions. Typically, sensing reports from multiple cognitive radios are fused, as stand-alone observations are prone to errors due to wireless channel characteristics. Such dependence on cooperative spectrum sensing is vulnerable to attacks such as Secondary Spectrum Data Falsification (SSDF) attacks, in which multiple malicious or selfish radios falsify the spectrum reports. Hence, there is a need to quantify the trustworthiness of radios that share spectrum sensing reports and to devise malicious node identification and robust fusion schemes that lead to correct inference about spectrum usage.

In this work, we propose an anomaly monitoring technique that can effectively capture anomalies in the spectrum sensing reports shared by individual cognitive radios during cooperative spectrum sensing in a multi-channel distributed network. Such anomalies are used as evidence to compute the trustworthiness of a radio by its neighbours. The proposed anomaly monitoring technique works for any density of malicious nodes and for any physical environment. We propose an optimistic trust heuristic for a system with a normal risk attitude and show that it can be approximated as a beta distribution. For a more conservative system, we propose a multinomial Dirichlet distribution based conservative trust framework, where Josang's belief model is used to resolve any uncertainty in information that might arise during anomaly monitoring. Using a machine learning approach, we identify malicious nodes with a high degree of certainty regardless of their aggressiveness and the variations introduced by the pathloss environment. We also propose extensions to the anomaly monitoring technique that facilitate learning about the strategies employed by malicious nodes and that utilize the misleading information they provide.

We also devise strategies to defend against a collaborative SSDF attack launched by a coalition of selfish nodes. Since defense against such collaborative attacks is difficult with popularly used voting based inference models or node centric isolation techniques, we propose a channel centric Bayesian inference approach that indicates how much the collective decision on a channel's occupancy inference can be trusted. Based on the measured observations over time, we estimate the parameters of the hypotheses of anomalous and non-anomalous events using a multinomial Bayesian based inference. We quantitatively define the trustworthiness of a channel inference as the difference between the posterior beliefs associated with anomalous and non-anomalous events. The posterior beliefs are updated based on a weighted average of the prior information on the belief itself and the recently observed data.

Subsequently, we propose robust fusion models which utilize the trusts of the nodes to improve the accuracy of the cooperative spectrum sensing decisions. In particular, we propose three fusion models: (i) optimistic trust based fusion, (ii) conservative trust based fusion, and (iii) inversion based fusion. The former two approaches exclude untrustworthy sensing reports from fusion, while the last approach utilizes misleading information. All schemes are analyzed under various attack strategies. We propose an asymmetric weighted moving average based trust management scheme that quickly identifies on-off SSDF attacks and prevents quick trust redemption when such nodes revert to temporarily honest behavior. We also provide insights on which attack strategies are more effective from the adversaries' perspective.

Through extensive simulation experiments we show that the trust models are effective in identifying malicious nodes with a high degree of certainty under a variety of network and radio conditions. We show high true negative detection rates even when multiple malicious nodes launch collaborative attacks, which is an improvement over existing voting based exclusion and entropy divergence techniques. We also show that we are able to improve the accuracy of fusion decisions compared to other popular fusion techniques: trust based fusion schemes show worst case decision error rates of 5% and inversion based fusion shows 4%, as opposed to majority voting schemes with an 18% error rate. We also show that the proposed channel centric Bayesian inference based trust model is able to distinguish between attacked and non-attacked channels for both static and dynamic collaborative attacks. Finally, attacked channels are shown to have significantly lower trust values than channels that are not, a metric that can be used by nodes to rank the quality of inference on channels.
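As a rough illustration of the kind of beta-distribution trust value the optimistic heuristic is shown to approximate, the sketch below maintains counts of anomalous and non-anomalous sensing reports for one radio and uses the Beta(r+1, s+1) mean as its trust; the update rule and the observation stream are illustrative assumptions, not the dissertation's scheme.

# Minimal beta-reputation sketch: r non-anomalous and s anomalous reports
# observed for a radio give trust = (r + 1) / (r + s + 2). Data is made up.

def beta_trust(non_anomalous, anomalous):
    """Expected value of Beta(r + 1, s + 1): a trust value in [0, 1]."""
    r, s = non_anomalous, anomalous
    return (r + 1.0) / (r + s + 2.0)

# hypothetical per-round anomaly flags for one radio (True = anomalous report)
observations = [False, False, True, False, True, True, True]

r = s = 0
for anomalous in observations:
    s += anomalous
    r += not anomalous
    print(f"after {r + s} reports: trust = {beta_trust(r, s):.3f}")
# trust falls below 0.5 as falsified (anomalous) reports accumulate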
-
Date Issued
-
2015
-
Identifier
-
CFE0005764, ucf:50081
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005764
-
-
Title
-
Exploring 3D User Interface Technologies for Improving the Gaming Experience.
-
Creator
-
Kulshreshth, Arun, Laviola II, Joseph, Hughes, Charles, Da Vitoria Lobo, Niels, Masuch, Maic, University of Central Florida
-
Abstract / Description
-
3D user interface technologies have the potential to make games more immersive and engaging and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is still unclear how their usage affects game play and whether there are any user performance benefits. A systematic study of these technologies in game environments is required to understand how game play is affected and how we can optimize their usage in order to achieve a better game play experience. This dissertation seeks to improve the gaming experience by exploring several 3DUI technologies. In this work, we focused on stereoscopic 3D viewing (to improve the viewing experience) coupled with motion based control, head tracking (to make games more engaging), and faster gesture based menu selection (to reduce the cognitive burden associated with menu interaction while playing). We first studied each of these technologies in isolation to understand their benefits for games. We present the results of our experiments evaluating the benefits of stereoscopic 3D (when coupled with motion based control) and head tracking in games. We discuss the reasons behind these findings and provide recommendations for game designers who want to make use of these technologies to enhance gaming experiences. We also present the results of our experiments with finger-based menu selection techniques, with the aim of finding the fastest technique. Based on these findings, we custom designed an air-combat game prototype which simultaneously uses stereoscopic 3D, head tracking, and finger-count shortcuts to show that these technologies can be useful for games if the game is designed with them in mind. Additionally, to enhance depth discrimination and minimize visual discomfort, the game dynamically optimizes stereoscopic 3D parameters (convergence and separation) based on the user's look direction. We conducted a within-subjects experiment in which we examined performance data and self-reported data on users' perception of the game. Our results indicate that participants performed significantly better when all the 3DUI technologies (stereoscopic 3D, head tracking and finger-count gestures) were available simultaneously, with head tracking as a dominant factor. We explore the individual contribution of each of these technologies to the overall gaming experience and discuss the reasons behind our findings. Our experiments indicate that 3D user interface technologies can make the gaming experience better if used effectively. Games must be designed to make use of the available 3D user interface technologies in order to provide a better gaming experience to the user. We explored a few technologies as part of this work and obtained some design guidelines for future game designers. We hope that our work will serve as a framework for future explorations of making games better using 3D user interface technologies.
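As a rough illustration of gaze-driven adjustment of convergence and separation of the kind mentioned above, the sketch below maps the depth of the object under the user's gaze to the two stereo parameters; the mapping and constants are assumptions made for illustration, not the dissertation's actual optimization.

# Simplified gaze-driven stereo tuning: converge at the gazed depth and shrink
# eye separation for near objects to limit discomfort. Constants are made up.

def stereo_params(gaze_depth, max_separation=0.065, comfort_depth=2.0):
    """Return (convergence_distance, eye_separation) for a gaze depth in metres."""
    convergence = max(gaze_depth, 0.1)            # converge where the user looks
    # scale separation down linearly when the gazed object is closer than the
    # comfort depth, so near objects do not produce excessive screen disparity
    scale = min(gaze_depth / comfort_depth, 1.0)
    return convergence, max_separation * scale

for depth in (0.5, 1.0, 2.0, 10.0):
    c, s = stereo_params(depth)
    print(f"gaze depth {depth:4.1f} m -> convergence {c:4.1f} m, separation {s * 1000:.1f} mm")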
-
Date Issued
-
2015
-
Identifier
-
CFE0005643, ucf:50190
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005643