Current Search: computer networking
- Title
- A NEAT APPROACH TO GENETIC PROGRAMMING.
- Creator
-
Rodriguez, Adelein, Wu, Annie, University of Central Florida
- Abstract / Description
-
The evolution of explicitly represented topologies such as graphs involves devising methods for mutating, comparing, and combining structures in meaningful ways, and identifying and maintaining the necessary topological diversity. Research has been conducted in the area of the evolution of trees in genetic programming and of neural networks, and some of these problems have been addressed independently by the different research communities. In the domain of neural networks, NEAT (Neuroevolution of Augmenting Topologies) has been shown to be a successful method for evolving increasingly complex networks. This system's success is based on three interrelated elements: speciation, marking of historical information in topologies, and initializing the search in a space of small structures. These provide the dynamics necessary for exploring diverse solution spaces at once and a way to discriminate between different structures. Although different representations have emerged in the area of genetic programming, the study of the tree representation has remained of interest in great part because of its mapping to programming languages and also because of the observed phenomenon of unnecessary code growth, or bloat, which hinders performance. The structural similarity between trees and neural networks poses an interesting question: is it possible to apply the techniques from NEAT to the evolution of trees, and if so, how does it affect performance and the dynamics of code growth? In this work we address these questions and present techniques analogous to those in NEAT for genetic programming.
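A minimal sketch of the NEAT mechanism the abstract refers to, transplanted to genetic programming: every newly created tree node receives a global innovation number (a historical marking), so structurally matching regions of two trees can be identified and a compatibility distance can drive speciation. All names below are illustrative, not the dissertation's implementation.

```python
import itertools

innovation = itertools.count()  # global historical-marking counter

class Node:
    """GP tree node tagged with a NEAT-style innovation number."""
    def __init__(self, op, children=()):
        self.op = op
        self.children = list(children)
        self.innov = next(innovation)  # assigned once, at creation

def markings(tree):
    """Collect the set of innovation numbers present in a tree."""
    out = {tree.innov}
    for child in tree.children:
        out |= markings(child)
    return out

def compatibility(t1, t2):
    """Jaccard-style distance over historical markings, analogous to
    NEAT's count of excess/disjoint genes for speciation."""
    m1, m2 = markings(t1), markings(t2)
    return len(m1 ^ m2) / len(m1 | m2)

# Two small trees sharing a common ancestor subtree:
shared = Node('x')
t1 = Node('+', [shared, Node('1')])
t2 = Node('*', [shared, Node('2')])
print(compatibility(t1, t2))  # < 1.0: the shared subtree matches by marking
```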
- Date Issued
- 2007
- Identifier
- CFE0001971, ucf:47451
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001971
- Title
- OLDER ADULTS AND ONLINE SOCIAL NETWORKING: RELATING ISSUES OF ATTITUDES, EXPERTISE, AND USE.
- Creator
-
Hernandez, Elise, Smither, Janan, University of Central Florida
- Abstract / Description
-
The social transition to older adulthood can be challenging for elderly individuals and their families when isolation poses a threat to well-being. Technology currently provides younger generations with an opportunity to stay in contact with social partners through online social networking tools; it is unclear whether older adults are also taking advantage of this communication method. This study explored how older adults are experiencing online social networking. Specifically, this research addressed how older adults' attitudes towards online social networking are related to their expertise in using computers and the internet for this purpose. A survey methodology was employed whereby older adults aged 65 and over were recruited from senior centers across the Central Florida area to fill out a series of questionnaires. The Computer Aversion, Attitudes, and Familiarity Index (CAAFI) was used to measure attitudes and expertise with computers. The Internet Technical Literacy and Social Awareness Scale was used to measure interest and expertise with the internet. The relationship between older adults' use of online social networking and their attitudes and expertise was also investigated. Finally, social connectedness (measured using the Social Connectedness Scale) and subjective well-being (measured using the Satisfaction with Life Scale) were measured to explore whether older adults receive a psychosocial benefit from using online social networking. Findings showed that expertise and attitude scores were strongly correlated, and these scores were also predictive of online social networking use. The results of this study may help social service providers for elderly individuals begin to understand the many factors associated with using new forms of technology.
- Date Issued
- 2011
- Identifier
- CFH0004078, ucf:44786
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFH0004078
- Title
- Stochastic-Based Computing with Emerging Spin-Based Device Technologies.
- Creator
-
Bai, Yu, Lin, Mingjie, DeMara, Ronald, Wang, Jun, Jin, Yier, Dong, Yajie, University of Central Florida
- Abstract / Description
-
In this dissertation, analog and emerging device physics is explored to provide a technology platform for designing new bio-inspired systems and novel architectures. As CMOS approaches its physical limits in feature size at the nanoscale, device characteristics will pose severe challenges to constructing robust digital circuitry. Unlike transistor defects due to fabrication imperfection, quantum-related switching uncertainties will seriously increase susceptibility to noise, thus rendering traditional thinking and logic design techniques inadequate. Therefore, the trend of current research is to create a non-Boolean high-level computational model and map it directly onto the unique operational properties of new, power-efficient, nanoscale devices. The focus of this research is two-fold: 1) Investigation of the physical hysteresis switching behavior of the domain wall device. We analyze the device's behavior and identify its hysteresis over a range of currents. We propose a Domain-Wall-Motion-based (DWM) NCL circuit that achieves approximately 30x and 8x improvements in energy efficiency and chip layout area, respectively, over its equivalent CMOS design, while maintaining similar delay performance for a one-bit full adder. 2) Investigation of the physical stochastic switching behavior of the Magnetic Tunnel Junction (MTJ) device. By analyzing the stochastic switching behavior of the MTJ, we propose an innovative stochastic-based architecture for implementing an artificial neural network (S-ANN) with both magnetic tunnel junction (MTJ) and domain wall motion (DWM) devices, which enables efficient computing at an ultra-low voltage. For a well-known pattern recognition task, our mixed-model HSPICE simulation results show that a 34-neuron S-ANN implementation, when compared with its deterministic-based ANN counterparts implemented with digital and analog CMOS circuits, achieves more than 1.5 to 2 orders of magnitude lower energy consumption and 2 to 2.5 orders of magnitude less hidden-layer chip area.
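The stochastic MTJ behavior described above can be caricatured in a few lines: the probability that the junction switches during a current pulse grows with pulse amplitude, which lets the device act as a hardware Bernoulli sampler, i.e. a stochastic binary neuron. The sigmoid switching model and all constants below are a common first-order abstraction chosen for illustration, not the dissertation's device physics.

```python
import math
import random

def mtj_switch_probability(current, threshold=100e-6, sharpness=2e5):
    """First-order model: switching probability rises sigmoidally as the
    write current (amperes) crosses the device threshold.
    Constants are illustrative, not measured device values."""
    return 1.0 / (1.0 + math.exp(-sharpness * (current - threshold)))

def stochastic_neuron(weighted_sum):
    """Fire (1) or not (0) by sampling the MTJ switching event, so the
    neuron's long-run firing rate tracks a sigmoid of its input."""
    p = mtj_switch_probability(weighted_sum)
    return 1 if random.random() < p else 0

# Averaging many trials recovers the underlying activation value:
trials = [stochastic_neuron(105e-6) for _ in range(10_000)]
print(sum(trials) / len(trials))  # ~ sigmoid(1.0) ~ 0.73
```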
- Date Issued
- 2016
- Identifier
- CFE0006680, ucf:51921
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006680
- Title
- Rethinking Routing and Peering in the era of Vertical Integration of Network Functions.
- Creator
-
Dey, Prasun, Yuksel, Murat, Wang, Jun, Ewetz, Rickard, Zhang, Wei, Hasan, Samiul, University of Central Florida
- Abstract / Description
-
Content providers typically control the digital content consumption services and are getting the most revenue by implementing an "all-you-can-eat" model via subscription or hyper-targeted advertisements. Revamping the existing Internet architecture and design, a vertical integration where a content provider and access ISP act as a unibody in a sugarcane form seems to be the recent trend. As this vertical integration trend is emerging in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. It is expected that current routing will need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery. This involves new feature developments for handling traffic with reduced latency, tackling routing scalability issues in a more secure way, and offering new services at cheaper costs. Considering the fact that prices of DRAM or TCAM in legacy routers are not necessarily decreasing at the desired pace, cloud computing can be a great solution to manage the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models and exploring a hybrid approach to SDN, we also compare recent trends in cloud pricing (for both storage and service) to evaluate whether it would be economically beneficial to integrate cloud services with legacy routing for improved cost-efficiency. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of a future in terms of peering between the new emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering (from identifying a list of ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and flagging any violation), one of the many outcroppings of the vertical integration procedure which could be offered to ISPs as a standalone service.
- Date Issued
- 2019
- Identifier
- CFE0007797, ucf:52351
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007797
- Title
- On the design and performance of cognitive packets over wired networks and mobile ad hoc networks.
- Creator
-
Lent, Marino Ricardo, Gelenbe, Erol, Engineering and Computer Science
- Abstract / Description
-
University of Central Florida College of Engineering Thesis; This dissertation studied cognitive packet networks (CPN), which build networked learning systems that support adaptive, quality-of-service-driven routing of packets in wired networks and in wireless, mobile ad hoc networks.
- Date Issued
- 2003
- Identifier
- CFR0001374, ucf:52931
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFR0001374
- Title
- EXAMINING ENGINEERING & TECHNOLOGY STUDENTS ACCEPTANCE OF NETWORK VIRTUALIZATION TECHNOLOGY USING THE TECHNOLOGY ACCEPTANCE MODEL.
- Creator
-
Yousif, Wael K. Yousif, Boote, David, University of Central Florida
- Abstract / Description
-
This causal and correlational study was designed to extend the Technology Acceptance Model (TAM) and to test its applicability to Valencia Community College (VCC) Engineering and Technology students as the target user group when investigating the factors influencing their decision to adopt and to utilize VMware as the target technology. In addition to the primary three indigenous factors (perceived ease of use, perceived usefulness, and intention toward utilization), the model was also extended with enjoyment, external control, and computer self-efficacy as antecedents to perceived ease of use. In an attempt to further increase the explanatory power of the model, the Task-Technology Fit (TTF) constructs were included as antecedents to perceived usefulness. The model was also expanded with subjective norms and voluntariness to assess the degree to which social influences affect students' decisions about adoption and utilization. This study was conducted during the fall term of 2009, using 11 instruments: (1) VMware Tools Functions Instrument; (2) Computer Networking Tasks Characteristics Instrument; (3) Perceived Usefulness Instrument; (4) Voluntariness Instrument; (5) Subjective Norms Instrument; (6) Perceived Enjoyment Instrument; (7) Computer Self-Efficacy Instrument; (8) Perception of External Control Instrument; (9) Perceived Ease of Use Instrument; (10) Intention Instrument; and (11) a Utilization Instrument. The 11 instruments collectively contained 58 items. Additionally, a demographics instrument of six items was included to investigate the influence of age, prior experience with the technology, prior experience in computer networking, academic enrollment status, and employment status on student intentions and behavior with regard to VMware as a network virtualization technology. Data were analyzed using path analysis, regressions, and univariate analysis of variance in SPSS and AMOS for Windows. The results suggest that perceived ease of use was the strongest determinant of student intention. The analysis also suggested that external control, measuring the facilitating conditions (knowledge, resources, etc.) necessary for adoption, was the strongest predictor of perceived ease of use. Consistent with previous studies, perceived ease of use was found to be the strongest predictor of perceived usefulness, followed by subjective norms, as students continued to use the technology. Even though the integration of the task-technology fit construct was not helpful in explaining the variance in students' perceived usefulness of the target technology, it was statistically significant in predicting student perception of ease of use. The study concluded with recommendations to investigate other factors (such as service quality and ease of implementation) that might contribute to explaining the variance in perceived ease of use as the primary driving force in influencing students' decisions about adoption. A recommendation was also made to modify the task-technology fit construct instruments to improve the articulation and the specificity of the task. The need for further examination of the influence of the instructor on students' decisions to adopt a target technology was also emphasized.
- Date Issued
- 2010
- Identifier
- CFE0003071, ucf:48313
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003071
- Title
- A FRAMEWORK FOR EFFICIENT DATA DISTRIBUTION IN PEER-TO-PEER NETWORKS.
- Creator
-
Purandare, Darshan, Guha, Ratan, University of Central Florida
- Abstract / Description
-
Peer-to-Peer (P2P) models are based on user altruism, wherein a user shares its content with other users in the pool while also having an interest in the content of the other nodes. Most P2P systems in their current form are not fair in terms of the content served by a peer and the service obtained from the swarm. Most systems suffer from the free-rider problem, where many high-uplink-capacity peers contribute much more than they should while many others get a free ride for downloading the content. This leaves high-capacity nodes with very little or no motivation to contribute, and many times such resourceful nodes exit the swarm or do not participate at all. The whole scenario is unfavorable and disappointing for P2P networks in general, where participation is a must and a very important feature: as the number of users increases in the swarm, the swarm becomes robust and scalable. Other important issues in present-day P2P systems are below-optimal Quality of Service (QoS) in terms of download time, end-to-end latency and jitter rate, uplink utilization, excessive cross-ISP traffic, and security and cheating threats. These current problems in P2P networks serve as the motivation for the present work. To this end, we present an efficient data distribution framework in Peer-to-Peer (P2P) networks for the media streaming and file sharing domains. The experiments with our model, an alliance-based peering scheme for media streaming, show that such a scheme distributes data to the swarm members in a near-optimal way. Alliances are small groups of nodes that share data and other vital information for symbiotic association. We show that alliance formation is a loosely coupled and effective way to organize the peers, and that our model maps to a small-world network, which forms efficient overlay structures and is robust to network perturbations such as churn. We present a comparative simulation-based study of our model with CoolStreaming/DONet (a popular model) and present a quantitative performance evaluation. Simulation results show that our model scales well under varying workloads and conditions, delivers near-optimal levels of QoS, reduces cross-ISP traffic considerably, and in most cases performs at par with or even better than CoolStreaming/DONet. In the next phase of our work, we focused on the BitTorrent P2P model, as it is the most widely used file sharing protocol. Many studies in academia and industry have shown that although BitTorrent scales very well, it is far from optimal in terms of fairness to end users, download time, and uplink utilization. Furthermore, random peering and data distribution in such a model lead to suboptimal performance. Lately, a new breed of BitTorrent clients like BitTyrant have shown successful strategic attacks against BitTorrent. Strategic peers configure the BitTorrent client software such that, for very little or no contribution, they can obtain good download speeds. Such strategic nodes exploit the altruism in the swarm and consume resources at the expense of other honest nodes, creating an unfair swarm. More unfairness is generated in the swarm with the presence of heterogeneous-bandwidth nodes. We investigate and propose a new token-based anti-strategic policy that could be used in BitTorrent to minimize free-riding by strategic clients.
We also propose other policies against strategic attacks, including a smart tracker that denies the requests of strategic clients for the peer list multiple times, and blacklisting non-behaving nodes that do not follow the protocol policies. These policies help to stop the strategic behavior of peers to a large extent and improve overall system performance. We also quantify and validate the benefits of using a bandwidth peer-matching policy. Our simulation results show that with the above proposed changes, uplink utilization and mean download time in the BitTorrent network improve considerably. It leaves strategic clients with little or no incentive to behave greedily. This reduces free riding and creates a fairer swarm with very little computational overhead. Finally, we show that ours is a self-healing model in which user behavior changes from selfish to altruistic in the presence of the aforementioned policies.
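A toy sketch of the kind of token accounting the abstract proposes: peers earn tokens by uploading and spend them to download, so a strategic client that contributes nothing quickly exhausts its balance and is choked. The rates and the initial grant are hypothetical parameters, not the dissertation's calibrated values.

```python
class TokenLedger:
    """Per-peer token accounting to discourage free-riding:
    uploading earns tokens, downloading spends them."""
    def __init__(self, initial_grant=10):
        self.balance = {}
        self.initial_grant = initial_grant  # lets newcomers bootstrap

    def register(self, peer):
        self.balance.setdefault(peer, self.initial_grant)

    def credit_upload(self, peer, blocks):
        self.balance[peer] += blocks

    def request_download(self, peer, blocks):
        """Serve the request only if the peer can pay; otherwise choke."""
        if self.balance[peer] >= blocks:
            self.balance[peer] -= blocks
            return True   # unchoked: request served
        return False      # choked: the peer must contribute first

ledger = TokenLedger()
for p in ("honest", "strategic"):
    ledger.register(p)
ledger.credit_upload("honest", 50)               # honest peer uploads 50 blocks
print(ledger.request_download("honest", 40))     # True
print(ledger.request_download("strategic", 40))  # False: exceeds initial grant
```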
- Date Issued
- 2008
- Identifier
- CFE0002260, ucf:47864
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0002260
- Title
- Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics.
- Creator
-
Alawad, Mohammed, Lin, Mingjie, DeMara, Ronald, Mikhael, Wasfy, Wang, Jun, Das, Tuhin, University of Central Florida
- Abstract / Description
-
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency if the traditional "deterministic and precise" computing model still dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from an uncertain, imperfect real-world environment. As such, the traditional computing means of first-principle modeling or explicit statistical modeling will very likely be ineffective for achieving flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing (that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra) has to be reexamined, and transformative changes in the foundation of modern computing must be made. This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simplified circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded with a large ensemble of random samples. As such, local perturbations of its computing accuracy will be dissipated globally, thus becoming inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable-precision systems, which provides an elegant way to trade off between computing accuracy and computing performance/hardware efficiency for many real-world applications. To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented with FPGA based on a novel stochastic-based and scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation, but can also solve several challenges in conventional CNNs, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well-suited for a modular vision engine with the goal of performing real-time detection, recognition, and segmentation of mega-pixel images, especially those perception-based computing tasks that are inherently fault-tolerant.
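The flavor of probabilistic-domain computation described above can be illustrated with the textbook stochastic-computing trick: encode values in [0, 1] as the ones-density of random bitstreams, and multiplication reduces to a bitwise AND of independent streams. This is a generic illustration of computing on probability encodings, not the PDT pipeline itself.

```python
import random

def encode(value, n_bits=100_000):
    """Encode a value in [0, 1] as a Bernoulli bitstream whose
    ones-density equals the value."""
    return [1 if random.random() < value else 0 for _ in range(n_bits)]

def decode(stream):
    """Estimate the encoded value as the empirical ones-density."""
    return sum(stream) / len(stream)

def multiply(a_stream, b_stream):
    """AND of two independent streams: P(a=1 and b=1) = P(a)P(b),
    so a single gate per bit performs a multiplication."""
    return [a & b for a, b in zip(a_stream, b_stream)]

a, b = encode(0.6), encode(0.5)
print(decode(multiply(a, b)))  # ~ 0.30, up to sampling noise
```

The fault tolerance claimed in the abstract is visible even here: flipping a few bits of either stream perturbs the decoded product only by a fraction of the stream length.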
- Date Issued
- 2016
- Identifier
- CFE0006828, ucf:51768
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006828
- Title
- Functional Scaffolding for Musical Composition: A New Approach in Computer-Assisted Music Composition.
- Creator
-
Hoover, Amy, Stanley, Kenneth, Wu, Annie, Laviola II, Joseph, Anderson, Thaddeus, University of Central Florida
- Abstract / Description
-
While it is important for systems intended to enhance musical creativity to define and explore musical ideas conceived by individual users, many limit musical freedom by focusing on maintaining musical structure, thereby impeding the user's freedom to explore his or her individual style. This dissertation presents a comprehensive body of work that introduces a new musical representation allowing users to explore a space of musical rules created from their own melodies. This representation, called functional scaffolding for musical composition (FSMC), exploits a simple yet powerful property of multipart compositions: the patterns of notes and rhythms in different instrumental parts of the same song are functionally related. That is, in principle, one part can be expressed as a function of another. Music in FSMC is represented accordingly as a functional relationship between an existing human composition, or scaffold, and an additional generated voice. This relationship is encoded by a type of artificial neural network called a compositional pattern producing network (CPPN). A human user without any musical expertise can then explore how these additional generated voices should relate to the scaffold through an interactive evolutionary process akin to animal breeding. The utility of this insight is validated by two implementations of FSMC, called NEAT Drummer and MaestroGenesis, which respectively help users tailor drum patterns and complete multipart arrangements from as little as a single original monophonic track. The five major contributions of this work address the overarching hypothesis of this dissertation: that functional relationships alone, rather than specialized music theory, are sufficient for generating plausible additional voices. First, to validate FSMC and determine whether plausible generated voices result from the human-composed scaffold or from intrinsic properties of the CPPN, drum patterns are created with NEAT Drummer to accompany several different polyphonic pieces. Extending the FSMC approach to generate pitched voices, the second contribution reinforces the importance of functional transformations through quality assessments indicating that some partially FSMC-generated pieces are indistinguishable from those that are fully human-composed. While the third contribution focuses on constructing and exploring a space of plausible voices with MaestroGenesis, the fourth presents results from a two-year study in which students discuss their creative experience with the program. Finally, the fifth contribution is a plugin for MaestroGenesis called MaestroGenesis Voice (MG-V) that provides users a more natural way to incorporate MaestroGenesis into their creative endeavors by allowing scaffold creation through the human voice. Together, the chapters in this dissertation constitute a comprehensive approach to assisted music generation, enabling creativity without the need for musical expertise.
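The central FSMC idea, one voice expressed as a function of another, can be shown in miniature: take a scaffold melody as MIDI pitches and generate an accompanying voice by applying a fixed transformation to each scaffold note. The particular function below is an arbitrary hand-written stand-in for the evolved CPPN described in the abstract.

```python
# Scaffold: a human-composed melody as MIDI note numbers (C major phrase).
scaffold = [60, 62, 64, 65, 67, 65, 64, 62]

def generated_voice(pitch, position):
    """Accompaniment as a function of the scaffold: here, a third below,
    dropping further on every fourth note. In FSMC this mapping would be
    an evolved CPPN rather than a hand-written rule."""
    interval = -3 if position % 4 else -15
    return pitch + interval

accompaniment = [generated_voice(p, i) for i, p in enumerate(scaffold)]
print(accompaniment)
# Because the output is a function of the scaffold, its contour follows
# the melody, which is what keeps the generated part musically plausible.
```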
- Date Issued
- 2014
- Identifier
- CFE0005350, ucf:50495
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0005350
- Title
- Solving Constraint Satisfaction Problems with Matrix Product States.
- Creator
-
Pelton, Sabine, Mucciolo, Eduardo, Ishigami, Masa, Leuenberger, Michael, University of Central Florida
- Abstract / Description
-
In the past decade, Matrix Product State (MPS) algorithms have emerged as an efficient method of modeling some many-body quantum spin systems. Since spin system Hamiltonians can be considered constraint satisfaction problems (CSPs), it follows that MPS should provide a versatile framework for studying a variety of general CSPs. In this thesis, we apply MPS to two types of CSP. First, we use MPS to simulate adiabatic quantum computation (AQC), where the target Hamiltonians are instances of a fully connected, random Ising spin glass. Results of the simulations help shed light on why AQC fails for some optimization problems. We then present the novel application of a modified MPS algorithm to classical Boolean satisfiability problems, specifically k-SAT and max k-SAT. By construction, the algorithm also counts solutions to a given Boolean formula (#-SAT). For easy satisfiable instances, the method is more expensive than other existing algorithms; however, for hard and unsatisfiable instances, the method succeeds in finding satisfying assignments where other algorithms fail to converge.
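To make the MPS representation concrete: an MPS stores one matrix per site and per local state, and the amplitude of any basis configuration is a product of the selected matrices. The sketch below evaluates amplitudes this way; the bond dimension and tensor values are made up for illustration, and real MPS algorithms for CSPs would additionally contract, truncate, and optimize these tensors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, bond_dim = 4, 3

# One matrix per (site, local bit value); boundary sites use row/column vectors.
mps = [{b: rng.normal(size=(1 if i == 0 else bond_dim,
                            1 if i == n_sites - 1 else bond_dim))
        for b in (0, 1)}
       for i in range(n_sites)]

def amplitude(bits):
    """Amplitude of a basis state = product of the matrices selected
    by each site's bit, contracted left to right."""
    result = mps[0][bits[0]]
    for site, b in enumerate(bits[1:], start=1):
        result = result @ mps[site][b]
    return result.item()  # 1x1 matrix -> scalar

print(amplitude((0, 1, 1, 0)))
# Counting-style quantities (e.g., #-SAT weights) come from summing such
# amplitudes over configurations, which MPS performs by tensor contraction
# instead of explicit enumeration.
```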
- Date Issued
- 2017
- Identifier
- CFE0006902, ucf:51713
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006902
- Title
- HIGH PERFORMANCE DATA MINING TECHNIQUES FOR INTRUSION DETECTION.
- Creator
-
Siddiqui, Muazzam Ahmed, Lee, Joohan, University of Central Florida
- Abstract / Description
-
The rapid growth of computers transformed the way in which information and data are stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed which scrutinize data for deviations from the normal behavior of a user or system, or search for a known signature within the data. These systems are termed Intrusion Detection Systems (IDS), and they employ techniques varying from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software, or network devices. These sources produce huge datasets, with tens of millions of records in them. To analyze this data, data mining is used, which is a process of digging useful patterns out of a large bulk of information. A major obstacle in the process is that traditional data mining and learning algorithms are overwhelmed by the volume and complexity of the available data. This makes these algorithms impractical for time-critical tasks like intrusion detection because of their large execution times. Our approach to this issue makes use of high performance data mining techniques to expedite the process by exploiting the parallelism in existing data mining algorithms and the underlying hardware. We show how high performance and parallel computing can be used to scale data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks. We evaluate the performance of the developed models in terms of speedup over traditional algorithms, prediction rate, and false alarm rate. Our results showed that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, which make them well suited for time-critical tasks like intrusion detection.
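A minimal data-parallel sketch of the scaling strategy the abstract describes: partition the records across workers, compute partial results in parallel, and combine them. Here a logistic-regression gradient stands in for backpropagation, and process pools stand in for the MPI cluster; all of that is a simplifying assumption for illustration.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_gradient(args):
    """Gradient of the logistic loss on one shard of the audit data."""
    X, y, w = args
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (preds - y) / len(y)

def parallel_step(X, y, w, n_workers=4, lr=0.1):
    """Split records across workers, average shard gradients, update."""
    shards = zip(np.array_split(X, n_workers),
                 np.array_split(y, n_workers),
                 [w] * n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(partial_gradient, shards))
    return w - lr * np.mean(grads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100_000, 20))   # stand-in audit records
    y = (X[:, 0] > 0).astype(float)      # stand-in intrusion labels
    w = np.zeros(20)
    for _ in range(10):
        w = parallel_step(X, y, w)
    print(w[:3])  # the weight on the informative feature grows
```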
- Date Issued
- 2004
- Identifier
- CFE0000056, ucf:46142
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0000056
- Title
- Visionary Ophthalmics: Confluence of Computer Vision and Deep Learning for Ophthalmology.
- Creator
-
Morley, Dustin, Foroosh, Hassan, Bagci, Ulas, Gong, Boqing, Mohapatra, Ram, University of Central Florida
- Abstract / Description
-
Ophthalmology is a medical field ripe with opportunities for meaningful application of computer vision algorithms. The field utilizes data from multiple disparate imaging techniques, ranging from conventional cameras to tomography, comprising a diverse set of computer vision challenges. Computer vision has a rich history of techniques that can adequately meet many of these challenges. However, the field has undergone something of a revolution in recent times as deep learning techniques have sprung into the forefront following advances in GPU hardware. This development raises important questions regarding how best to leverage insights from both modern deep learning approaches and more classical computer vision approaches for a given problem. In this dissertation, we tackle challenging computer vision problems in ophthalmology using methods from all across this spectrum. Perhaps our most significant work is a highly successful iris registration algorithm for use in laser eye surgery. This algorithm relies on matching features extracted from the structure tensor and a Gabor wavelet, a classically driven approach that does not utilize modern machine learning. However, drawing on insight from the deep learning revolution, we demonstrate successful application of backpropagation to optimize the registration significantly faster than the alternative of relying on finite differences. Towards the other end of the spectrum, we also present a novel framework for improving RANSAC segmentation algorithms by utilizing a convolutional neural network (CNN) trained on a RANSAC-based loss function. Finally, we apply state-of-the-art deep learning methods to solve the problem of pathological fluid detection in optical coherence tomography images of the human retina, using a novel retina-specific data augmentation technique to greatly expand the data set. Altogether, our work demonstrates the benefits of applying a holistic view of computer vision, one which leverages deep learning and associated insights without neglecting techniques and insights from the previous era.
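For readers unfamiliar with the RANSAC component mentioned above, here is the canonical loop in a few lines, fitting a line to points contaminated by outliers: repeatedly fit a model to a random minimal sample and keep the one with the largest inlier consensus. This is the generic algorithm, not the dissertation's retina-specific segmentation.

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=0.1, seed=2):
    """Fit y = m*x + c robustly: the best model has the most inliers."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if x1 == x2:
            continue
        m = (y2 - y1) / (x2 - x1)                 # minimal-sample fit
        c = y1 - m * x1
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = int((residuals < tol).sum())    # consensus score
        if inliers > best_inliers:
            best_model, best_inliers = (m, c), inliers
    return best_model

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 300)
pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.03, 300)])
pts[:60, 1] = rng.uniform(0, 25, 60)              # 20% gross outliers
print(ransac_line(pts))  # ~ (2.0, 1.0) despite the outliers
```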
- Date Issued
- 2018
- Identifier
- CFE0007058, ucf:52001
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007058
- Title
- Enhancing Cognitive Algorithms for Optimal Performance of Adaptive Networks.
- Creator
-
Lugo-Cordero, Hector, Guha, Ratan, Wu, Annie, Stanley, Kenneth, University of Central Florida
- Abstract / Description
-
This research proposes to enhance some evolutionary algorithms in order to obtain optimal and adaptive network configurations. Due to their richness in technologies, low cost, and application usages, we consider Heterogeneous Wireless Mesh Networks. In particular, we evaluate the domains of network deployment, smart grids/homes, and intrusion detection systems. With an adaptive network as one of the goals, we consider a robust, noise-tolerant methodology that can quickly react to changes in the environment. Furthermore, the diversity of the performance objectives considered (e.g., power, coverage, anonymity, etc.) makes the objective function non-continuous and therefore without a derivative. For these reasons, we enhance the Particle Swarm Optimization (PSO) algorithm with elements that aid in exploring for better configurations, to obtain optimal and sub-optimal configurations. According to the results, the enhanced PSO promotes population diversity, leading to more unique optimal configurations for adapting to dynamic environments. The gradual complexification process produced simpler optimal solutions than those obtained via trial and error without the enhancements. Configurations obtained by the modified PSO are further tuned in real time upon environment changes. Such tuning occurs with a Fuzzy Logic Controller (FLC), which models human decision making by monitoring certain events in the algorithm. Examples of such events include diversity and quality of solutions in the environment. The FLC is able to adapt the enhanced PSO to changes in the environment, causing more exploration or exploitation as needed. By adding a Probabilistic Neural Network (PNN) classifier, the enhanced PSO is again used as a filter to aid in intrusion detection classification. This approach reduces misclassifications by consulting neighbors for classification in the case of ambiguous samples. The performance on ambiguous votes with PSO filtering shows an improvement in classification, enabling the simple classifier to perform better than commonly used classifiers.
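A compact sketch of the underlying PSO update the abstract builds on, with one of the adaptations it hints at: an inertia weight that is loosened when the swarm's diversity collapses, encouraging renewed exploration. The diversity rule and all constants below are a crude stand-in for the fuzzy logic controller described in the abstract, and the sphere function stands in for a real network-configuration objective.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):   # stand-in objective for scoring a network configuration
    return float(np.sum(x ** 2))

n, dim = 20, 5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for step in range(200):
    # Diversity = mean distance to the swarm centroid; if it collapses,
    # raise inertia to push particles back toward exploration.
    diversity = float(np.mean(np.linalg.norm(pos - pos.mean(0), axis=1)))
    w = 0.9 if diversity < 0.5 else 0.4
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(sphere(gbest))  # small: the swarm homed in on the optimum
```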
- Date Issued
- 2018
- Identifier
- CFE0007046, ucf:52003
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007046
- Title
- Game-Theoretic Frameworks and Strategies for Defense Against Network Jamming and Collocation Attacks.
- Creator
-
Hemida, Ahmed, Atia, George, Simaan, Marwan, Vosoughi, Azadeh, Sukthankar, Gita, Guirguis, Mina, University of Central Florida
- Abstract / Description
-
Modern networks are becoming increasingly complex, heterogeneous, and densely connected. While more diverse services are enabled to an ever-increasing number of users through ubiquitous networking and pervasive computing, several important challenges have emerged. For example, densely connected networks are prone to higher levels of interference, which makes them more vulnerable to jamming attacks. Also, the utilization of software-based protocols to perform routing, load balancing, and power management functions in Software-Defined Networks gives rise to more vulnerabilities that could be exploited by malicious users and adversaries. Moreover, the increased reliance on cloud computing services, due to a growing demand for communication and computation resources, poses formidable security challenges due to the shared nature and virtualization of cloud computing. In this thesis, we study two types of attacks: jamming attacks on wireless networks and side-channel attacks on cloud computing servers. The former disrupt the natural network operation by exploiting the static topology and dynamic channel assignment in wireless networks, while the latter seek to gain access to unauthorized data by co-residing with target virtual machines (VMs) on the same physical node in a cloud server. In both attacks, the adversary faces a static attack surface and achieves her illegitimate goal by exploiting a stationary aspect of the network functionality. Hence, this dissertation proposes and develops counter-approaches to both attacks using moving target defense strategies. We study the strategic interactions between the adversary and the network administrator within a game-theoretic framework. First, in the context of jamming attacks, we present and analyze a game-theoretic formulation between the adversary and the network defender. In this problem, the attack surface is the network connectivity (the static topology), as the adversary jams a subset of nodes to increase the level of interference in the network. On the other side, the defender makes judicious adjustments of the transmission footprint of the various nodes, thereby continuously adapting the underlying network topology to reduce the impact of the attack. The defender's strategy is based on playing Nash equilibrium strategies that secure a worst-case network utility. Moreover, scalable decomposition-based approaches are developed, yielding a scalable defense strategy whose performance closely approaches that of the non-decomposed game for large-scale and dense networks. We study a class of games considering discrete as well as continuous power levels. In the second problem, we consider multi-tenant clouds, where a number of VMs are typically collocated on the same physical machine to optimize performance and power consumption and maximize profit. This increases the risk of a malicious virtual machine performing side-channel attacks and leaking sensitive information from neighboring VMs. The attack surface in this case is the static residency of VMs on a set of physical nodes, hence we develop a timed-migration defense approach. Specifically, we analyze a timing game in which the cloud provider decides when to migrate a VM to a different physical machine to mitigate the risk of being compromised by a collocated malicious VM. The adversary decides the rate at which she launches new VMs to collocate with the victim VMs.
Our formulation captures a data leakage model in which the cost incurred by the cloud provider depends on the duration of collocation with malicious VMs. It also captures costs incurred by the adversary in launching new VMs and by the defender in migrating VMs. We establish sufficient conditions for the existence of Nash equilibria for general cost functions, as well as for specific instantiations, and characterize the best response for both players. Furthermore, we extend our model to characterize its impact on the attacker's payoff when the cloud utilizes intrusion detection systems that detect side-channel attacks. Our theoretical findings are corroborated with extensive numerical results in various settings as well as a proof-of-concept implementation in a realistic cloud setting.
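The worst-case-securing Nash strategy mentioned above can be computed, for a finite zero-sum formulation, with the standard linear program: maximize the guaranteed value v subject to the defender's mixed strategy earning at least v against every pure jammer response. Below is that textbook LP solved with scipy on a made-up 3x3 defender/jammer payoff matrix; it is a generic solver sketch, not the dissertation's decomposed game.

```python
import numpy as np
from scipy.optimize import linprog

# M[i, j]: defender utility when the defender plays i and the jammer plays j.
M = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0]])

n = M.shape[0]
# Variables: defender mixed strategy x (n values) and game value v.
# Maximize v  <=>  minimize -v.
c = np.zeros(n + 1)
c[-1] = -1.0
# For every jammer column j: sum_i x_i * M[i, j] >= v  ->  -M^T x + v <= 0.
A_ub = np.hstack([-M.T, np.ones((M.shape[1], 1))])
b_ub = np.zeros(M.shape[1])
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, 1)] * n + [(None, None)]                 # v is unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:n], res.x[-1]
print(x, v)  # equilibrium mix and the worst-case utility it secures
```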
- Date Issued
- 2019
- Identifier
- CFE0007468, ucf:52677
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007468
- Title
- Network Partitioning in Distributed Agent-Based Models.
- Creator
-
Petkova, Antoniya, Deo, Narsingh, Hughes, Charles, Bassiouni, Mostafa, Shaykhian, Gholam, University of Central Florida
- Abstract / Description
-
Agent-Based Models (ABMs) are an emerging simulation paradigm for modeling complex systems, comprised of autonomous, possibly heterogeneous, interacting agents. The utility of ABMs lies in their ability to represent such complex systems as self-organizing networks of agents. Modeling and understanding the behavior of complex systems usually occurs at large and representative scales, and often obtaining and visualizing simulation results in real time is critical. The real-time requirement necessitates the use of in-memory computing, as it is difficult and challenging to handle the latency and unpredictability of disk accesses. Combining this observation with the scale requirement emphasizes the need to use parallel and distributed computing platforms, such as MPI-enabled CPU clusters. Consequently, the agent population must be "partitioned" across different CPUs in a cluster. Further, the typically high volume of interactions among agents can quickly become a significant bottleneck for real-time or large-scale simulations. The problem is exacerbated if the underlying ABM network is dynamic and the inter-process communication evolves over the course of the simulation. Therefore, it is critical to develop topology-aware partitioning mechanisms to support such large simulations. In this dissertation, we demonstrate that distributed agent-based model simulations benefit from the use of graph partitioning algorithms that involve a local, neighborhood-based perspective. Such methods do not rely on global accesses to the network and thus are more scalable. In addition, we propose two partitioning schemes that consider the bottom-up, individual-centric nature of agent-based modeling. The first technique utilizes label-propagation community detection to partition the dynamic agent network of an ABM. We propose a latency-hiding, seamless integration of community detection into the dynamics of a distributed ABM. To achieve this integration, we exploit the similarity in the process flow patterns of a label-propagation community-detection algorithm and self-organizing ABMs. In the second partitioning scheme, we apply a combination of the Guided Local Search (GLS) and Fast Local Search (FLS) metaheuristics in the context of graph partitioning. The main driving principle of GLS is the dynamic modification of the objective function to escape local optima. The algorithm augments the objective of a local search, thereby transforming the landscape structure and escaping a local optimum. FLS is a local search heuristic algorithm aimed at reducing the search space of the main search algorithm. It breaks down the space into sub-neighborhoods such that inactive sub-neighborhoods are removed from the search process. The combination of GLS and FLS allowed us to design a graph partitioning algorithm that is both scalable and sensitive to the inherent modularity of real-world networks.
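The first partitioning scheme rests on label-propagation community detection, which is simple enough to show whole: every node starts in its own community and repeatedly adopts the label most common among its neighbors, so only local information is ever consulted. This is the generic algorithm; it omits the latency-hiding integration with the ABM step that the abstract describes.

```python
import random
from collections import Counter

def label_propagation(adjacency, max_rounds=50, seed=5):
    """adjacency: dict node -> list of neighbor nodes.
    Returns node -> community label using only local updates."""
    rng = random.Random(seed)
    labels = {v: v for v in adjacency}     # each node starts alone
    nodes = list(adjacency)
    for _ in range(max_rounds):
        rng.shuffle(nodes)                 # asynchronous update order
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adjacency[v])
            top = max(counts.values())
            best = min(l for l, c in counts.items() if c == top)  # tie-break
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:                    # converged: labels are stable
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4):
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [5, 6, 7, 3], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
print(label_propagation(adj))  # typically: one label per clique
```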
- Date Issued
- 2017
- Identifier
- CFE0006903, ucf:51706
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0006903
- Title
- MODELING, DESIGN AND EVALUATION OF NETWORKING SYSTEMS AND PROTOCOLS THROUGH SIMULATION.
- Creator
-
Lacks, Daniel, Kocak, Taskin, University of Central Florida
- Abstract / Description
-
Computer modeling and simulation is a practical way to design and test a system without actually having to build it. Simulation has many benefits which apply to many different domains: it reduces the cost of creating different prototypes for mechanical engineers, increases the safety of chemical engineers exposed to dangerous chemicals, speeds up the time to model physical reactions, and trains soldiers to prepare for battle. The motivation behind this work is to build a common software framework that can be used to create new networking simulators on top of an HLA-based federation for distributed simulation. The goals are to model and simulate networking architectures and protocols by developing a common underlying simulation infrastructure, and to reduce the time a developer has to spend learning the semantics of message passing and time management, freeing more time for experimentation, data collection, and reporting. This is accomplished by evolving the simulation engine through three different applications that model three different types of network protocols. Computer networking is a good candidate for simulation because the Internet's rapid growth has spawned the need for new protocols and algorithms, along with the desire for a common infrastructure to model them. One simulation, the 3DInterconnect simulator, simulates data transmission through a hardware k-ary n-cube network interconnect. Performance results show that k-ary n-cube topologies can sustain higher traffic loads than the currently used interconnects. The second simulator, the Cluster Leader Logic Algorithm Simulator, simulates an ad-hoc wireless routing protocol that uses a data distribution methodology based on the GPS-QHRA routing protocol. The CLL algorithm can realize a maximum of 45% power savings and up to 25% reduced queuing delay compared to GPS-QHRA. The third simulator simulates a grid resource discovery protocol that helps Virtual Organizations find resources on a grid network on which to compute or store data. Results show that in the worst case 99.43% of the discovery messages are able to find a resource provider to use for computation. The simulation engine was then built to perform basic HLA operations. Results show successful HLA functions, including creating, joining, and resigning from a federation, time management, and event publication and subscription.
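To make the interconnect concrete: in a k-ary n-cube, each node is an n-digit base-k coordinate, and links run to the nodes one step away (mod k) in each dimension. A tiny helper for enumerating those neighbors, of the kind a simulator like the one described would need, is sketched below; it is illustrative, not the simulator's code.

```python
def neighbors(node, k, n):
    """Neighbors of an n-digit base-k coordinate in a k-ary n-cube:
    +/-1 (mod k) in each dimension -> 2n neighbors (for k > 2)."""
    result = []
    for dim in range(n):
        for step in (1, -1):
            coord = list(node)
            coord[dim] = (coord[dim] + step) % k  # wraparound link
            result.append(tuple(coord))
    return result

# 4-ary 2-cube (a 4x4 torus): node (0, 0) has four neighbors.
print(neighbors((0, 0), k=4, n=2))
# [(1, 0), (3, 0), (0, 1), (0, 3)]
```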
- Date Issued
- 2007
- Identifier
- CFE0001887, ucf:47399
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001887
- Title
- MYSPACE OR OURSPACE: A MEDIA SYSTEM DEPENDENCY VIEW OF MYSPACE.
- Creator
-
Schrock, Andrew, Brown, Timothy, University of Central Florida
- Abstract / Description
-
MySpace is a type of "social networking" website where people meet, socialize, and create friendships. The way MySpace members, particularly younger individuals, interact online underscores the changing nature of mass media. Media system dependency theory states that individuals become reliant on media in their daily life because of fundamental human goals. This reliance, termed a dependency, leads to repeated use. Media system dependency was applied in the current study to explain how and why individuals become habitual MySpace users. To attain results, a survey was administered to a convenience sample of 401 adult undergraduates at the University of Central Florida. Members reported that MySpace dependency had a moderate correlation with MySpace use, and they actively used the website an average of 1.3 hours per day. Results indicated that members use MySpace primarily to satisfy play and interaction orientation dependencies. MySpace use was found to correlate with the number of MySpace friends; the number of friends created, in turn, correlated with MySpace dependency, as people returned to interact with their friends. Individual factors were also found to be a source of influence on MySpace dependency: demographics, psychological factors related to use of the Internet, and psychological factors related to use of MySpace. Factors related to MySpace, extroversion and self-disclosure, were positively correlated with intensity of dependency. The influence of factors related to the Internet was partly supported; computer self-efficacy was not significantly related to MySpace dependency, while computer anxiety was. Speed of connection to the Internet and available time to use the Internet were not related to MySpace dependency. Additionally, significant differences were found between genders in overall dependency, extroversion, self-disclosure, computer anxiety, and computer self-efficacy. These findings provide evidence that MySpace members were little, if at all, constrained by factors related to use of the Internet, but were attracted to the website for similar reasons as real-life relationships. Finally, MySpace is just one of a large number of online resources that are predominantly social, such as email, message boards, and online chat. This study found that, through a "technology cluster," MySpace members use these other social innovations more frequently than non-members. However, members also used significantly more non-social innovations, which may indicate that MySpace members are part of a larger technology cluster than anticipated, or perhaps are in the same category of innovation adopter.
- Date Issued
- 2006
- Identifier
- CFE0001451, ucf:47057
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0001451
- Title
- VIRTUALIZATION AND SELF-ORGANIZATION FOR UTILITY COMPUTING.
- Creator
-
Saleh, Mehdi, Marinescu, Dan, University of Central Florida
- Abstract / Description
-
We present an alternative paradigm for utility computing when the delivery of service is subject to binding contracts; the solution we propose is based on resource virtualization and a self-management scheme. A virtual cloud aggregates a set of virtual machines to work in concert on the tasks specified by the service agreement. A first step in the establishment of a virtual cloud is to create a scale-free overlay network through a biased random walk; scale-free networks enjoy a set of remarkable properties, such as robustness against random failures, favorable scaling, resilience to congestion, small diameter, and small average path length. Constraints such as limits on the cost per unit of service, limits on the total cost, or the requirement to use only "green" computing cycles are then considered when a node of this overlay network decides whether or not to join the virtual cloud. A virtual cloud consists of a subset of the nodes assigned to the tasks specified by a Service Level Agreement (SLA), as well as a virtual interconnection network, or overlay network, for the virtual cloud. SLAs could serve as a congestion control mechanism for an organization providing utility computing; this mechanism allows the system to reject new contracts when there is a danger of overloading the system and failing to fulfill existing contractual obligations. The objective of this thesis is to show that biased random walks in power-law networks are capable of responding to dynamic changes of the workload in utility computing.
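The overlay construction above hinges on a biased random walk, in which the walker prefers high-degree neighbors so that well-connected nodes are visited (and attached to) more often, pushing the overlay toward a scale-free degree distribution. A bare-bones version of such a walk follows, with the bias exponent as a hypothetical tuning knob rather than the thesis's parameterization.

```python
import random

def biased_walk(adjacency, start, steps, bias=1.0, seed=6):
    """Walk where the next hop is drawn with probability proportional
    to degree**bias; bias > 0 favors hubs, bias = 0 is an unbiased walk."""
    rng = random.Random(seed)
    path, node = [start], start
    for _ in range(steps):
        nbrs = adjacency[node]
        weights = [len(adjacency[u]) ** bias for u in nbrs]
        node = rng.choices(nbrs, weights=weights)[0]
        path.append(node)
    return path

# A hub (node 0) attached to five leaves, plus a ring among the leaves:
adj = {0: [1, 2, 3, 4, 5],
       1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4]}
walk = biased_walk(adj, start=1, steps=1000)
print(walk.count(0) / len(walk))  # the hub is visited disproportionately
```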
- Date Issued
- 2011
- Identifier
- CFE0003725, ucf:48768
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0003725
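Below is a minimal Python sketch of one way a degree-biased random walk can grow a scale-free overlay of the kind the abstract above describes: a joining node walks the existing overlay, favoring high-degree neighbors, and attaches where the walk ends, approximating preferential attachment. The walk length, ring bootstrap, and single-edge attachment are illustrative assumptions; the thesis's actual join protocol may differ.

import random
from collections import defaultdict

def biased_walk(adj, start, steps=8):
    """Walk `steps` hops, choosing each next node with probability
    proportional to its degree, so high-degree nodes attract the walker."""
    node = start
    for _ in range(steps):
        nbrs = adj[node]
        weights = [len(adj[n]) for n in nbrs]
        node = random.choices(nbrs, weights=weights, k=1)[0]
    return node

adj = defaultdict(list)
# Bootstrap with a small ring so every node has neighbors to walk over.
for i in range(3):
    adj[i].append((i + 1) % 3)
    adj[(i + 1) % 3].append(i)

# Each joining node starts a biased walk from a random existing node
# and links to wherever the walk terminates.
for new in range(3, 2000):
    target = biased_walk(adj, start=random.randrange(new))
    adj[new].append(target)
    adj[target].append(new)

degrees = sorted((len(v) for v in adj.values()), reverse=True)
print("top degrees:", degrees[:10])  # a heavy tail suggests a power-law degree distribution

The degree bias is what yields the scale-free structure: nodes that are already well connected are disproportionately likely to end a walk, and thus to gain further links.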
- Title
- Quality Diversity: Harnessing Evolution to Generate a Diversity of High-Performing Solutions.
- Creator
-
Pugh, Justin, Stanley, Kenneth, Wu, Annie, Sukthankar, Gita, Garibay, Ivan, University of Central Florida
- Abstract / Description
-
Evolution in nature has designed countless solutions to innumerable interconnected problems, giving birth to the impressive array of complex modern life observed today. Inspired by this success, the practice of evolutionary computation (EC) abstracts evolution artificially as a search operator to find solutions to problems of interest, primarily through the adaptive mechanism of survival of the fittest, where stronger candidates are pursued at the expense of weaker ones until a solution of satisfying quality emerges. At the same time, research in open-ended evolution (OEE) draws different lessons from nature, seeking to identify and recreate processes that lead to the type of perpetual innovation and indefinitely increasing complexity observed in natural evolution. New algorithms in EC such as MAP-Elites and Novelty Search with Local Competition harness the toolkit of evolution for a related purpose: finding as many types of good solutions as possible (rather than merely the single best solution). With the field in its infancy, no empirical studies previously existed comparing these so-called quality diversity (QD) algorithms. This dissertation (1) contains the first extensive and methodical effort to compare different approaches to QD (including both existing published approaches and some new methods presented for the first time here) and to understand how they operate, to help inform better approaches in the future. It also (2) introduces a new technique for evolving indirectly encoded neural networks that contain multiple sensory or output modalities. Further, it (3) explores the idea that QD can act as an engine of open-ended discovery by introducing an expressive platform called Voxelbuild, where QD algorithms continually evolve robots that stack blocks in new ways. A culminating experiment (4) investigates evolution in Voxelbuild over a very long timescale. This research thus stands to advance the OEE community's desire to create and understand open-ended systems while also laying the groundwork for QD to realize its potential within EC as a means to automatically generate an endless progression of new content in real-world applications. (A minimal MAP-Elites sketch follows this record.)
- Date Issued
- 2019
- Identifier
- CFE0007513, ucf:52638
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0007513
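Below is a minimal Python sketch of MAP-Elites, one of the quality diversity algorithms named in the abstract above: an archive keeps at most one elite per behavior niche, and offspring of random elites replace incumbents only within their own niche. The toy domain, descriptor binning, and parameters are illustrative assumptions, not the dissertation's experimental setup.

import random

GRID = 20          # cells per behavior dimension
ITERATIONS = 50_000

def evaluate(genome):
    """Toy domain: fitness is negative squared distance from the origin;
    the behavior descriptor is the (x, y) position, binned into the grid."""
    x, y = genome
    fitness = -(x * x + y * y)
    cell = (min(int((x + 1) / 2 * GRID), GRID - 1),
            min(int((y + 1) / 2 * GRID), GRID - 1))
    return fitness, cell

def mutate(genome, sigma=0.1):
    return tuple(max(-1.0, min(1.0, g + random.gauss(0, sigma))) for g in genome)

archive = {}  # cell -> (fitness, genome): one elite per behavior niche

# Seed with random genomes, then iterate: pick a random elite, mutate it,
# and keep the offspring only if it wins its own niche.
for _ in range(ITERATIONS):
    if len(archive) < 10:
        genome = (random.uniform(-1, 1), random.uniform(-1, 1))
    else:
        genome = mutate(random.choice(list(archive.values()))[1])
    fitness, cell = evaluate(genome)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, genome)

print(f"filled {len(archive)}/{GRID * GRID} niches; "
      f"best fitness {max(f for f, _ in archive.values()):.4f}")

The key design choice, local rather than global competition, is what lets the archive accumulate many distinct high-performing solutions instead of converging on a single optimum.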
- Title
- Automatically Acquiring a Semantic Network of Related Concepts.
- Creator
-
Szumlanski, Sean, Gomez, Fernando, Wu, Annie, Hughes, Charles, Sims, Valerie, University of Central Florida
- Abstract / Description
-
We describe the automatic acquisition of a semantic network in which over 7,500 of the most frequently occurring nouns in the English language are linked to their semantically related concepts in the WordNet noun ontology. Relatedness between nouns is discovered automatically from lexical co-occurrence in Wikipedia texts using a novel adaptation of an information-theoretically inspired measure. Our algorithm then capitalizes on salient sense clustering among these semantic associates to automatically disambiguate them to their corresponding WordNet noun senses (i.e., concepts). The resultant concept-to-concept associations, stemming from 7,593 target nouns with 17,104 distinct senses among them, constitute a large-scale semantic network with 208,832 undirected edges between related concepts. Our work can thus be conceived of as augmenting the WordNet noun ontology with RelatedTo links. The network, which we refer to as the Szumlanski-Gomez Network (SGN), has been subjected to a variety of evaluative measures, including manual inspection by human judges and quantitative comparison to gold-standard data for semantic relatedness measurements. We have also evaluated the network's performance in an applied setting on a word sense disambiguation (WSD) task in which the network served as a knowledge source for established graph-based spreading activation algorithms, and have shown that: a) the network is competitive with WordNet when used as a stand-alone knowledge source for WSD; b) combining our network with WordNet achieves disambiguation results that exceed the performance of either resource individually; and c) our network outperforms a similar resource, WordNet++ (Ponzetto & Navigli, 2010), that has been automatically derived from annotations in the Wikipedia corpus. Finally, we present a study on human perceptions of relatedness. In our study, we elicited quantitative evaluations of semantic relatedness from human subjects using a variation of the classical methodology that Rubenstein and Goodenough (1965) employed to investigate human perceptions of semantic similarity. Judgments from individual subjects in our study exhibit high average correlation to the elicited relatedness means using leave-one-out sampling (r = 0.77, σ = 0.09, N = 73), although not as high as the average human correlation in previous studies of similarity judgments, for which Resnik (1995) established an upper bound of r = 0.90 (σ = 0.07, N = 10). These results suggest that human perceptions of relatedness are less strictly constrained than evaluations of similarity, and they establish a clearer expectation for what constitutes human-like performance by a computational measure of semantic relatedness. We also contrast the performance of a variety of similarity and relatedness measures on our dataset to their performance on similarity norms, and introduce our own dataset as a supplementary evaluative standard for relatedness measures. (A minimal co-occurrence relatedness sketch follows this record.)
- Date Issued
- 2013
- Identifier
- CFE0004759, ucf:49767
- Format
- Document (PDF)
- PURL
- http://purl.flvc.org/ucf/fd/CFE0004759
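Below is a minimal Python sketch of scoring noun relatedness from lexical co-occurrence. Plain pointwise mutual information (PMI) over a toy corpus stands in for the abstract's novel information-theoretic adaptation, which it does not reproduce; the corpus and document-level probability estimates are assumptions for illustration.

import math
from collections import Counter
from itertools import combinations

# Toy stand-in corpus: each inner list is one "document" of nouns.
corpus = [
    ["doctor", "nurse", "hospital"],
    ["doctor", "hospital", "patient"],
    ["teacher", "school", "student"],
    ["teacher", "student", "classroom"],
]

word_count = Counter()
pair_count = Counter()
for doc in corpus:
    words = set(doc)
    word_count.update(words)
    pair_count.update(frozenset(p) for p in combinations(sorted(words), 2))

n_docs = len(corpus)

def pmi(a, b):
    """PMI = log P(a, b) / (P(a) P(b)), with probabilities estimated
    from document-level co-occurrence counts."""
    p_ab = pair_count[frozenset((a, b))] / n_docs
    if p_ab == 0:
        return float("-inf")
    return math.log(p_ab / ((word_count[a] / n_docs) * (word_count[b] / n_docs)))

print(f"pmi(doctor, hospital) = {pmi('doctor', 'hospital'):.3f}")  # related pair scores high
print(f"pmi(doctor, student)  = {pmi('doctor', 'student'):.3f}")   # unrelated pair scores -inf here

At Wikipedia scale, smoothing and frequency cutoffs become essential, and the resulting associations would still need the sense-disambiguation step the abstract describes before they could be attached to WordNet concepts.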