-
-
Title
-
Experimental study and modeling of mechanical micro-machining of particle reinforced heterogeneous materials.
-
Creator
-
Liu, Jian, Xu, Chengying, An, Linan, Gordon, Ali, Bai, Yuanli, Gong, Xun, University of Central Florida
-
Abstract / Description
-
This study focuses on developing explicit analytical and numerical process models for mechanical micro-machining of heterogeneous materials. These models are used to select suitable process parameters for preparing and micro-machining these advanced materials. The material system studied in this research is Magnesium Metal Matrix Composites (Mg-MMCs) reinforced with nano-sized and micro-sized silicon carbide (SiC) particles.
This research is motivated by increasing demands for miniaturized components with high mechanical performance in various industries. Mg-MMCs are among the best candidates due to their light weight, high strength, and high creep/wear resistance. However, the improved strength and abrasive nature of the reinforcements bring great challenges for the subsequent micro-machining process.
Systematic experimental investigations on the machinability of Mg-MMCs reinforced with SiC nano-particles have been conducted. The nanocomposites containing 5 Vol.%, 10 Vol.% and 15 Vol.% reinforcements, as well as pure magnesium, are studied using the Design of Experiment (DOE) method. Cutting forces, surface morphology and surface roughness are characterized to understand the machinability of the four materials. Based on response surface methodology (RSM) design, experimental models and related contour plots have been developed to build a connection between different material properties and cutting parameters. Those models can be used to predict the cutting force and the surface roughness, and then to optimize the machining process.
An analytical cutting force model has been developed to predict cutting forces of Mg-MMCs reinforced with nano-sized SiC particles in the micro-milling process. This model differs from previous ones by encompassing the behaviors of reinforcement nanoparticles in three cutting scenarios, i.e., shearing, ploughing and elastic recovery. By using the enhanced yield strength in the cutting force model, three major strengthening factors are incorporated: the load-bearing effect, the enhanced dislocation density strengthening effect and the Orowan strengthening effect. In this way, the particle size and volume fraction, as significant factors affecting the cutting forces, are explicitly considered. In order to validate the model, various cutting conditions using different size end mills (100 µm and 1 mm dia.) have been tested on Mg-MMCs with volume fractions from 0 (pure magnesium) to 15 Vol.%. The simulated cutting forces show good agreement with the experimental data. The proposed model can predict the major force amplitude variations and force profile changes as functions of the nanoparticles' volume fraction. Next, a systematic evaluation of six ductile fracture models has been conducted to identify the most suitable fracture criterion for micro-scale cutting simulations. The evaluated fracture models include the constant fracture strain, Johnson-Cook, Johnson-Cook coupling, Wilkins, modified Cockcroft-Latham, and Bao-Wierzbicki fracture criteria. By means of a user material subroutine (VUMAT), these fracture models are implemented into a Finite Element (FE) orthogonal cutting model on the ABAQUS/Explicit platform. The local parameters (stress, strain, fracture factor, velocity fields) and global variables (chip morphology, cutting forces, temperature, shear angle, and machined surface integrity) are evaluated.
Results indicate that by coupling with the damage evolution, the capability of the Johnson-Cook and Bao-Wierzbicki criteria can be further extended to predict accurate chip morphology. The Bao-Wierzbicki-based coupling model provides the best simulation results in this study. The micro-cutting performance of MMC materials has also been studied using the FE modeling method. A 2-D FE micro-cutting model has been constructed. Firstly, homogenized material properties are employed to evaluate the effect of the particles' volume fraction. Secondly, micro-structures of the two-phase material are modeled in FE cutting models. The effects of the existing micro-sized and nano-sized ceramic particles on micro-cutting performance are carefully evaluated in two case studies. Results show that by using homogenized material properties based on the Johnson-Cook plasticity and fracture model with damage evolution, the micro-cutting performance of nano-reinforced Mg-MMCs can be predicted. Crack generation for SiC particle reinforced MMCs is different from that of their homogeneous counterparts, and the effect of micro-sized particles differs from that of nano-sized particles.
In summary, through this research, a better understanding of the unique cutting mechanism for particle reinforced heterogeneous materials has been obtained. The effect of reinforcements on micro-cutting performance is characterized, which will help material engineers tailor suitable material properties for specific mechanical designs, associated manufacturing methods and application needs. Moreover, the proposed analytical and numerical models provide a guideline to optimize process parameters for preparing and micro-machining heterogeneous MMC materials. This will eventually facilitate the automation of the MMC machining process and realize high-efficiency, high-quality, and low-cost manufacturing of composite materials.
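For reference, the Johnson-Cook plasticity and fracture criterion named in this abstract is usually written in the standard textbook form below; the symbols are generic and the calibrated constants for the Mg-MMC system studied here are not reproduced, so this is background context rather than the dissertation's own parameterization.

\sigma = \left(A + B\,\varepsilon^{n}\right)\left(1 + C\,\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)\left[1 - \left(\frac{T - T_{r}}{T_{m} - T_{r}}\right)^{m}\right]

\varepsilon_{f} = \left(D_{1} + D_{2}\,e^{D_{3}\eta}\right)\left(1 + D_{4}\,\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)\left(1 + D_{5}\,\frac{T - T_{r}}{T_{m} - T_{r}}\right), \qquad D = \sum \frac{\Delta\bar{\varepsilon}}{\varepsilon_{f}}

Here \sigma is the flow stress, \varepsilon the equivalent plastic strain, \dot{\varepsilon}/\dot{\varepsilon}_{0} the normalized strain rate, T_{r} and T_{m} the reference and melting temperatures, and \eta the stress triaxiality; damage is assumed to initiate once the accumulated indicator D reaches 1.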
-
Date Issued
-
2012
-
Identifier
-
CFE0004570, ucf:49196
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0004570
-
-
Title
-
CONTEXTUALIZING OBSERVATIONAL DATA FOR MODELING HUMAN PERFORMANCE.
-
Creator
-
Trinh, Viet, Gonzalez, Avelino, University of Central Florida
-
Abstract / Description
-
This research focuses on the ability to contextualize observed human behaviors in an effort to automate the process of tactical human performance modeling through learning from observations. This effort to contextualize human behavior is aimed at minimizing the role and involvement of the knowledge engineers required to build intelligent Context-based Reasoning (CxBR) agents. More specifically, the goal is to automatically discover the context in which a human actor is situated when performing a mission, to facilitate the learning of such CxBR models. This research is derived from the contextualization problem left behind in Fernlund's research on using the Genetic Context Learner (GenCL) to model CxBR agents from observed human performance [Fernlund, 2004]. To accomplish the process of context discovery, this research proposes two contextualization algorithms: Contextualized Fuzzy ART (CFA) and Context Partitioning and Clustering (COPAC). The former is a more naive approach utilizing the well-known Fuzzy ART strategy, while the latter is a robust algorithm developed on the principles of CxBR. Using Fernlund's original five drivers, the CFA and COPAC algorithms were tested and evaluated on their ability to effectively contextualize each driver's individualized set of behaviors into well-formed and meaningful context bases, as well as to generate high-fidelity agents through integration with Fernlund's GenCL algorithm. The resultant set of agents was able to capture and generalize each driver's individualized behaviors.
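As a rough illustration of the clustering machinery that CFA builds on, the following Python sketch implements a generic Fuzzy ART learner. The parameter values are placeholders and the CxBR-specific contextualization logic is deliberately omitted, so this should be read as background on the Fuzzy ART strategy rather than the dissertation's algorithm.

import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART: inputs are feature vectors scaled to [0, 1]."""
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta   # vigilance, choice, learning rate
        self.w = []                                          # one weight vector per category

    def _complement(self, x):
        return np.concatenate([x, 1.0 - x])                  # complement coding

    def train_one(self, x):
        i = self._complement(np.asarray(x, dtype=float))
        if not self.w:                                       # first input founds the first category
            self.w.append(i.copy())
            return 0
        # category choice T_j = |i ^ w_j| / (alpha + |w_j|), with fuzzy AND = element-wise min
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:                   # try best-matching categories first
            if np.minimum(i, self.w[j]).sum() / i.sum() >= self.rho:   # vigilance test
                self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                return int(j)
        self.w.append(i.copy())                              # nothing passed vigilance: new category
        return len(self.w) - 1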
-
Date Issued
-
2009
-
Identifier
-
CFE0002563, ucf:48253
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002563
-
-
Title
-
Data-Driven Simulation Modeling of Construction and Infrastructure Operations Using Process Knowledge Discovery.
-
Creator
-
Akhavian, Reza, Behzadan, Amir, Oloufa, Amr, Yun, Hae-Bum, Sukthankar, Gita, Zheng, Qipeng, University of Central Florida
-
Abstract / Description
-
Within the architecture, engineering, and construction (AEC) domain, simulation modeling is mainly used to facilitate decision-making by enabling the assessment of different operational plans and resource arrangements that are otherwise difficult (if not impossible), expensive, or time-consuming to evaluate in real-world settings. The accuracy of such models directly affects their reliability to serve as a basis for important decisions such as project completion time estimation and resource allocation. Compared to other industries, this is particularly important in construction and infrastructure projects due to the high resource costs and the societal impacts of these projects. Discrete event simulation (DES) is a decision-making tool that can benefit the process of design, control, and management of construction operations. Despite recent advancements, most DES models used in construction are created during the early planning and design stage, when the lack of factual information from the project prohibits the use of realistic data in simulation modeling. The resulting models, therefore, are often built using rigid (subjective) assumptions and design parameters (e.g. precedence logic, activity durations). In all such cases, and in the absence of an inclusive methodology to incorporate real field data as the project evolves, modelers rely on information from previous projects (a.k.a. secondary data), expert judgments, and subjective assumptions to generate simulations to predict future performance. These and similar shortcomings have to a large extent limited the use of traditional DES tools to preliminary studies and long-term planning of construction projects.
In the realm of business process management, process mining, as a relatively new research domain, seeks to automatically discover a process model by observing activity records and extracting information about processes. The research presented in this Ph.D. dissertation was in part inspired by the prospect of construction process mining using sensory data collected from field agents. This enabled the extraction of the operational knowledge necessary to generate and maintain the fidelity of simulation models. A preliminary study was conducted to demonstrate the feasibility and applicability of data-driven knowledge-based simulation modeling, with a focus on data collection using a wireless sensor network (WSN) and a rule-based taxonomy of activities. The resulting knowledge-based simulation models performed very well in properly predicting key performance measures of real construction systems. Next, a pervasive mobile data collection and mining technique was adopted and an activity recognition framework for construction equipment and worker tasks was developed. Data was collected using smartphone accelerometers and gyroscopes from construction entities to generate significant statistical time- and frequency-domain features. The extracted features served as the input of different types of machine learning algorithms that were applied to various construction activities. The trained predictive algorithms were then used to extract activity durations and calculate probability distributions to be fused into corresponding DES models. Results indicated that the generated data-driven knowledge-based simulation models outperform static models created based upon engineering assumptions and estimations with regard to how closely their performance measure outputs match reality.
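The sensor-to-classifier step described above can be sketched as follows; the window length, feature set, and choice of classifier here are illustrative assumptions, not the configuration used in the dissertation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """Reduce one (n_samples, n_axes) accelerometer/gyroscope window to a feature vector."""
    feats = []
    for axis in window.T:
        spectrum = np.abs(np.fft.rfft(axis))
        feats += [axis.mean(), axis.std(), np.ptp(axis),    # time-domain statistics
                  spectrum.argmax(), spectrum.sum()]         # dominant frequency bin, spectral energy
    return np.array(feats)

def train_activity_model(windows, labels):
    """Fit a classifier mapping sensor windows to activity labels (e.g. load, haul, idle)."""
    X = np.array([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return clf.fit(X, labels)

The predicted activity labels would then be segmented into durations, and distributions fitted to those durations would feed the DES model, mirroring the pipeline the abstract describes.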
-
Date Issued
-
2015
-
Identifier
-
CFE0006023, ucf:51014
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006023
-
-
Title
-
Towards Improving Human-Robot Interaction For Social Robots.
-
Creator
-
Khan, Saad, Boloni, Ladislau, Behal, Aman, Sukthankar, Gita, Garibay, Ivan, Fiore, Stephen, University of Central Florida
-
Abstract / Description
-
Autonomous robots interacting with humans in a social setting must consider the social-cultural environment when pursuing their objectives. Thus the social robot must perceive and understand the social-cultural environment in order to be able to explain and predict the actions of its human interaction partners. This dissertation contributes to the emerging field of human-robot interaction for social robots in the following ways:
1. We used the social calculus technique based on culture-sanctioned social metrics (CSSMs) to quantify, analyze and predict the behavior of the robot, the human soldiers and the public perception in the Market Patrol peacekeeping scenario.
2. We validated the results of the Market Patrol scenario by comparing the predicted values with the judgment of a large group of human observers cognizant of the modeled culture.
3. We modeled the movement of a socially aware mobile robot in dense crowds, using the concept of a micro-conflict to represent the challenge of giving or not giving way to pedestrians.
4. We developed an approach for robot behavior in micro-conflicts based on the psychological observation that human opponents will use a consistent strategy. For this, the mobile robot classifies the opponent strategy reflected by the personality and social status of the person and chooses an appropriate counter-strategy that takes into account the urgency of the robot's mission.
5. We developed an alternative approach for the resolution of micro-conflicts based on imitation of the behavior of the human agent. This approach aims to make the behavior of an autonomous robot closely resemble that of a remotely operated one.
-
Date Issued
-
2015
-
Identifier
-
CFE0005965, ucf:50819
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005965
-
-
Title
-
An investigation of physiological measures in a marketing decision task.
-
Creator
-
Lerma, Nelson, Karwowski, Waldemar, Elshennawy, Ahmad, Xanthopoulos, Petros, Reinerman, Lauren, University of Central Florida
-
Abstract / Description
-
The objective of the present study was to understand the use of physiological measures as an alternative to traditional market research tools, such as self-reporting measures and focus groups. For centuries, corporations and researchers have relied almost exclusively on traditional measures to gain insights into consumer behavior. Oftentimes, traditional methods have failed to accurately predict consumer demand, and this has prompted corporations to explore alternative methods that will accurately forecast future sales. One of the most promising alternative methods currently being investigated is the use of physiological measures as an indication of consumer preference. This field, also referred to as neuromarketing, has blended the principles of psychology, neuroscience, and market research to explore consumer behavior from a physiological perspective. The goal of neuromarketing is to capture consumer behavior through the use of physiological sensors. This study investigated the extent to which physiological measures were correlated with consumer preferences by utilizing five physiological sensors: two neurological sensors (EEG and ECG), two hemodynamic sensors (TCD and fNIR), and one optic sensor (eye-tracking). All five physiological sensors were used simultaneously to capture and record physiological changes during four distinct marketing tasks. The results showed that only one physiological sensor, EEG, was indicative of concept type and intent to purchase. The remaining four physiological sensors did not show any significant differences for concept type or intent to purchase.
Furthermore, Machine Learning Algorithms (MLAs) were used to determine the extent to which MLAs (Naïve Bayes, Multilayer Perceptron, K-Nearest Neighbor, and Logistic Regression) could classify physiological responses according to the self-reporting measures obtained during a marketing task. The results demonstrated that the Multilayer Perceptron, on average, performed better than the other MLAs for intent to purchase and concept type. It was also evident that the models fared best with the most popular concept when categorizing the data based on intent to purchase or final selection. Overall, the four models performed well at categorizing the most popular concept and gave some indication of the extent to which physiological measures are capable of capturing intent to purchase. The research study was intended to help better understand the possibilities and limitations of physiological measures in the field of market research. Based on the results obtained, this study demonstrated that certain physiological sensors are capable of capturing emotional changes, but only when the emotional response between two concepts is significantly different. Overall, physiological measures hold great promise for the study of consumer behavior, providing insight into the relationship between emotions and intentions in market research.
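A hedged sketch of the comparison described above is given below: the four named algorithms are scored on the same physiological feature matrix X and self-report labels y. The hyperparameters and cross-validation scheme are placeholders, not the study's actual settings.

from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare_mlas(X, y):
    """Return the mean cross-validated accuracy for each of the four algorithms."""
    models = {
        "Naive Bayes": GaussianNB(),
        "Multilayer Perceptron": MLPClassifier(max_iter=1000, random_state=0),
        "K-Nearest Neighbor": KNeighborsClassifier(n_neighbors=5),
        "Logistic Regression": LogisticRegression(max_iter=1000),
    }
    return {name: cross_val_score(model, X, y, cv=5).mean() for name, model in models.items()}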
-
Date Issued
-
2015
-
Identifier
-
CFE0006345, ucf:51563
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0006345
-
-
Title
-
Mahalanobis kernel-based support vector data description for detection of large shifts in mean vector.
-
Creator
-
Nguyen, Vu, Maboudou, Edgard, Nickerson, David, Schott, James, University of Central Florida
-
Abstract / Description
-
Statistical process control (SPC) applies the science of statistics to process control in order to provide higher-quality products and better services. The K chart is one of the many important tools that SPC offers. Creation of the K chart is based on Support Vector Data Description (SVDD), a popular data classifier method inspired by Support Vector Machine (SVM). Like any method associated with SVM, SVDD benefits from a wide variety of choices of kernel, which determines the effectiveness of the whole model. Among the most popular choices is the Euclidean distance-based Gaussian kernel, which enables SVDD to obtain a flexible data description and thus enhances its overall predictive capability. This thesis explores an even more robust approach by incorporating the Mahalanobis distance-based kernel (hereinafter referred to as the Mahalanobis kernel) into SVDD and comparing it with SVDD using the traditional Gaussian kernel. The method's sensitivity is benchmarked by average run lengths obtained from multiple Monte Carlo simulations. Data for these simulations are generated from multivariate normal, multivariate Student's t, and multivariate gamma populations using R, a popular software environment for statistical computing. One case study is also discussed using a real data set received from the Halberg Chronobiology Center. Compared to the Gaussian kernel, the Mahalanobis kernel makes SVDD, and thus the K chart, significantly more sensitive to shifts in the mean vector and also in the covariance matrix.
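The contrast between the two kernels can be sketched as follows: the Gaussian (RBF) kernel uses the Euclidean distance, while the Mahalanobis kernel replaces it with a covariance-weighted distance so that correlated variables are not double-counted. The bandwidth s and the use of an in-control covariance estimate are assumptions made for illustration, not the thesis's exact formulation.

import numpy as np

def gaussian_kernel(x, y, s=1.0):
    """Euclidean distance-based Gaussian kernel."""
    d2 = np.sum((x - y) ** 2)
    return np.exp(-d2 / (2 * s ** 2))

def mahalanobis_kernel(x, y, cov, s=1.0):
    """Same form, but the distance is weighted by the inverse covariance matrix."""
    diff = x - y
    d2 = diff @ np.linalg.inv(cov) @ diff
    return np.exp(-d2 / (2 * s ** 2))

In either case, SVDD builds its kernel matrix from pairwise evaluations of one of these functions, and the resulting data description defines the control boundary of the K chart.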
-
Date Issued
-
2015
-
Identifier
-
CFE0005676, ucf:50170
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005676
-
-
Title
-
Integrated Data Fusion and Mining (IDFM) Technique for Monitoring Water Quality in Large and Small Lakes.
-
Creator
-
Vannah, Benjamin, Chang, Ni-bin, Wanielista, Martin, Wang, Dingbao, University of Central Florida
-
Abstract / Description
-
Monitoring water quality on a near-real-time basis to address water resources management and public health concerns in coupled natural systems and the built environment is by no means an easy task. Furthermore, this emerging societal challenge will continue to grow, due to the ever-increasing anthropogenic impacts upon surface waters. For example, urban growth and agricultural operations have led to an influx of nutrients into surface waters, stimulating harmful algal bloom formation, and stormwater runoff from urban areas contributes to the accumulation of total organic carbon (TOC) in surface waters. TOC in surface waters is a known precursor of disinfection byproducts in drinking water treatment, and microcystin is a potent hepatotoxin produced by the bacteria Microcystis, which can form expansive algal blooms in eutrophied lakes. Due to the ecological impacts and human health hazards posed by TOC and microcystin, it is imperative that municipal decision makers and water treatment plant operators be equipped with a rapid and economical means to track and measure these substances.
Remote sensing is an emergent solution for monitoring and measuring changes to the earth's environment. This technology allows large regions anywhere on the globe to be observed on a frequent basis. This study demonstrates the prototype of a near-real-time early warning system using Integrated Data Fusion and Mining (IDFM) techniques with the aid of both multispectral (Landsat and MODIS) and hyperspectral (MERIS) satellite sensors to determine spatiotemporal distributions of TOC and microcystin. Landsat imagery has high spatial resolution but suffers from a long overpass interval of 16 days. On the other hand, free coarse-resolution sensors with daily revisit times, such as MODIS, are incapable of providing detailed water quality information because of their low spatial resolution. This issue can be resolved by using data or sensor fusion techniques, an instrumental part of IDFM, in which the high spatial resolution of Landsat and the high temporal resolution of MODIS imagery are fused and analyzed by a suite of regression models to optimally produce synthetic images with both high spatial and temporal resolutions. The same techniques are applied to the hyperspectral sensor MERIS with the aid of the MODIS ocean color bands to generate fused images with enhanced spatial, temporal, and spectral properties. The performance of the data mining models derived using fused hyperspectral and fused multispectral data is quantified using four statistical indices. The second task compared traditional two-band models against more powerful data mining models for TOC and microcystin prediction. The use of IDFM is illustrated for monitoring microcystin concentrations in Lake Erie (large lake), and it is applied for TOC monitoring in Harsha Lake (small lake). Analysis confirmed that data mining methods surpassed two-band models at accurately estimating TOC and microcystin concentrations in lakes, and the more detailed spectral reflectance data offered by hyperspectral sensors produced a noticeable increase in accuracy for the retrieval of water quality parameters.
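A much-simplified sketch of the fusion idea is shown below: on dates when Landsat and MODIS observations coincide, a regression model learns to map coarse MODIS reflectance (resampled to the Landsat grid) to Landsat reflectance, and on MODIS-only dates the model synthesizes a fine-resolution band. The choice of regressor and the per-band, per-pixel arrangement are assumptions for illustration, not the IDFM implementation.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_fusion_model(modis_pixels, landsat_band):
    """modis_pixels: (n_pixels, n_modis_bands) on coincident dates; landsat_band: (n_pixels,)."""
    model = GradientBoostingRegressor(random_state=0)
    return model.fit(modis_pixels, landsat_band)

def synthesize_band(model, modis_pixels_new_date):
    """Predict a synthetic fine-resolution band for a date with only a MODIS overpass."""
    return model.predict(modis_pixels_new_date)

The synthetic imagery would then be passed to the data mining models that retrieve TOC or microcystin concentrations.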
-
Date Issued
-
2013
-
Identifier
-
CFE0005066, ucf:49979
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005066
-
-
Title
-
SketChart: A Pen-Based Tool for Chart Generation and Interaction.
-
Creator
-
Vargas Gonzalez, Andres, Laviola II, Joseph, Foroosh, Hassan, Hua, Kien, University of Central Florida
-
Abstract / Description
-
It has been shown that representing data with the right visualization increases the understanding of the qualitative and quantitative information encoded in documents. However, current tools for generating such visualizations involve the use of traditional WIMP techniques, which perhaps makes free interaction and direct manipulation of the content harder. In this thesis, we present a pen-based prototype for data visualization using 10 different types of bar-based charts. The prototype lets users sketch a chart and interact with the information once the drawing is identified. The prototype's user interface consists of an area to sketch and touch-based elements that will be displayed depending on the context and nature of the outline. Brainstorming and live presentations can benefit from the prototype due to the ability to visualize and manipulate data in real time. We also perform a short, informal user study to measure the effectiveness of the tool in recognizing sketches and user acceptance while interacting with the system. Results show SketChart's strengths, weaknesses, and areas for improvement.
-
Date Issued
-
2014
-
Identifier
-
CFE0005434, ucf:50405
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005434
-
-
Title
-
Quantifying Trust and Reputation for Defense against Adversaries in Multi-Channel Dynamic Spectrum Access Networks.
-
Creator
-
Bhattacharjee, Shameek, Chatterjee, Mainak, Guha, Ratan, Zou, Changchun, Turgut, Damla, Catbas, Necati, University of Central Florida
-
Abstract / Description
-
Dynamic spectrum access enabled by cognitive radio networks is envisioned to drive the next generation of wireless networks, which can increase spectrum utility by opportunistically accessing unused spectrum. Due to the policy constraint that there can be no interference to the primary (licensed) users, secondary cognitive radios have to continuously sense for primary transmissions. Typically, sensing reports from multiple cognitive radios are fused, as stand-alone observations are prone to errors due to wireless channel characteristics. Such dependence on cooperative spectrum sensing is vulnerable to attacks such as Secondary Spectrum Data Falsification (SSDF) attacks, in which multiple malicious or selfish radios falsify the spectrum reports. Hence, there is a need to quantify the trustworthiness of radios that share spectrum sensing reports and to devise malicious node identification and robust fusion schemes that would lead to correct inference about spectrum usage.
In this work, we propose an anomaly monitoring technique that can effectively capture anomalies in the spectrum sensing reports shared by individual cognitive radios during cooperative spectrum sensing in a multi-channel distributed network. Such anomalies are used as evidence to compute the trustworthiness of a radio by its neighbours. The proposed anomaly monitoring technique works for any density of malicious nodes and for any physical environment. We propose an optimistic trust heuristic for a system with a normal risk attitude and show that it can be approximated as a beta distribution. For a more conservative system, we propose a multinomial Dirichlet distribution based conservative trust framework, where Josang's belief model is used to resolve any uncertainty in information that might arise during anomaly monitoring. Using a machine learning approach, we identify malicious nodes with a high degree of certainty regardless of their aggressiveness and the variations introduced by the pathloss environment. We also propose extensions to the anomaly monitoring technique that facilitate learning about the strategies employed by malicious nodes and also utilize the misleading information they provide. We also devise strategies to defend against a collaborative SSDF attack launched by a coalition of selfish nodes. Since defense against such collaborative attacks is difficult with popularly used voting-based inference models or node-centric isolation techniques, we propose a channel-centric Bayesian inference approach that indicates how much the collective decision on a channel's occupancy inference can be trusted. Based on the measured observations over time, we estimate the parameters of the hypotheses of anomalous and non-anomalous events using a multinomial Bayesian based inference. We quantitatively define the trustworthiness of a channel inference as the difference between the posterior beliefs associated with anomalous and non-anomalous events. The posterior beliefs are updated based on a weighted average of the prior information on the belief itself and the recently observed data.
Subsequently, we propose robust fusion models which utilize the trusts of the nodes to improve the accuracy of the cooperative spectrum sensing decisions. In particular, we propose three fusion models: (i) optimistic trust based fusion, (ii) conservative trust based fusion, and (iii) inversion based fusion. The former two approaches exclude untrustworthy sensing reports from fusion, while the last approach utilizes misleading information.
All schemes are analyzed under various attack strategies. We propose an asymmetric weighted moving average based trust management scheme that quickly identifies on-off SSDF attacks and prevents quick trust redemption when such nodes revert to temporarily honest behavior. We also provide insights on which attack strategies are more effective from the adversaries' perspective.
Through extensive simulation experiments we show that the trust models are effective in identifying malicious nodes with a high degree of certainty under a variety of network and radio conditions. We show high true negative detection rates even when multiple malicious nodes launch collaborative attacks, which is an improvement over existing voting-based exclusion and entropy divergence techniques. We also show that we are able to improve the accuracy of fusion decisions compared to other popular fusion techniques. Trust-based fusion schemes show worst-case decision error rates of 5%, while inversion-based fusion shows 4%, as opposed to majority voting schemes that have an 18% error rate. We also show that the proposed channel-centric Bayesian inference based trust model is able to distinguish between attacked and non-attacked channels for both static and dynamic collaborative attacks. We are also able to show that attacked channels have significantly lower trust values than channels that are not, a metric that can be used by nodes to rank the quality of inference on channels.
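The flavor of the beta-distribution trust estimate and the asymmetric update mentioned above can be sketched as follows. The counts r and s, the gain, and the penalty are illustrative assumptions; the dissertation's weighted-moving-average scheme is only approximated here.

def beta_trust(r, s):
    """Trust of a radio from r non-anomalous and s anomalous reports: mean of Beta(r+1, s+1)."""
    return (r + 1.0) / (r + s + 2.0)

def asymmetric_update(trust, anomalous, gain=0.05, penalty=0.30):
    """Penalize anomalies sharply but let trust recover only slowly, hindering on-off attackers."""
    if anomalous:
        return (1 - penalty) * trust
    return trust + gain * (1.0 - trust)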
-
Date Issued
-
2015
-
Identifier
-
CFE0005764, ucf:50081
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005764
-
-
Title
-
MAKING VICTIM: ESTABLISHING A FRAMEWORK FOR ANALYZING VICTIMIZATION IN 20TH CENTURY AMERICAN THEATRE.
-
Creator
-
Hahl, Victoria, Listengarten, Julia, University of Central Florida
-
Abstract / Description
-
It is my belief that theatre is the telling of stories, and that playwriting is the creation of those stories. Regardless of the underlying motives (to make the audience think, to make them feel, to offend them, or to draw them in), the core of the theatre world is the storyline. Some critics write of the importance of audience effect and audience reception; after all, a performance can only be so named if at least one person is there to witness it. So much of audience effect is based on the storyline itself, the structure of which is created by the power characters have over others. Theatre generalists learn of Aristotle's well-made play structure. Playwrights quickly learn to distinguish between protagonists and antagonists. Actors are routinely taught physicalizations of creating "status" onstage. A plotline is driven by the power that people, circumstances, and even fate exercise over protagonists. Most audience members naturally sympathize with the underdog or victim in a given storyline, and so the submissive or oppressed character becomes (largely) the most integral. By what process, then, is this sense of oppression created in a play? How can oppression/victimization be analyzed with regard to character development? With emerging criticism suggesting that the concept of character is dying, what portrayals of victim have we seen in the late 20th century? What framework can we use to fully understand this complex concept? What are we to see in the future, and how will the concept evolve? In my attempt to answer these questions, I first analyze the definition of "victim" and what categories of victimization exist: the victim of a crime, for example, or the victim of psychological oppression. "Victim" is a word with an extraordinarily complex definition, and so for the purposes of this study, I focus entirely on social victimization, that is, oppression or harm inflicted on a character by their peers or society. I focus on three major elements of this sort of victimization: harm inflicted on a character by another (not by their own actions), harm inflicted despite struggle or protest, and a power or authority endowed on the victimizer by the victim. After defining these elements, I analyze the literary methods by which playwrights can represent or create victimization: blurred lines of authority, expressive text, and the creation of emotion through visual and auditory means. Once the concept of victim is defined and a framework established for viewing it in the theatre, I analyze the victimization of one of American theatre's most famous sufferers: Eugene O'Neill's Yank in The Hairy Ape. To best contextualize this character, I explore the theories of theatre in this time period: reflections of social struggles, the concept of hierarchy, and clearly drawn class lines. I also position The Hairy Ape in its immediate historical and theoretical time period, to understand whether O'Neill created a reflection on or of his contemporaries. Finally, I look at the concept of victim through the nonrealistic and nonlinear plays of the 20th century: how it has changed, evolved, or even (as Eleanor Fuchs may suggest) died. I found that my previously established framework for "making victim" has to change dramatically to apply to contemporary nonlinear theatre pieces. Through this study, I have found that the lines of victimization and authority are as blurred today in nonrealistic and nonlinear theatre as they were in the seemingly "black and white" dramas of the 1920s and 30s.
In my research, I have found the very beginnings of an extraordinarily complex definition of "victim".
-
Date Issued
-
2008
-
Identifier
-
CFE0002122, ucf:47534
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0002122
-
-
Title
-
Semiconductor Design and Manufacturing Interplay to Achieve Higher Yields at Reduced Costs using SMART Techniques.
-
Creator
-
Oberai, Ankush Bharati, Yuan, Jiann-Shiun, Abdolvand, Reza, Georgiopoulos, Michael, Sundaram, Kalpathy, Reilly, Charles, University of Central Florida
-
Abstract / Description
-
Since the outset of the IC semiconductor market, there has been a gap between its design and manufacturing communities. This gap continued to grow as device geometries started to shrink and the manufacturing processes and tools got more complex. This gap lowered manufacturing yield, leading to higher IC costs and delays in their time to market. It also impacted the performance of the ICs, affecting the overall functionality of the systems they were integrated in. However, in recent years there have been major efforts to bridge the gap between design and manufacturing using software solutions, by providing closer collaboration techniques between the design and manufacturing communities. The root cause of this gap lies in the difference in the knowledge and skills required by the two communities. The IC design community is driven more by microelectronics, electrical engineering and software, whereas the IC manufacturing community is driven more by material science, mechanical engineering, physics and robotics. Cross-training between the two is almost nonexistent and not even mandated. This gap is expected to widen, with demand for more complex designs and the miniaturization of electronic products. The growing need for MEMS, 3-D NAND and IoT devices is another driver that could widen the gap between design and manufacturing. To bridge this gap, it is critical to have closed-loop solutions between design and manufacturing. This could be achieved by SMART automation on both sides, using Artificial Intelligence, Machine Learning and Big Data algorithms. The lack of automation and predictive capabilities has made the situation even worse in terms of yield and total turnaround time. With the growing fabless and foundry business model, bridging the gap has become even more critical. A Smart Manufacturing philosophy must be adopted to make this bridge possible. We need to understand the fab-fabless collaboration requirements and the mechanism for bringing design to the manufacturing floor for yield improvement. Additionally, the design community must be educated with manufacturing process and tool knowledge, so they can design for improved manufacturability. This study requires an understanding of the elements impacting manufacturing on both ends of the design and manufacturing process. Additionally, we need to understand the process rules that must be followed closely in the design phase. The SMART automation techniques best suited to bridging the gap need to be studied and analyzed for their effectiveness.
-
Date Issued
-
2018
-
Identifier
-
CFE0007351, ucf:52096
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007351
-
-
Title
-
Automatic Detection of Brain Functional Disorder Using Imaging Data.
-
Creator
-
Dey, Soumyabrata, Shah, Mubarak, Jha, Sumit, Hu, Haiyan, Weeks, Arthur, Rao, Ravishankar, University of Central Florida
-
Abstract / Description
-
Recently, Attention Deficit Hyperactivity Disorder (ADHD) has been getting a lot of attention, mainly for two reasons. First, it is one of the most commonly found childhood behavioral disorders: around 5-10% of children all over the world are diagnosed with ADHD. Second, the root cause of the problem is still unknown and therefore no biological measure exists to diagnose ADHD. Instead, doctors need to diagnose it based on clinical symptoms, such as inattention, impulsivity and hyperactivity, which are all subjective.
Functional Magnetic Resonance Imaging (fMRI) data has become a popular tool for understanding the functioning of the brain, such as identifying the brain regions responsible for different cognitive tasks or analyzing the statistical differences in brain functioning between diseased and control subjects. ADHD is also being studied using fMRI data. In this dissertation we aim to solve the problem of automatic diagnosis of ADHD subjects using their resting state fMRI (rs-fMRI) data.
As a core step of our approach, we model the functions of a brain as a connectivity network, which is expected to capture information about how synchronous different brain regions are in terms of their functional activities. The network is constructed by representing different brain regions as nodes, where any two nodes of the network are connected by an edge if the correlation of the activity patterns of the two nodes is higher than some threshold. The brain regions, represented as the nodes of the network, can be selected at different granularities, e.g. single voxels or clusters of functionally homogeneous voxels. The topological differences between the constructed networks of the ADHD and control groups of subjects are then exploited in the classification approach.
We have developed a simple method employing the Bag-of-Words (BoW) framework for the classification of ADHD subjects. We represent each node in the network by a 4-D feature vector: node degree and 3-D location. The 4-D vectors of all the network nodes of the training data are then grouped into a number of clusters using K-means, where each such cluster is termed a word. Finally, each subject is represented by a histogram (bag) of such words. The Support Vector Machine (SVM) classifier is used for the detection of ADHD subjects using their histogram representation. The method is able to achieve 64% classification accuracy.
The above simple approach has several shortcomings. First, there is a loss of spatial information while constructing the histogram, because it only counts the occurrences of words, ignoring their spatial positions. Second, features from the whole brain are used for classification, but some of the brain regions may not contain any useful information and may only increase the feature dimensions and noise of the system. Third, in our study we used only one network feature, the degree of a node, which measures the connectivity of the node, while other complex network features may be useful for solving the proposed problem.
In order to address the above shortcomings, we hypothesize that only a subset of the nodes of the network possesses important information for the classification of ADHD subjects. To identify the important nodes of the network we have developed a novel algorithm. The algorithm generates different random subsets of nodes, each time extracting the features from a subset to compute the feature vector and perform classification.
The subsets are then ranked based on the classification accuracy, and the occurrences of each node in the top-ranked subsets are measured. Our algorithm selects the highly occurring nodes for the final classification. Furthermore, along with the node degree, we employ three more node features: network cycles, the varying distance degree and the edge weight sum. We concatenate the features of the selected nodes in a fixed order to preserve the relative spatial information. Experimental validation suggests that the use of features from the nodes selected using our algorithm indeed helps to improve the classification accuracy. Also, our finding is in concordance with the existing literature, as the brain regions identified by our algorithms are independently found by many other studies on ADHD. We achieved a classification accuracy of 69.59% using this approach. However, this method represents each voxel as a node of the network, which makes the number of nodes several thousand. As a result, the network construction step becomes computationally very expensive. Another limitation of the approach is that the network features, which are computed for each node of the network, capture only the local structures while ignoring the global structure of the network.
Next, in order to capture the global structure of the networks, we use the Multi-Dimensional Scaling (MDS) technique to project all the subjects from an unknown network-space to a low-dimensional space based on their inter-network distance measures. For the purpose of computing the distance between two networks, we represent each node by a set of attributes such as the node degree, the average power, the physical location, the neighbor node degrees, and the average powers of the neighbor nodes. The nodes of the two networks are then mapped in such a way that, over all pairs of nodes, the sum of the attribute distances, which is the inter-network distance, is minimized. To reduce the network computation cost, we enforce that the maximum relevant information is preserved with minimum redundancy. To achieve this, the nodes of the network are constructed from clusters of highly active voxels, while the activity levels of the voxels are measured based on the average power of their corresponding fMRI time-series. Our method shows promise, as we achieve impressive classification accuracies (73.55%) on the ADHD-200 data set. Our results also reveal that the detection rates are higher when classification is performed separately on the male and female groups of subjects.
So far, we have only used the fMRI data for solving the ADHD diagnosis problem. Finally, we investigated the following questions. Do the structural brain images contain useful information related to the ADHD diagnosis problem? Can the classification accuracy of the automatic diagnosis system be improved by combining the information of the structural and functional brain data? Towards that end, we developed a new method to combine the information of structural and functional brain images in a late fusion framework. For structural data, we input the gray matter (GM) brain images to a Convolutional Neural Network (CNN). The output of the CNN is a feature vector per subject, which is used to train the SVM classifier. For the functional data, we compute the average power of each voxel based on its fMRI time series. The average power of the fMRI time series of a voxel measures the activity level of the voxel.
We found significant differences in the voxel power distribution patterns of the ADHD and control groups of subjects. The local binary pattern (LBP) texture feature is used on the voxel power map to capture these differences. We achieved 74.23% accuracy using GM features, 77.30% using LBP features and 79.14% using the combined information.
In summary, this dissertation demonstrated that structural and functional brain imaging data are useful for the automatic detection of ADHD subjects, as we achieve impressive classification accuracies on the ADHD-200 data set. Our study also helps to identify the brain regions which are useful for ADHD subject classification. These findings can help in understanding the pathophysiology of the problem. Finally, we expect that our approaches will contribute towards the development of a biological measure for the diagnosis of ADHD subjects.
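The Bag-of-Words pipeline from the first study can be sketched as below: each node is a 4-D vector (degree plus 3-D location), K-means over all training nodes defines the words, each subject becomes a normalized word histogram, and an SVM separates ADHD subjects from controls. The number of words and the SVM kernel are placeholders rather than the values used in the dissertation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_codebook(all_training_nodes, n_words=50):
    """all_training_nodes: (total_nodes, 4) array stacked over every training subject."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_training_nodes)

def subject_histogram(codebook, subject_nodes):
    """Map one subject's (n_nodes, 4) node features to a normalized bag-of-words histogram."""
    words = codebook.predict(subject_nodes)
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / hist.sum()

def train_classifier(codebook, subjects, labels):
    """subjects: list of per-subject node arrays; labels: ADHD vs. control."""
    X = np.array([subject_histogram(codebook, s) for s in subjects])
    return SVC(kernel="rbf").fit(X, labels)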
-
Date Issued
-
2014
-
Identifier
-
CFE0005786, ucf:50060
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005786
-
-
Title
-
Recycled Modernity: Google, Immigration History, and the Limits for H-1B.
-
Creator
-
Patten, Neil, Dombrowski, Paul, Mauer, Barry, Grajeda, Anthony, Dziuban, Charles, University of Central Florida
-
Abstract / Description
-
Regulation of admission to the United States for technology workers from foreign countries has been a difficult issue, especially during periods of intense development. Following the dot-com bubble, the Google Corporation continued to argue in favor of higher limits under the Immigration and Nationality Act exception referred to as "H-1B" for the section of the law where it appears. H-1B authorized temporary admission for highly skilled labor in specialty occupations. Congressional testimony by Laszlo Bock, Google Vice President for People Operations, provided the most succinct statement of Google's concerns, based on maintaining a competitive and diverse workforce. Diversity has been a rhetorical priority for Google, yet diversity did not affect the argument in a substantial and realistic way. Likewise, the emphasis on geographically situated competitive capability suggests a limited commitment to the global communities invoked by information technology. The history of American industry produced corporations determined to control and exploit every detail of their affairs. In the process, industrial corporations used immigration as a labor resource. Google portrayed itself, and Google has been portrayed by media from the outside, as representative of a new information technology culture: an information community of diverse, inclusive, and democratically transparent technology, in the sense of universal availability and benefit, with a deliberate concern for avoiding evil. However, Google's emphasis on American supremacy, combined with a kind of half-hearted rhetorical advocacy for principles of diversity, suggests an inconsistent approach to the argument about H-1B. The Google argument for manageable resources connected to corporate priorities of Industrial Modernity, a habit of control, more than to democratic communities of technology. In this outcome, there are concerns for information technology and the Industry of Knowledge Work. By considering the treatment of immigration as a sign of management attitude, I look at questions posed by Jean Baudrillard, Daniel Headrick, Alan Liu, and others about whether information technology, as an industry and as communities of common interests, has achieved any democratically universal "ethical progress" beyond the preceding system of industrial commerce that demands the absolute power to exploit resources, including human resources. Does Google's performance confirm the skeptical questions, or did Google actually achieve something more socially responsible? In the rhetoric of immigration history and the rhetoric of Google as technology, this study finds connections to a recycled corporate-management version of Industrial Modernity that constrains the diffusion of technology.
-
Date Issued
-
2014
-
Identifier
-
CFE0005685, ucf:50135
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0005685
-
-
Title
-
THE KIOSK CULTURE: RECONCILING THE PERFORMANCE SUPPORT PARADOX IN THE POSTMODERN AGE OF MACHINES.
-
Creator
-
Cavanagh, Thomas, Kitalong, Karla, University of Central Florida
-
Abstract / Description
-
Do you remember the first time you used an Automatic Teller Machine (ATM)? Or a pay-at-the-pump gas station? Or an airline e-ticket kiosk? How did you know what to do? Although you never received any formal instruction in how to interact with the self-service technology, you were likely able to accomplish your task (e.g., withdrawing or depositing money) as successfully as an experienced user. However, not so long ago, to accomplish that same task, you needed the direct mediation of a service professional who had been trained how to use the required complex technology. What has changed? In short, the technology is now able to compensate for the average consumer's lack of experience with the transactional system. The technology itself bridges the performance gap, allowing a novice to accomplish the same task as an experienced professional. This shift to a self-service paradigm is completely changing the dynamics of the consumer relationship with the capitalist enterprise, resulting in what is rapidly becoming the default consumer interface of the postmodern era. The recognition that the entire performance support apparatus now revolves around the end user/consumer rather than the employee represents a tectonic shift in the workforce training industry. What emerges is a homogenized consumer culture enabled by self-service technologies--a kiosk culture. No longer is the ability to interact with complex technology confined to a privileged workforce minority who has access to expensive and time-consuming training. The growth of the kiosk culture is being driven equally by business financial pressures, consumer demand for more efficient transactions, and the improved sophistication of compensatory technology that allows a novice to perform a task with the same competence as an expert. "The Kiosk Culture" examines all aspects of self-service technology and its ascendancy. Beyond the milieu of business, the kiosk culture is also infiltrating all corners of society, including medicine, athletics, and the arts, forcing us to re-examine our definitions of knowledge, skills, performance, and even humanity. The current ubiquity of self-service technology has already impacted our society and will continue to do so as we ride the rising tide of the kiosk culture.
-
Date Issued
-
2006
-
Identifier
-
CFE0001348, ucf:46989
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0001348
-
-
Title
-
Sampling and Subspace Methods for Learning Sparse Group Structures in Computer Vision.
-
Creator
-
Jaberi, Maryam, Foroosh, Hassan, Pensky, Marianna, Gong, Boqing, Qi, GuoJun, University of Central Florida
-
Abstract / Description
-
The unprecedented growth of data in volume and dimension has led to an increased number of computationally demanding and data-driven decision-making methods in many disciplines, such as computer vision, genomics, finance, etc. Research on big data aims to understand and describe trends in massive volumes of high-dimensional data. High volume and dimension are the determining factors in both the computational and time complexity of algorithms. The challenge grows when the data are formed of the union of group-structures of different dimensions embedded in a high-dimensional ambient space.
To address the problem of high volume, we propose a sampling method referred to as the Sparse Withdrawal of Inliers in a First Trial (SWIFT), which determines the smallest sample size in one grab so that all group-structures are adequately represented and discovered with high probability. The key features of SWIFT are: (i) sparsity, which is independent of the population size; (ii) no prior knowledge of the distribution of the data or the number of underlying group-structures; and (iii) robustness in the presence of an overwhelming number of outliers. We report a comprehensive study of the proposed sampling method in terms of accuracy, functionality, and effectiveness in reducing the computational cost in various applications of computer vision. In the second part of this dissertation, we study dimensionality reduction for multi-structural data. We propose a probabilistic subspace clustering method that unifies soft- and hard-clustering in a single framework. This is achieved by introducing a delayed association of uncertain points to subspaces of lower dimensions based on a confidence measure. Delayed association yields higher accuracy in clustering subspaces that have ambiguities, i.e. due to intersections and high levels of outliers/noise, and hence leads to more accurate self-representation of the underlying subspaces. Altogether, this dissertation addresses the key theoretical and practical issues of size and dimension in big data analysis.
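For intuition only, the kind of one-grab sample-size question SWIFT answers can be posed with a simple binomial calculation: how large must a single sample be so that a structure occupying a fraction eps of the data contributes at least m points with probability at least 1 - delta? This sketch is a deliberate simplification and is not the SWIFT derivation; eps, m, and delta are assumed inputs.

from scipy.stats import binom

def min_sample_size(eps, m, delta=0.01, n_max=100000):
    """Smallest n with P(fewer than m hits among n draws) <= delta, hits occurring w.p. eps."""
    for n in range(m, n_max):
        if binom.cdf(m - 1, n, eps) <= delta:
            return n
    raise ValueError("no n <= n_max satisfies the requirement")

# Example: eps = 0.05, m = 10, delta = 0.01 gives a sample size of a few hundred,
# independent of how large the full data set is.

The independence from the population size echoes the sparsity property (i) claimed for SWIFT, although the actual bound in the dissertation accounts for multiple structures and outliers simultaneously.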
-
Date Issued
-
2018
-
Identifier
-
CFE0007017, ucf:52039
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007017
-
-
Title
-
OPTIMAL DETOUR PLANNING AROUND BLOCKED CONSTRUCTION ZONES.
-
Creator
-
Jardaneh, Mutasem, Khalafallah, Ahmed, University of Central Florida
-
Abstract / Description
-
Construction zones are trafficway areas where construction, maintenance or utility work is identified by warning signs, signals and indicators, including those on transport devices, that mark the beginning and end of the construction zone. Construction zones are among the most dangerous work areas, with workers facing workplace safety challenges that often lead to catastrophic injuries or fatalities. In addition, daily commuters are also impacted by construction zone detours that affect their safety and daily commute time. These problems represent major challenges to construction planners, as they are required to plan vehicle routes around construction zones in a way that maximizes the safety of construction workers and reduces the impact on daily commuters. This research aims at developing a framework for optimizing the planning of construction detours. The main objectives of the research are, first, to identify all the decision variables that affect the planning of construction detours and, second, to implement a model based on a shortest-path formulation to identify the optimal alternatives for construction detours. The ultimate goal of this research is to provide construction planners with the essential guidelines to improve construction safety and reduce construction zone hazards, as well as a robust tool for selecting and optimizing construction zone detours.
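The shortest-path formulation referred to above can be illustrated with a standard Dijkstra search over a road graph from which the blocked construction-zone segments have been removed. The graph encoding and the use of travel time (or a safety-weighted cost) as the edge weight are assumptions for illustration, not the model developed in the thesis.

import heapq

def optimal_detour(graph, blocked_edges, origin, destination):
    """graph: {node: [(neighbor, cost), ...]}; blocked_edges: set of (u, v) road segments."""
    dist, prev = {origin: 0.0}, {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == destination:
            break
        if d > dist.get(u, float("inf")):
            continue                                   # stale queue entry
        for v, w in graph.get(u, []):
            if (u, v) in blocked_edges or (v, u) in blocked_edges:
                continue                               # skip segments closed by the work zone
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if destination not in dist:
        return None                                    # no feasible detour around the closure
    path, node = [destination], destination
    while node != origin:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[destination]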
-
Date Issued
-
2011
-
Identifier
-
CFE0003586, ucf:48900
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0003586