Adversarial Attacks On Vision Algorithms Using Deep Learning Features
Title: Adversarial Attacks On Vision Algorithms Using Deep Learning Features
Name(s):
- Michel, Andy, Author
- Jha, Sumit Kumar, Committee Chair
- Leavens, Gary, Committee Member
- Valliyil Thankachan, Sharma, Committee Member
- University of Central Florida, Degree Grantor
Type of Resource: text
Date Issued: 2017
Publisher: University of Central Florida
Language(s): English
Abstract/Description: Computer vision algorithms, such as those implementing object detection, are known to be susceptible to adversarial attacks. Small, barely perceptible perturbations to the input can cause vision algorithms to incorrectly classify inputs that they would have otherwise classified correctly. A number of approaches have recently been investigated to generate such adversarial examples for deep neural networks. Many of these approaches either require grey-box access to the deep neural net being attacked or rely on adversarial transfer and grey-box access to a surrogate neural network. In this thesis, we present an approach to the synthesis of adversarial examples for computer vision algorithms that only requires black-box access to the algorithm being attacked. Our attack approach employs fuzzing with features derived from the layers of a convolutional neural network trained on adversarial examples from an unrelated dataset. Based on our experimental results, we believe that our validation approach will enable designers of cyber-physical systems and other high-assurance use cases of vision algorithms to stress test their implementations.
Identifier: CFE0006898 (IID), ucf:51714 (fedora)
Note(s):
- 2017-12-01
- M.S.
- Engineering and Computer Science, Computer Science
- Masters
- This record was generated from author-submitted information.
Subject(s): Deep Learning -- Computer Vision -- Adversarial Attack
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0006898
Restrictions on Access: public; 2017-12-15
Host Institution: UCF
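The abstract above outlines a black-box attack that fuzzes inputs and ranks candidate perturbations using features taken from the layers of a separately trained convolutional neural network. Below is a minimal illustrative sketch of that idea, not the thesis's actual implementation: the names (`fuzz_attack`, `target_predict`, `feature_fn`), the parameter values, and the feature-distance scoring rule are all hypothetical assumptions about how such features might guide the search.

```python
# Hypothetical sketch of black-box adversarial fuzzing guided by CNN-layer
# features. Every name and constant here is an illustrative placeholder,
# not the implementation described in the thesis.
import numpy as np

def fuzz_attack(x, label, target_predict, feature_fn,
                epsilon=0.05, n_candidates=64, n_rounds=100, rng=None):
    """Search for a small perturbation that flips the black-box prediction.

    target_predict(x) -> predicted label (black-box access only).
    feature_fn(x)     -> feature vector from a separately trained CNN,
                         used only to rank candidate perturbations.
    """
    rng = np.random.default_rng(rng)
    base_feat = feature_fn(x)
    best = x
    for _ in range(n_rounds):
        # Propose random perturbations within an L-infinity ball of radius epsilon.
        noise = rng.uniform(-epsilon, epsilon, size=(n_candidates,) + x.shape)
        candidates = np.clip(best + noise, 0.0, 1.0)
        # Rank candidates by how far they move in the CNN's feature space,
        # assuming larger feature shifts are more likely to cross the
        # target's decision boundary.
        scores = [np.linalg.norm(feature_fn(c) - base_feat) for c in candidates]
        order = np.argsort(scores)[::-1]
        for i in order:
            if target_predict(candidates[i]) != label:
                return candidates[i]      # adversarial example found
        best = candidates[order[0]]       # keep the most promising seed
    return None                           # no adversarial example found
```

The ranking step is where the CNN features enter: candidates that move the input farthest in feature space are queried against the black-box target first, which in principle reduces the number of queries needed before a misclassification is found.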