Add topic "Adversarial machine learning" (Accepted)
Changes: 6
Add Explaining and Harnessing Adversarial Examples
- Title (unchanged): Explaining and Harnessing Adversarial Examples
- Type (unchanged): Paper
- Created (unchanged): 2015-03-20
- Description (unchanged): Several machine learning models, including neural networks, consistently misclassify adversarial examples: inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input causes the model to output an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results and gives the first account of the most intriguing property of adversarial examples: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
- Link (unchanged): http://arxiv.org/abs/1412.6572
- Identifier (unchanged): no value
Resource | v1 | current (v1)
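The "simple and fast method" the abstract refers to is the fast gradient sign method (FGSM): perturb the input by epsilon in the direction of the sign of the loss gradient. A minimal PyTorch sketch, assuming a differentiable classifier `model`, an input batch `x`, and integer labels `y`; the epsilon value is an illustrative choice, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.25):
    """Fast gradient sign method: one max-norm-bounded step in the
    direction that increases the loss, exploiting the model's locally
    linear behavior."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # sign() gives the worst-case perturbation under an L-infinity bound.
    return (x + epsilon * x.grad.sign()).detach()
```

Usage is a single call, e.g. `x_adv = fgsm(model, x, y)`; the resulting batch can be fed back into training to implement the adversarial training the abstract describes.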
Add Breaking Linear Classifiers on ImageNet
- Title (unchanged): Breaking Linear Classifiers on ImageNet
- Type (unchanged): Blog post
- Created (unchanged): 2015-03-30
- Description (unchanged): You’ve probably heard that Convolutional Networks work very well in practice and across a wide range of visual recognition problems. You may have also read articles and papers that claim to reach near “human-level performance”. Yet a second group of seemingly baffling results has emerged that brings up an apparent contradiction: several people have noticed that it is possible to take an image that a state-of-the-art Convolutional Network classifies as one class (e.g. “panda”) and change it almost imperceptibly to the human eye in such a way that the network suddenly classifies the image as any other class of choice.
- Link (unchanged): http://karpathy.github.io/2015/03/30/breaking-convnets/
- Identifier (unchanged): no value
Resource | v1 | current (v1)
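For a linear classifier with scores s = Wx, the attack the post's title refers to reduces to nudging the image along the weight row of the desired class, since that row is exactly the gradient of the target score with respect to the input. A hedged NumPy sketch of the idea; the names `x`, `W`, `target`, and the epsilon value are illustrative, not from the post:

```python
import numpy as np

def fool_linear(x, W, target, epsilon=0.01):
    """Push a flattened image x toward class `target` of a linear
    classifier with weights W of shape (num_classes, num_pixels).
    Each pixel moves by at most epsilon, yet the target class score
    rises by epsilon * ||W[target]||_1, which is large in high
    dimensions."""
    return x + epsilon * np.sign(W[target])
```

The same reasoning explains why the perturbation is imperceptible: the per-pixel change is tiny, but it accumulates across hundreds of thousands of pixels.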
Add Adversarial machine learning
- Title (unchanged): Adversarial machine learning
- Description (unchanged): Adversarial machine learning studies attacks that attempt to fool models by supplying deceptive input, most commonly to cause a malfunction in a machine learning model. Most machine learning techniques were designed for settings in which the training and test data are drawn from the same statistical distribution (the IID assumption). When such models are deployed in the real world, adversaries may supply data that violates this assumption, arranged to exploit specific vulnerabilities and compromise the model's results.
- Link (unchanged): https://en.wikipedia.org/?curid=45049676
Topic | v1 | current (v1)
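A minimal end-to-end demonstration of the violated IID assumption, using scikit-learn; the dataset, epsilon, and the choice to shift each test image away from its true class along that class's weight row are all illustrative assumptions, not details from the source:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train a plain classifier under the usual IID assumption.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:", clf.score(X_te, y_te))

# An adversary breaks the IID assumption: shift each test image away
# from its true class along that class's weight row, clipped back to
# the valid pixel range.
eps = 0.3
W = clf.coef_  # shape (num_classes, num_pixels)
X_adv = np.clip(X_te - eps * np.sign(W[y_te]), 0.0, 1.0)
print("adversarial accuracy:", clf.score(X_adv, y_te))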
Add Adversarial machine learning discussed in Explaining and Harnessing Adversarial Examples
- Current: discussed in
Topic to resource relation | v1

Add Adversarial machine learning discussed in Breaking Linear Classifiers on ImageNet
- Current: discussed in
Topic to resource relation | v1

Add Deep learning cons given in Adversarial machine learning
- Current: cons given in
Topic to topic relation | v1