Breaking Linear Classifiers on ImageNet
Resource history | v1 (current) | created by janarez
Details
see v1 | created by janarez | Add topic "Adversarial machine learning"
- Title: Breaking Linear Classifiers on ImageNet
- Type: BlogPost
- Created: 2015-03-30
- Description: You’ve probably heard that Convolutional Networks work very well in practice and across a wide range of visual recognition problems. You may have also read articles and papers that claim to reach near “human-level performance”. Yet a second group of seemingly baffling results has emerged that brings up an apparent contradiction. I’m referring to several people who have noticed that it is possible to take an image that a state-of-the-art Convolutional Network thinks is one class (e.g. “panda”) and change it almost imperceptibly to the human eye in such a way that the Convolutional Network suddenly classifies the image as any other class of choice.
- Link: http://karpathy.github.io/2015/03/30/breaking-convnets/
- Identifier: no value
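The targeted misclassification described above is especially direct for a linear classifier, the setting the linked post focuses on: since each class score is just a dot product with a weight vector, nudging every pixel by a tiny amount in the sign of the weight difference between a chosen target class and the original class provably raises the target's score relative to the original's. A minimal sketch with random toy weights (all sizes, values, and the epsilon below are hypothetical, not taken from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: 10 classes over a 3072-dim flattened "image"
# (hypothetical sizes; real ImageNet models are far larger).
num_classes, dim = 10, 3072
W = rng.normal(scale=0.01, size=(num_classes, dim))
x = rng.uniform(0.0, 1.0, size=dim)  # stand-in for pixel values in [0, 1]

orig = int(np.argmax(W @ x))          # class the model currently predicts
target = (orig + 1) % num_classes     # "any other class of choice"

# For a linear model, the gradient of (target score - original score)
# w.r.t. the image is simply W[target] - W[orig]; stepping each pixel
# by eps in the sign of that gradient adds eps * ||W[target]-W[orig]||_1
# to the target-vs-original margin while changing no pixel by more
# than eps (visually almost imperceptible for small eps).
eps = 0.1
x_adv = x + eps * np.sign(W[target] - W[orig])

print("original class:", orig, "prediction after attack:", int(np.argmax(W @ x_adv)))
```

The same sign-of-gradient step applied through a deep network's gradient is the fast gradient sign method; for the linear case the gradient is exact, which is why linear classifiers are so easy to break.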
authors
This resource has no history of related authors.
topics
discusses Adversarial machine learning
resources
This resource has no history of related resources.