Breaking Linear Classifiers on ImageNet
You’ve probably heard that Convolutional Networks work very well in practice and across a wide range of visual recognition problems. You may have also read articles and papers that claim to reach near “human-level performance”. Yet a second group of seemingly baffling results has emerged that presents an apparent contradiction: several people have noticed that you can take an image that a state-of-the-art Convolutional Network classifies as one class (e.g. “panda”) and change it almost imperceptibly to the human eye in such a way that the network suddenly classifies the image as any other class of your choice.
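To make this concrete for the linear-classifier case, here is a minimal numpy sketch of such a targeted perturbation (the weights `W`, biases `b`, and step size `eps` below are made up for illustration, and the fast-gradient-sign-style step is one common construction, not necessarily the exact procedure used here). For a linear model `scores = W @ x + b`, the gradient of the target-minus-original score with respect to the input is just the difference of two weight rows, so nudging every pixel by ±eps in that direction raises the target score while changing the image very little:

```python
import numpy as np

def fool_linear_classifier(x, W, b, target, eps=0.05):
    """Perturb image x so a linear classifier scores = W @ x + b
    leans toward class `target`, while keeping the change small.

    x      : flattened image in [0, 1], shape (D,)
    W, b   : classifier weights (K, D) and biases (K,)
    target : index of the desired (wrong) class
    eps    : per-pixel perturbation magnitude
    """
    orig = np.argmax(W @ x + b)  # class the model currently predicts
    # For a linear model, the gradient of (target score - original score)
    # w.r.t. the input is simply the difference of the two weight rows.
    direction = W[target] - W[orig]
    # Fast-gradient-sign-style step: move each pixel by +/- eps.
    x_adv = x + eps * np.sign(direction)
    return np.clip(x_adv, 0.0, 1.0)  # keep valid pixel range

# Usage sketch with random, purely illustrative weights:
D, K = 32 * 32 * 3, 10
rng = np.random.default_rng(0)
W, b = rng.normal(size=(K, D)), np.zeros(K)
x = rng.uniform(0.0, 1.0, size=D)
x_adv = fool_linear_classifier(x, W, b, target=3)
print(np.argmax(W @ x + b), "->", np.argmax(W @ x_adv + b))
```

With a sufficiently large `eps` the printed prediction flips to the target class, even though each pixel moved by at most `eps`; the surprising part, which the results above point to, is how small that budget can be for real trained models.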