Breaking Linear Classifiers on ImageNet


Resource | v1 | created by janarez
Type Blog post
Created 2015-03-30
Identifier unavailable

Description

You’ve probably heard that Convolutional Networks work very well in practice and across a wide range of visual recognition problems. You may have also read articles and papers that claim to reach near “human-level performance”. Yet a second group of seemingly baffling results has emerged that brings up an apparent contradiction: several people have noticed that it is possible to take an image that a state-of-the-art Convolutional Network classifies as one class (e.g. “panda”) and change it almost imperceptibly to the human eye in such a way that the network suddenly classifies the image as any other class of choice.
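
The mechanism is easiest to see for a linear classifier, which is the post's focus. Below is a minimal NumPy sketch of the idea; the toy weights, image, and step size are illustrative assumptions, not taken from the post. For class scores W @ x, nudging each pixel by a small epsilon in the direction sign(w_target - w_original) raises the target score as much as possible per unit of pixel change, so a visually tiny perturbation can flip the prediction.

    import numpy as np

    # Toy linear classifier: one weight vector per class, class scores = W @ x.
    # All shapes and values here are illustrative assumptions, not from the post.
    rng = np.random.default_rng(0)
    num_classes, num_pixels = 10, 32 * 32 * 3
    W = rng.normal(scale=0.01, size=(num_classes, num_pixels))
    x = rng.uniform(0.0, 1.0, size=num_pixels)           # flattened "clean" image

    original_class = int(np.argmax(W @ x))
    target_class = (original_class + 1) % num_classes    # any other class of choice

    # Step every pixel a small amount in the direction that increases the target
    # score relative to the original score; for a linear model that direction is
    # simply the sign of the weight difference (a fast-gradient-sign-style step).
    epsilon = 0.05                                       # max change per pixel
    direction = np.sign(W[target_class] - W[original_class])
    x_adv = np.clip(x + epsilon * direction, 0.0, 1.0)

    print("predicted before:", original_class)
    print("predicted after: ", int(np.argmax(W @ x_adv)))
    print("largest pixel change:", float(np.abs(x_adv - x).max()))  # <= epsilon

On this random toy model the step usually flips the prediction; the post's point is that on real trained classifiers an even smaller, visually imperceptible version of the same step suffices.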

Relations

discusses Adversarial machine learning

Adversarial machine learning is a machine learning technique that attempts to fool models by supplying deceptive input.


Rating: 9.0 / 10 (from 1 review)
Resource level: 2.0 / 10
Resource clarity: 10.0 / 10
Reviewer's background: 7.0 / 10
Comments (1)

janarez (rating 9, level 2, clarity 10, user's background 7):

Clear and enjoyable introduction.