Flaw Found to Attack Image Recognition Systems
Psychedelic stickers fool image recognition software into seeing objects that are not there.
11:29 07 January 2018
A team of Google researchers has found a flaw that can be used to attack image recognition systems.
Using a toaster as an example, the team created psychedelic stickers from colourful computer-generated patterns, which caused image recognition software to see objects that are not there. When the patterns were placed next to another object, such as a banana, many neural networks saw a toaster instead.
The researchers said: "These adversarial patches can be printed, added to any scene, photographed, and presented to image classifiers.
"Even when the patches are small, they cause the classifiers to ignore the other items in the scene and report a chosen target class."
The researchers found that the trick works when the computer-generated pattern is more "salient" to the image recognition software than the real objects around it.
"While images may contain several items, only one target label is considered true, and thus the network must learn to detect the most 'salient' item in the frame," they wrote in their research paper.