Adversarial Traffic Signs

A couple of days ago I wrote about embedding barcodes into traffic signs to help self-driving cars. Several commenters pointed out a recent academic paper in which researchers (Evtimov et al.) fooled a computer vision system into thinking that a stop sign was a 45 mph sign, using just a few pieces of tape.

This builds on a known property of neural networks: they can be fooled in surprising ways by small, carefully crafted changes to their input. This is called an “adversarial” attack.

Here is an example Justin Johnson gave in the fantastic Stanford CS231n class on convolutional neural networks:

[Image: a correctly classified photo of a goldfish, plus an imperceptibly small perturbation that changes the network’s prediction]
So it’s no shocker that the computer vision systems for cars, which rely largely on CNNs, can be fooled.

But notice that it’s not obvious how to apply Justin Johnson’s examples above to an actual printed photo of a goldfish in the real world. The examples above only really work if you have a digital photo of a goldfish.
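To make the digital version of the attack concrete, here is a minimal sketch of the idea behind the fast gradient sign method, using a made-up three-“pixel” linear classifier instead of a real CNN. All of the weights and values below are hypothetical toy numbers; the point is only that nudging each pixel slightly in the direction that most hurts the correct class can flip the prediction.

```python
import numpy as np

# Hypothetical toy classifier: two classes, scores = W @ x,
# predicted class = argmax of the scores.
W = np.array([[1.0, -0.5,  0.3],    # weights for class 0 (the true class)
              [0.2,  0.4, -0.1]])   # weights for class 1

x = np.array([1.0, 0.0, 1.0])       # a "clean image" of three pixels

def predict(v):
    return int(np.argmax(W @ v))    # class with the highest score

# The clean image is classified correctly as class 0 (scores 1.3 vs 0.1).
# The margin score_0 - score_1 has gradient W[0] - W[1] with respect to x,
# so stepping each pixel *against* the sign of that gradient shrinks the
# margin as fast as possible for a given per-pixel budget eps.
eps = 0.8                           # exaggerated for a 3-pixel toy example
x_adv = x - eps * np.sign(W[0] - W[1])

print(predict(x), predict(x_adv))   # prints: 0 1 -- the prediction flips
```

With only three pixels the perturbation budget has to be large; on a real image with hundreds of thousands of pixels, the same trick can flip a deep network’s prediction with per-pixel changes far too small for a human to notice.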

The breakthrough of the Evtimov et al. paper is that they developed an attack algorithm, which they call Robust Physical Perturbations, that allows them to apply this attack to signs in the real world.

So now we are heading down the road of fooling cars into blowing through stop signs. Is the end nigh?

I’m skeptical.

Hackers hardly need to wait until self-driving cars are on the road before they mess with stop signs. It’s easy enough to cause real carnage today just by removing a stop sign. Indeed, this happens already, and the people who do it get convicted of manslaughter. (Though note that that particular case was overturned on appeal because it wasn’t clear whether the defendants had removed the precise stop sign in question or a different one.)

I don’t see too many hackers messing with street signs, though, presumably because the result is both fleeting and unpredictable, and the cost (jail time) is high.

In fact, self-driving cars seem even less likely than human drivers to be fooled by tampered stop signs. Self-driving cars are likely to carry detailed maps and redundant sensors that can cross-check, and override, whatever the car’s camera sees.

It’s possible this paper will lead to further breakthroughs in adversarial attacks that cause bigger problems, but I don’t think this advance by itself is too worrisome.
