It’s terrifying to discover that your personal computer has been compromised by a hacker. You instantly begin a damage assessment. What personal information was on the machine? How long has spyware been monitoring you? These questions race through your mind. But what happens when the compromised computer is your car?

There are surely security measures built into every autonomous vehicle, but the hardest vulnerability to control may be one that hackers can exploit without even knowing your car exists. Because autonomous vehicles operate in response to inputs from the world around them, the easiest way to hack a self-driving car may be simply to change its environment.

A group of researchers led by Ivan Evtimov of the University of Washington has demonstrated a technique for fooling self-driving cars by slightly altering the road signs they rely upon. The alterations trick a vehicle's vision system into misreading them as entirely different signs.

The study, titled "Robust Physical-World Attacks on Machine Learning Models" and posted on arXiv.org, presents a proof of concept for fooling autonomous cars. Using subtle changes such as fake weathering or specially designed graffiti stickers, the research team fooled the road-sign classification algorithm they were testing into reading stop signs as 45 mph speed limit signs.
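The researchers' own attack optimizes where to place perturbations so they survive real-world changes in distance and viewing angle. A much simpler relative of the same idea is the classic fast gradient sign method (FGSM), which nudges every pixel of an image a tiny amount in whichever direction most confuses the classifier. The sketch below is purely illustrative and is not the method from the paper: it uses a toy, untrained stand-in classifier and made-up class labels, just to show how small, targeted pixel changes can flip a model's prediction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in classifier over 32x32 RGB "sign" images. Illustrative only;
# the Evtimov et al. study attacked a real road-sign classifier with a more
# sophisticated, physically robust optimization.
class SignClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(start_dim=1))

def fgsm_attack(model: nn.Module, image: torch.Tensor,
                true_label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: shift every pixel by +/-epsilon in the
    direction that most increases the model's loss on the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Apply the tiny perturbation, keeping pixel values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = SignClassifier()
    model.eval()
    image = torch.rand(1, 3, 32, 32)   # placeholder "stop sign" image
    label = torch.tensor([0])          # hypothetical class index for "stop"
    adversarial = fgsm_attack(model, image, label)
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Against a trained classifier, a perturbation this small is often imperceptible to a human yet sufficient to change the predicted class; the contribution of the Washington team was showing that such perturbations can be made physical and robust enough to work from a moving car's camera.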

The danger of such alterations is obvious: a car that accelerates where it should stop is a recipe for disaster. The question for investors is whether scare stories like these will deter consumers from adopting autonomous vehicles, or whether tech companies will innovate their way around such obstacles.
