A team of researchers from prominent universities – including SUNY Buffalo, Iowa State, UNC Charlotte, and Purdue – was able to turn an autonomous vehicle (AV) running the open-source Apollo driving platform from Chinese web giant Baidu into a deadly weapon by tricking its multi-sensor fusion system, and suggests the attack could be applied to other self-driving cars.
https://xkcd.com/1958/
TL;DR: faking out a self-driving system is always going to be possible, and so is faking out humans. But doing so is basically attempted murder, which is why the existence of an exploit like this is not interesting or new. You could also cut the brake lines or rig a bomb to it.
People seem to hold computers to a higher standard than they hold other people performing the same task.
Because humans have more accountability. Also it has implications for military/police use of self-guided stuff.
I think human responses vary too much: could you follow a strategy that reliably makes 50% of human drivers crash? Probably. Could you follow a strategy that reliably makes 100% of autonomous vehicles crash? Almost certainly.
Or if it’s a Tesla, you could hack someone’s weather app and thus force them to drive in the rain.
You don’t even have to rig a bomb; a better analogy to the sensor spoofing would be to shine a sufficiently bright light in the driver’s eyes from the opposite side of the road. Things will go sideways real quick.
It’s not meant to be a perfect example. It’s a comparable principle. Subverting the self-driving system like that is more or less equivalent to any other means of attempting to kill someone with their car.
I don’t disagree; I’m simply trying to present a somewhat less extreme (and therefore, I think, more appealing) version of your argument.