This topic is only going to grow, as might be expected. I read a newspaper report (i newspaper, United Kingdom, 18 July 2016) of the first known fatal accident involving an autonomous car: a Tesla whose software failed to distinguish a vehicle ahead against a background of bright sky. The Tesla collided with that vehicle, and its passenger died as a result of the impact.
An interesting discussion is unfolding in the media following the recent crash between the Google car and a bus in the USA. Two aspects of the discussion that I have seen (on BBC.com) are of particular interest:

1. The statement by the US transport secretary, Anthony Foxx, that the crash was not surprising and that the technology should not be compared with perfection.
2. The legal problem of determining liability in such a case: should the passenger or the software manufacturer be liable?
Society in general has a perception that in a perfect world there would be no crashes, no accidents in industry or elsewhere. On this view, things going wrong are an anomaly and must be due to the actions of a culpable individual or organisation. This gives rise to the concept of liability and the whole legal process that follows in the wake of unwanted damage, like the wake of a boat navigating a polluted river. The view is given passionate and often public expression by the families of victims, who seek someone to blame, a punishment to fit the crime, and some form of 'closure'.
Just maybe, the essentially flawed nature of the perfection viewpoint will be exposed by the obvious absurdity of finding the passenger liable, and by the difficulty the lawyers will have (one can only hope) in pinning the blame on the software and hardware developers. They might just as well blame the developers of the braking systems in cars of our current technology: if a car could stop instantly, how many crashes would be avoided?
Just maybe the idea of the blameless car accident will surface, along with no-fault insurance pay-outs. That is something I'd like to see!