Posted: 28/03/2018
Driverless cars are thought likely to be safer than cars driven by human beings because they will not be subject to human failings: they will never be tired, drunk, distracted, speeding or lost. That does not mean, however, that they will not be subject to non-human failings, and even the most optimistic engineer would probably accept that accidents will occur.
On Sunday 18 March 2018 came the tragic news that a pedestrian in Tempe, Arizona, had been killed by a Volvo which hit her at about 40mph. Although the car was occupied by a human being, described by its owner, Uber, as a “safety driver”, the car was in fully autonomous mode at the time of the accident. It is thought that about 400 autonomous vehicles have been licensed to operate in California, and about 60 collisions have been reported over the last four years. This is believed, however, to be the first fatality involving a pedestrian, and quite apart from the need to find out what went wrong, this tragic accident must focus the minds of the automotive industry, regulators and insurers on the legal and policy issues which must be resolved before these vehicles come into widespread use.
In February 2015, the UK Government released a summary report and action plan entitled “The Pathway to Driverless Cars”. The report concluded that the existing legal and regulatory framework was not a barrier to the testing of autonomous vehicles on public roads. A Code of Practice followed in July 2015, providing guidance to anyone wishing to test automated vehicle technologies on public roads in the UK. Testing of driverless cars on public roads is possible in the UK today provided that a “driver” is present and takes responsibility for the safe operation of the vehicle, and that the vehicle is used within existing road traffic law. In other words, these cars can only operate in the UK if a human safety operator is present, just as in Tempe.
What that means is that the “driver” must have an appropriate driving licence; the vehicle must be registered with the Driver and Vehicle Licensing Agency; it must have type approval based on compliance with safety and environmental standards; it must be roadworthy; and the driver must be able to see the road and have proper control of the vehicle. All vehicles must also have proper third party insurance cover. Therefore, whilst highly automated cars might be able to appear on the roads as things stand now, fully autonomous vehicles that allow no possibility of human intervention would not.
The law always lags behind technology, but it is clear that before matters go much further, and quite apart from any construction and use issues, further legislation will be needed: the whole concept of a driverless car runs counter to the policies on which road traffic legislation is currently based.
To state the blindingly obvious, a car has no legal personality, so it cannot be responsible for what it does even if it is “autonomous”. Responsibility rests with owners and drivers, just as it rests with the owner or user of any other object which could do harm, such as a knife or a brick. As matters stand, responsibility, both civil and criminal, rests with the person who is in control of a vehicle at the time of an accident, and the question is whether he or she has been negligent or breached the criminal law whilst in control.
This might be a simple matter of fact, where, for example, speed limits are exceeded, or it might be measured against the standards set by the Highway Code which, while not mandatory, is used as a guide against which negligent driving can be assessed. If an accident is caused because a vehicle is faulty, the driver may escape liability if he can show that the accident resulted from a design or manufacturing fault and not, for example, from poor maintenance, or from some event which made evasive action impossible, such as being attacked by a swarm of bees. Generally, however, a defence of “the car ran away with me, Your Honour” is rarely successful.
But under the present law, what would the implications be if the car was in fact completely autonomous and not under the control of any human being at the time of the accident?
Currently, civil liability turns on whether the driver failed to control the vehicle adequately. If the car is controlling itself, and the occupant cannot in any way override the automatic systems, then he is not in control of the car in the legal sense at all: he is not merely not driving negligently, he is not driving at all, and can have no direct liability to injured victims.
The owner’s insurers would not therefore be liable to compensate anyone, because it is the activity of driving, turning on the issue of control at the relevant moment, which is the insured activity for the purposes of third party insurance. The owner might, however, conceivably be negligent if it could be shown that the cause of the accident was a failure to maintain the car properly.
The criminal law is even more difficult. Driving offences also turn on the question of control, which is why requests for information about who was driving are sent out when speed cameras pick up speeding offences. In relation to more serious offences, such as reckless driving or causing death by dangerous driving, there must be an element of behavioural choice by the driver in the way in which he controls the car. But if the occupant cannot control the car in any way at all, he cannot commit the offence, whatever happens.
Failure to maintain a vehicle in a safe condition might attract penalties under the Construction and Use Regulations but at present the penalties for these offences do not equate with those imposed for causing death by dangerous driving. If the manufacturer commits a criminal offence at all, it would arise under consumer regulations relating to the supply of safe vehicles.
One has only to think of the range and number of possible driving offences to realise what practical problems might arise in prosecution.
There would, therefore, be great difficulty in applying the present law to fully autonomous cars without artificial distortion. In the 2017 Autumn Budget, the Government stated that it wanted to see fully self-driving cars, without a human operator, on UK roads by 2021. If that is to be possible within such a relatively short timeframe, new legislation or extensive changes to the current regulatory framework will have to be made to define what amounts to safety in a fully autonomous car. The consequences for road traffic offences and insurance requirements are also likely to require new or amended legislation; if the issues which arise are left to be resolved in the courts, there will be many years of litigation before the position is clear. Regulators themselves may be at risk of civil legal action if they permit vehicles on the roads which show patterns of mechanical failure. The future of road building would also need to be considered, so that infrastructure can adapt to support self-driving cars.
And it won’t just be cars. What happens if my robot runs amok and hits your robot or child? Or takes something from a shop without paying? Or if a drone drops a parcel on my roof instead of leaving it at the front door...