Driverless cars: The safety conundrum


Before driverless cars can be released into our complex environment, they need to achieve exceptional levels of safety. How close are we to making this happen?

Automation is old hat. Almost every new car built features some form of non-human control: ABS; cruise control; lane-departure warning systems; even semi-automatic control. In the latter case, the car is capable of following traffic, accelerating and braking as well as keeping to its lane, up to a defined maximum speed.

The driver still remains entirely necessary, not only to keep a hand on the wheel but also to adapt to the changing environment as it unfolds.

Complete automation has long been studied and tested in the car industry. Some experimental vehicles are entirely capable of driving themselves with no human on board. The question is certainly not if, but when, this technology will pass into public hands.

Basic construction

To build a driverless vehicle several things are required: a control computer and algorithm (which must be shown to be safe and reliable); a sufficient number of (equally reliable) input sensors from which the car can 'read' its environment; and finally, the control actuators.
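The sense-decide-actuate structure described above can be sketched as a single control-loop iteration. The sensor fields, thresholds, and control policy below are invented purely for illustration, not drawn from any real vehicle:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    # Hypothetical fused view of the environment from the input sensors
    distance_to_obstacle_m: float
    lane_offset_m: float  # positive = drifted right of lane centre

def control_step(reading: SensorReading) -> dict:
    """One iteration of the sense-decide-actuate loop (toy policy)."""
    commands = {"brake": 0.0, "steer": 0.0}
    # Decide: brake progressively harder as an obstacle gets closer
    if reading.distance_to_obstacle_m < 30.0:
        commands["brake"] = min(
            1.0, 30.0 / max(reading.distance_to_obstacle_m, 1.0) - 1.0
        )
    # Decide: steer back towards the lane centre
    commands["steer"] = -0.5 * reading.lane_offset_m
    return commands  # commands are then passed to the actuators

print(control_step(SensorReading(distance_to_obstacle_m=15.0, lane_offset_m=0.2)))
```

The hard part, of course, is not the loop itself but proving that the 'decide' step behaves safely across every environment the car will meet.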

This bare-bones structure is not new. Completely automated, highly complex vehicles function routinely in other fields, e.g., autopilot computers in modern aircraft. The driverless car, however, draws the short straw of additional concerns raised by its uniquely hazardous zone of operation.

The complexity is obvious: pedestrians, other vehicles, temporary road-blocks, obscured traffic signs and lane markings, fog and snow conditions, to name a few. Then there is also the potential for lack of maintenance, incorrect maintenance or even 'hacking' of electronic control units. How do we overcome these difficulties? How do we prove the system can work and will behave as expected?

Insights from the aeroplane and railway industries

The aerospace industry, since 1914, has seen aircraft able to function with decreasing human intervention. A modern autopilot will control the glide path, speed and horizontal displacement of the plane in relation to the runway centre line, landing safely, even in harsh conditions.

While the ultimate responsibility for the safety of the flight rests always with the pilot-in-command, the human can afford to take on a more supervisory role, allowing the machine to operate.

Of course, this is tempered by the 'limit of authority' built into such an autopilot system. For example, a severe storm could cause damage resulting in erroneous, inconsistent air speed signals, in which case the autopilot would disengage, notifying the human accordingly.

This withdrawal initiates when, to control the plane, the system would need to exceed a pre-set limit of authority. It proceeds to enter a programmed 'safe state', opting out of the control-loop, demanding specially-trained human intervention.
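The disengagement behaviour described above can be sketched as a simple check performed each control cycle. The signal names, the disagreement threshold, and the authority limit here are assumptions for illustration only, not figures from any real autopilot:

```python
# Illustrative sketch of 'limit of authority' disengagement.
AUTHORITY_LIMIT_DEG = 25.0  # hypothetical maximum control demand

def autopilot_step(airspeed_signals: list, demanded_deflection_deg: float) -> str:
    """Return 'ENGAGED' or 'SAFE_STATE' for one control cycle."""
    # Cross-check redundant airspeed sensors; large disagreement means
    # the inputs can no longer be trusted (cf. storm damage above).
    if max(airspeed_signals) - min(airspeed_signals) > 15.0:
        return "SAFE_STATE"  # disengage and notify the human
    # Refuse to act beyond the pre-set limit of authority.
    if abs(demanded_deflection_deg) > AUTHORITY_LIMIT_DEG:
        return "SAFE_STATE"
    return "ENGAGED"

print(autopilot_step([250.0, 252.0, 251.0], 5.0))  # consistent signals
print(autopilot_step([250.0, 180.0, 251.0], 5.0))  # inconsistent airspeed
```

The essential design choice is that the system never attempts to 'guess' its way past untrustworthy inputs; it opts out and hands control to a trained human.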

A similar concept exists in the railway industry. If a completely automated train control system is no longer capable of driving a train, due to the loss of input control signals, for example, it can bring the train to a stop then subsequently shut down, entering a safe state.

Humans still required?

So even with advanced automated technology in place, the equivalent automobile example would still require an aptly qualified human in the driver's seat. Nor is there an obvious 'safe state' for disengagement or stopping: a car halted in the middle of a motorway presents a more hazardous scenario than either a plane mid-flight or a train brought to a stand on its tracks.

Interestingly, technology does exist that could be developed to answer this problem. Safety-critical real-time hypervisors could help manage the added complexity of driverless vehicle control while assuring safety. A monitoring function, installed in its own partition alongside the auto-drive control software itself, could shut the auto-drive down whenever a 'limit of authority' is crossed. Even after such an overriding disengagement, the system could still be running complex functions (e.g. image recognition) on a different partition.
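The partitioning idea can be modelled in a few lines: a monitor in one 'partition' forces the auto-drive partition into shutdown, while the other partitions continue untouched. This is a toy model of the concept only; real hypervisor partitions are enforced in hardware and software far below this level, and the names and limit used are invented:

```python
# Toy model of partitioned execution under a hypervisor (conceptual only).
class Partition:
    def __init__(self, name: str):
        self.name = name
        self.running = True

def monitor_step(auto_drive: Partition, steering_demand: float,
                 limit: float = 1.0) -> None:
    """Safety-monitor partition: disengage the auto-drive partition
    when its control demand crosses the limit of authority."""
    if abs(steering_demand) > limit and auto_drive.running:
        auto_drive.running = False  # forced disengagement

auto_drive = Partition("auto-drive")
vision = Partition("image-recognition")

monitor_step(auto_drive, steering_demand=1.4)  # demand exceeds the limit
print(auto_drive.running, vision.running)  # auto-drive stops; vision keeps running
```

The point of the design is isolation: the monitor can kill the auto-drive without disturbing the other partitions, because the hypervisor guarantees that a fault (or shutdown) in one partition cannot propagate to another.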

However, not only will such a system take a significant amount of research and development, there is also the small print to consider: the adoption of fully independent validation and safety assessments; full certification from independent authorities; and full adoption of appropriate development standards (such as ISO 26262), for example.

Plus, for general release, what kinds of safety concerns should the industry have in mind? If an incident occurs, who is responsible: the car 'driver', the manufacturer, or the certifying authority? Many questions remain, as yet, unanswered.

In summation, driverless cars are a certainty, but sufficiently safe, reliable driverless cars, soon? There's still a long road ahead.
