In November 2017, Transport Minister Khaw Boon Wan unveiled plans to roll out self-driving buses in Punggol, Tengah and the Jurong Innovation District by 2022. That the news came just a week after an MRT collision was not lost on those who read it.
As always, netizens were quick to poke fun at Singapore’s seemingly lofty plans to have autonomous vehicles on the road, when its trains – with drivers – were running into each other on dedicated and segregated tracks.
It is an understandable reaction. Why run when you have not learnt how to walk properly?
And beneath that veneer of mockery lies a real worry about the chaos that driverless vehicles could give rise to. Is this reaction justified? Yes and no. No, because autonomous road vehicles are far more sophisticated than rail systems, which have remained largely unchanged for over a century.
A driverless car carries far more sensors – cameras, radar, lidar and GPS – all tied to powerful processors worthy of a deep space expedition. Understandably so, since an autonomous vehicle has to navigate an almost infinite combination of situations and circumstances.
These processors then control the vehicle’s throttle, steering and brakes. They can also be programmed to take on discretionary functions such as activating the horn, foglamps or hazard lights.
In other words, an autonomous vehicle can do almost everything a human driver does. Well, except lose its temper or make rude gestures, that is.
In fact, Renault recently announced that it has developed an autonomous control system that can handle challenging driving scenarios as competently as professional test drivers.
Having been benchmarked against these drivers, the technology will find its way into at least 15 Renault models with different levels of autonomy that are slated to be in showrooms by 2022. (Yes, the same year that Singapore will have autonomous shuttles gliding around in three towns.)
But going by the semi-autonomous driving aids we have today, there is some cause for scepticism.
For instance, lane-keeping systems will sometimes intervene when no intervention is needed, and fail to intervene when they should. Adaptive cruise control is slow to react to the front vehicle's changes in speed, often leaving wide gaps which are invariably filled by other cars. Such systems also behave erratically around bends.
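Why does a sluggish adaptive cruise controller leave those wide gaps? A minimal sketch makes it plain: if the system corrects its speed towards the lead vehicle's by only a small fraction each second, the gap balloons long before it recovers. The function name, gains and distances below are purely illustrative, not taken from any real system.

```python
# Illustrative sketch of a slow-reacting adaptive cruise controller.
# Gains and distances are made-up numbers chosen to show the effect.

def acc_step(own_speed, lead_speed, gap, target_gap=30.0,
             speed_gain=0.1, gap_gain=0.05, dt=1.0):
    """Advance one time step; speeds in m/s, gap in metres."""
    # Nudge speed towards the lead vehicle's speed and the target gap.
    new_speed = (own_speed
                 + speed_gain * (lead_speed - own_speed)
                 + gap_gain * (gap - target_gap))
    # The gap grows by however much the lead vehicle outpaces us.
    new_gap = gap + (lead_speed - new_speed) * dt
    return new_speed, new_gap

# Lead car accelerates from 15 m/s to 25 m/s; the follower lags behind.
own, gap = 15.0, 30.0
for _ in range(6):
    own, gap = acc_step(own, 25.0, gap)
print(round(own, 1), round(gap, 1))  # gap has ballooned well past 30 m
```

By the time the controller catches up, the opening is wide enough for other cars to cut in – exactly the behaviour drivers complain about.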
These are straightforward tasks. What more of tasks where subtle nuances are involved, such as reading a driver's intention to filter from the way he slows down and veers ever so slightly?
The proof of the pudding is always in the eating, though. And a taste of things to come may well have been a race between man and machine.
It happened in early November, when a humanoid rider developed by Yamaha and American robotics specialist SRI raced against multiple world champion Valentino Rossi. The robot lost the 3.2km race by 30 seconds.
The thing is, not many non-professional human riders would have been able to match the robot in the same race.
But most train drivers would be able to match a train's self-driving mode, because a railway is an isolated environment. There is no cross traffic, no overtaking and no merging to contend with. The driver merely has to keep the train ahead in sight and maintain a safe distance. A problem occurs only when the driver assumes the machine is in control just as a glitch strikes.
That was what happened on November 15, 2017, when a train “lost sight” of another in front and ran into it. Although the driver was at the helm, he could not react in time because he was not expecting his train to “lose sight” of the one in front.
The incident was traced to two faults in a new signalling system being installed by French engineering group Thales. The first fault was a failure of onboard computers, which compromised the train's first "bubble" (the protective zone that makes it visible to other trains). When this happens, a second "bubble" is activated. But that too was incapacitated when the train passed a track point which had not been properly modified.
From an engineering point of view, that should never happen. The second “bubble” should stay in place until the fault which rendered the first one useless is fixed.
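The intended failover described above can be sketched in a few lines. This is a hypothetical illustration of the engineering principle, not the actual Thales signalling code, and the class and method names are invented for the purpose.

```python
# Hypothetical sketch of the intended "bubble" failover: the train
# stays visible as long as either bubble holds, and the backup must
# persist until the primary is repaired.

class TrainVisibility:
    def __init__(self):
        self.primary_ok = True      # first "bubble"
        self.backup_active = False  # second "bubble"

    def primary_fails(self):
        # Losing the primary bubble immediately activates the backup.
        self.primary_ok = False
        self.backup_active = True

    def primary_repaired(self):
        self.primary_ok = True
        self.backup_active = False  # backup stands down only now

    def is_visible(self):
        return self.primary_ok or self.backup_active

train = TrainVisibility()
train.primary_fails()
print(train.is_visible())  # True: the backup keeps the train visible
```

On November 15, the improperly modified track point knocked out that backup as well, leaving the train invisible – the situation the design is meant to rule out.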
Will an autonomous car be caught in a similar situation? Never say never, but chances are that when one sensor or subsystem fails, the others should still be able to pull the vehicle to the side of the road and bring it to a slow stop. At least, that is the theory.
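That theory of graceful degradation can be expressed as a simple decision rule. The sensor names and thresholds below are assumptions for illustration; real vehicles use far more elaborate redundancy logic.

```python
# Illustrative "fail to the roadside" logic: when one sensor drops out,
# the remaining ones are used to pull over and stop, rather than
# carrying on as if nothing happened. Sensor names are assumed.

def plan_action(sensors_ok):
    """sensors_ok: dict mapping sensor name -> health flag."""
    if all(sensors_ok.values()):
        return "continue"
    # Enough redundancy left to find the road edge: limp to a stop.
    if sum(sensors_ok.values()) >= 2:
        return "pull_over_and_stop"
    # Too little left to steer safely: brake to a stop in lane.
    return "emergency_stop"

print(plan_action({"camera": True, "radar": True, "lidar": True}))   # continue
print(plan_action({"camera": True, "radar": True, "lidar": False}))  # pull_over_and_stop
print(plan_action({"camera": False, "radar": False, "lidar": True})) # emergency_stop
```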
Before Singapore starts those driverless shuttles, there must not be a shadow of a doubt that they can fail safely. That, after all, is what the term "fail-safe" means.