Google self-driving car illustration

Google Cars Actually Cede Control to Humans

Burney Simpson

Google’s self-driving cars may be pretty safe, but a human must occasionally take control of the vehicle to avoid an accident or some other problem.

That’s the word from Chris Urmson, director of Google’s self-driving car project, who spoke today at the Transportation Research Board 95th Annual Meeting in Washington, D.C.

Urmson shared numbers on disengagements, the term used when a self-driving vehicle drops out of autonomous mode and hands control back to the human driver in the car, or when the driver takes control back from the vehicle.

(Find Google’s December 2015 report to California on its self-driving cars in the state here).

Google has had 341 safety disengagements since it began testing its cars on public roads in November 2014. Of those, 272 were triggered by Google software and 69 were triggered by the human driver.

Google drivers’ average response time in taking control was 0.84 seconds. In practice, the vehicle alerts the driver with lights and sounds when a disengagement is occurring, and the driver takes over in less than a second. These guys are faster than Steve McQueen in Bullitt.

Disengagements may sound scary, but they happen all the time, Google reports. They might occur when the car slows in unusual weather, the communication system suffers a momentary glitch, or the car accelerates in an odd way. (Such behavior may be common; a Google car recently received a ticket for going too slow.)

Hence, Google uses the term safety disengagements to describe the cases where giving up control is not routine.

Urmson emphasized that the 341 safety disengagements happened during the more than 424,000 miles the cars drove themselves over 13 months.

And things are getting better, he said: there was one safety disengagement every 5,318 miles in 2015’s fourth quarter, compared with one every 785 miles in 4Q 2014.
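For readers who want to check the math, here is a minimal sketch in Python that recomputes those rates from the figures quoted above; the numbers come from the article and the calculation is purely illustrative, not Google’s own methodology.

```python
# Illustrative arithmetic only, using the figures quoted in this article;
# not Google's own analysis.

total_miles = 424_000           # autonomous miles over the roughly 13-month period
safety_disengagements = 341     # total safety disengagements reported

# Overall rate: miles driven per safety disengagement
overall = total_miles / safety_disengagements
print(f"Overall: one safety disengagement every {overall:,.0f} miles")

# Quarter-over-quarter comparison quoted by Urmson
miles_q4_2014 = 785
miles_q4_2015 = 5_318
print(f"Improvement: {miles_q4_2015 / miles_q4_2014:.1f}x more miles between safety disengagements")
```

That works out to roughly one safety disengagement every 1,200 miles across the whole period, and about a 6.8x improvement between the two quarters.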

Still, a human did prevent at least 13 accidents, or contact with another car or object, according to Google’s analysis of the 69 times a driver took control of the car. In two of those, the contact would have been with a traffic cone.

Google reports that in the other 56 driver-control events, the self-driving car was behaving in a manner that could have caused an accident.

That’s right, Google used the term ‘behavior’ to describe the activity of its self-driving cars. Almost as if the massive tech firm thought the cars were human, sort of like a rambunctious kid.

Google has not started tracking Twilight Zone incidents involving its cars. Yet.