
Mobileye ‘On Track’ for Autonomous Driving in 2016-2019: CEO

Paul Godsmark

This week Mobileye (MBLY) CEO Ziv Aviram made some bold statements regarding highly- and fully-autonomous driving during the company’s third quarter earnings call. One key comment concerned Tesla:

“The Tesla auto pilot feature is currently using a mono camera sensor for performing the most important understanding of the scene - the visual interpretation. Our multiple camera sensor configuration launches are planned to begin as early as next year.” 

This indicates that Tesla’s Autopilot system is operating with a single camera, and, despite the many scary videos of Tesla drivers using and abusing the system, it is performing extremely well.

Tesla CEO Elon Musk is reported to have ‘told analysts that the company is aware of many instances where Autopilot might have prevented an accident, but none where Autopilot actually caused an accident.’

To provide some context on the difference between Tesla’s Autopilot and Google’s self-driving car system, Brad Templeton wrote a piece explaining what each of the two systems is aiming for.

But based on additional Mobileye statements, its multi-camera approach could rapidly close this divide:

“We are on track with four launches of the front-sensing trifocal camera configuration to support highly autonomous driving. And we are on track with two launches of an eight-camera 360-degree awareness system designed to support fully autonomous driving. All of the above are planned for the 2016 to 2019 timeframe and will occur in parallel rather than one following the other.”

It can therefore be concluded that Mobileye has high ambitions for its technology on some very aggressive timelines, targeting both highly autonomous and fully autonomous driving in the 2016-2019 timeframe.

(The transcript of the Mobileye call can be found at Seeking Alpha, and a helpful dissection of much of what was said about autonomous driving can be found in a thread on the Tesla Motors Club forum.)

MACHINE VISION SYSTEMS

The various autonomous vehicle developers appear to have taken two different sensor-based approaches. On one side are those that base their sensing around LiDAR (e.g. Google, Bosch, Ford), and on the other are those that use vision-based systems (e.g. Daimler, Tesla, Ambarella/VisLab).

Until now, LiDAR-based systems have demonstrated the most promise, resulting in their widespread use in public demonstrations and other high-profile tests.

However, the combination of neural-network ‘deep learning’ with the machine vision systems being developed by companies such as Tesla, Mobileye, Nvidia and, reportedly, Apple also shows great promise.

It is worth noting that Google is no slouch in the machine vision arena itself, having won the ImageNet Large Scale Visual Recognition Challenge in 2014. It has also acquired some of the top talent in artificial intelligence, deep learning and robotics.

However, Google’s equivalent in China, Baidu, says its best image recognition systems now achieve a lower error rate than the average human. It should come as no surprise that Baidu is also developing highly autonomous vehicle technology, with plans to launch a car in the latter half of 2015.

Competition to develop fully autonomous vehicle systems continues to heat up, and numerous companies say they will have some form of system in public hands in the 2016-2020 timeframe. Despite enormous challenges, progress in the autonomous vehicle sector appears relentless.

Paul Godsmark is co-founder and CTO of CAVCOE, a provider of consulting services to public- and private-sector organizations whose operational and business models will be impacted by the arrival of automated vehicles.