In October 2016, Tesla announced a significant change to its Advanced Driver Assistance System package. This is the combination of sensors and computing power that will enable Tesla to fulfil Elon Musk’s promise to drive “all the way from a parking lot in California to a parking lot in New York with no controls touched in the entire journey” by the end of 2017.
Amongst the many changes to the sensor package was a switch in the system’s brain. Previously the package was powered by a processor from Mobileye (recently acquired by Intel); it now sports an Nvidia Drive PX 2. Why?
It turns out that to be safe, self-driving cars need an extraordinary amount of data from sensor systems. And if it is to figure out what all those sensors are telling it, the car requires an unprecedented amount of processing. Once it knows what is going on in the environment, yet more processing is needed to help the car figure out what to do next.
The switch that Tesla made gives a clue to just how much processing. The Mobileye EyeQ3 was a significant chip: 42mm² in area (about a quarter the size of a modern Intel i7 processor), built on a manufacturing process with a 40nm transistor pitch.
The replacement chip from Nvidia is 610mm² in area and uses a more advanced manufacturing technique, packing transistors at a 16nm node. This smaller node means that the transistors are packed 2.5 times more tightly in each dimension than those in the EyeQ3. In short, the replacement Nvidia chip delivered roughly a 90x improvement over the Mobileye one.
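A back-of-the-envelope check shows where the ~90x figure can come from: more silicon area multiplied by higher transistor density. This is a sketch using only the numbers quoted above; real-world performance depends on architecture, clocks, and memory, not just transistor count.

```python
# Rough transistor-count comparison from the figures quoted above.
eyeq3_area_mm2 = 42    # Mobileye EyeQ3 die area
px2_area_mm2 = 610     # Nvidia Drive PX 2 die area
eyeq3_node_nm = 40
px2_node_nm = 16

area_ratio = px2_area_mm2 / eyeq3_area_mm2      # ~14.5x more silicon
linear_density = eyeq3_node_nm / px2_node_nm    # 2.5x tighter per dimension
areal_density = linear_density ** 2             # ~6.25x more transistors per mm²

transistor_ratio = area_ratio * areal_density   # ~90x more transistors overall
print(round(transistor_ratio, 1))               # ~90.8
```

Multiplying the two ratios lands almost exactly on the 90x claim, which suggests the comparison is about transistor budget rather than measured benchmark performance.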
Even by the standards of Moore’s Law, which implies an average 60% improvement in transistor density or performance every year, this was a significant jump. In fact, the switch represented the equivalent of a decade of Moore’s Law progress.
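The “decade” claim follows from compounding the 60% annual gain until it reaches 90x; a minimal sketch of that arithmetic, assuming the rates quoted above:

```python
import math

annual_gain = 1.6   # ~60% improvement per year, the Moore's Law pace quoted above
jump = 90           # the ~90x improvement from the chip switch

# Years n such that annual_gain ** n == jump
years = math.log(jump) / math.log(annual_gain)
print(round(years, 1))  # ~9.6, i.e. roughly a decade
```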
Continue reading here: https://medium.com/s/ai-and-the-future-of-computing/when-moores-law-met-ai-f572585da1b7
Subscribe to Exponential View here: http://www.exponentialview.co