Chips off the old block: Computers are taking design cues from human brains
We expect a lot from our computers these days. They should talk to us, recognise everything from faces to flowers, and maybe soon do the driving. All this artificial intelligence requires an enormous amount of computing power, stretching the limits of even the most modern machines.
After years of stagnation, the computer is evolving again, and this behind-the-scenes migration to a new kind of machine will have broad and lasting implications. It will allow work on artificially intelligent systems to accelerate, so the dream of machines that can navigate the physical world by themselves can one day come true.
This migration could also diminish the power of Intel, the long-time giant of chip design and manufacturing, and fundamentally remake the $335 billion a year semiconductor industry that sits at the heart of all things tech, from the data centres that drive the internet to your iPhone to the virtual reality headsets and flying drones of tomorrow.
“This is an enormous change,” said John Hennessy, the former Stanford University president who wrote an authoritative book on computer design in the mid-1990s and is now a member of the board at Alphabet, Google’s parent company. “The existing approach is out of steam, and people are trying to re-architect the system.”
The existing approach has had a pretty nice run. For about half a century, computer makers have built systems around a single, do-it-all chip — the central processing unit — from a company like Intel, one of the world’s biggest semiconductor makers. That’s what you’ll find in the middle of your own laptop computer or smartphone.
Now, computer engineers are fashioning more complex systems. Rather than funnelling all tasks through one beefy chip made by Intel, newer machines are dividing work into tiny pieces and spreading them among vast farms of simpler, specialised chips that consume less power.
Google reached this point out of necessity. For years, the company had operated the world’s largest computer network — an empire of data centres and cables that stretched from California to Finland to Singapore. But for one Google researcher, it was much too small.
In 2011, Jeff Dean, one of the company’s most celebrated engineers, led a research team that explored the idea of neural networks — essentially computer algorithms that can learn tasks on their own. They could be useful for a number of things, like recognising the words spoken into smartphones or the faces in a photograph.
In a matter of months, Dean and his team built a service that could recognise spoken words far more accurately than Google’s existing service. But there was a catch: If each of the world’s more than one billion phones running Google’s Android software used the new service for just three minutes a day, Dean realised, Google would have to double its data centre capacity in order to support it.
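Dean’s back-of-envelope arithmetic can be sketched directly from the two figures quoted above — the phone count and the per-phone usage are the only inputs the article states, so the sketch stops at the volume of audio; the capacity conclusion was Dean’s own.

```python
# Back-of-envelope arithmetic from the figures quoted above:
# "more than one billion" Android phones (taken here as a lower
# bound of exactly one billion), three minutes of speech
# recognition per phone per day.
phones = 1_000_000_000
minutes_per_phone_per_day = 3

daily_minutes = phones * minutes_per_phone_per_day
years_of_audio_per_day = daily_minutes / (60 * 24 * 365)

print(f"{daily_minutes:,} phone-minutes of audio per day")
print(f"about {years_of_audio_per_day:,.0f} years of audio, every day")
```

Three billion phone-minutes a day works out to roughly 5,700 years of audio to transcribe daily, which makes it easier to see why a single feature could demand a doubling of data centre capacity.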
But what began inside data centres is starting to shift other parts of the tech landscape. Over the next few years, companies like Google, Apple and Samsung will build phones with specialised AI chips. Microsoft is designing such a chip specifically for an augmented-reality headset. And everyone from Google to Toyota is building autonomous cars that will need similar chips.
©2017 The New York Times News Service