
Appeared on: Friday, April 30, 2010
Nvidia: Moore's Law is Dead

Since we have reached the limit of what is possible with one or more traditional CPUs, the computing industry needs to take the leap into parallel processing, says Bill Dally, chief scientist and senior vice president of research at NVIDIA.

Forty-five years ago this month, Intel co-founder Gordon Moore predicted that the number of transistors on an integrated circuit would double each year (later revised to every 18 months). This laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months.

This bold prediction, known as Moore's Law, long held true. But we have reached the limit of what is possible with one or more traditional CPUs. The computing industry - and everyone who relies on it for continued improvements in productivity - needs to take the leap into parallel processing. The CPU scaling predicted by Moore's Law is now dead, according to Dally.

Moore's paper also contained another prediction that has received far less attention over the years. He projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance.
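A back-of-the-envelope sketch of the scaling rules behind this claim may help (the model is a standard textbook one, not taken from Moore's paper or Dally's article): under classic "Dennard" scaling, shrinking a transistor's linear dimensions by a factor k also shrinks its capacitance and operating voltage by roughly k while raising its switching frequency by 1/k, so the power drawn per unit of silicon stays flat even as transistor count and clock speed climb.

```python
# Sketch of classic (Dennard) power scaling; an illustrative model,
# not code from the article. Assumed: dynamic power per transistor
# is P = C * V**2 * f, and a process shrink scales linear dimensions
# by k < 1, so C and V scale by k while frequency scales by 1/k.
def scaled_power_density(k):
    capacitance = k                     # C shrinks with linear dimension
    voltage = k                         # V shrinks with linear dimension
    frequency = 1.0 / k                 # switching speed rises as 1/k
    power_per_transistor = capacitance * voltage**2 * frequency
    transistors_per_area = 1.0 / k**2   # density grows as 1/k^2
    return power_per_transistor * transistors_per_area

# Power per unit area stays constant at every node, even as density
# and frequency climb; this is the free lunch whose end Dally
# describes below.
for k in (1.0, 0.7, 0.5, 0.35):
    print(f"k = {k}: power density = {scaled_power_density(k):.2f}")
```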

"However, this power scaling has ended. And as a result, the CPU scaling predicted by Moore's Law is now dead. CPU performance no longer doubles every 18 months. And that poses a grave threat to the many industries that rely on the historic growth in computing performance," Dally added.

Dally believes that there are specific needs that won't be met unless there is a fundamental change in our approach to computing, and he identifies parallel computing as the solution. Parallel computing can resurrect Moore's Law and provide a platform for future economic growth and commercial innovation, Dally says.

In parallel computers, many processing cores, each optimized for efficiency rather than serial speed, work together to solve a problem.

"A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance," Dally says. "Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance--at a tremendous expense in energy," he adds.

Nvidia's scientist also underlined the importance of graphics processing units, which enable continued scaling of computing performance in today's energy-constrained environment.

"Every three years we can increase the number of transistors (and cores) by a factor of four. By running each core slightly slower, and hence more efficiently, we can more than triple performance at the same total power. This approach returns us to near historical scaling of computing performance," he says.

To continue scaling computer performance, it is essential that we build parallel machines using cores optimized for energy efficiency, not serial performance.

"Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance," Dallly added.

"Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world. The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs," said Dally.

Forbes.com has published Bill Dally's complete article.

