Friday, April 30, 2010
Nvidia: Moore's Law is Dead


Since we have reached the limit of what is possible with one or more traditional CPUs, the computing industry needs to take the leap into parallel processing, says Bill Dally, chief scientist and senior vice president of research at NVIDIA.

Forty-five years ago this month, Intel co-founder Gordon Moore predicted that the number of transistors on an integrated circuit would double each year (a pace he later revised to every two years). This laid the groundwork for another prediction: that doubling the number of transistors would also double the performance of CPUs every 18 months.

This bold prediction, known as Moore's Law, long held true. But we have reached the limit of what is possible with one or more traditional CPUs. The computing industry - and everyone who relies on it for continued improvements in productivity - needs to take the leap into parallel processing. The CPU scaling predicted by Moore's Law is now dead, according to Dally.

Moore's paper also contained another prediction that has received far less attention over the years. He projected that the amount of energy consumed by each unit of computing would decrease as the number of transistors increased. This enabled computing performance to scale up while the electrical power consumed remained constant. This power scaling, in addition to transistor scaling, is needed to scale CPU performance.
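
The power scaling Moore described, later formalized as Dennard scaling, can be illustrated with a back-of-the-envelope model. The Python sketch below is not from Dally's article; it simply models dynamic power per transistor as C x V^2 x f and shows why shrinking dimensions and voltage by the same factor historically kept power per unit area flat while delivering more, faster transistors.

# Illustrative sketch (an assumption for illustration, not from Dally's article):
# classic Dennard scaling, with dynamic power per transistor P = C * V^2 * f.

def scale_generation(s=0.7):
    """Scale linear dimensions and supply voltage by s (e.g. 0.7 per node)."""
    cap = s                       # capacitance shrinks with linear dimension
    volt = s                      # supply voltage scales down with dimensions
    freq = 1.0 / s                # gates switch faster, so clocks rise by 1/s
    density = 1.0 / s**2          # transistors per unit area grow by 1/s^2
    power_per_transistor = cap * volt**2 * freq      # works out to s^2
    power_per_area = power_per_transistor * density  # works out to 1.0
    return power_per_area, density * freq

power_density, relative_throughput = scale_generation()
print(round(power_density, 2))        # 1.0: power per unit area stays flat
print(round(relative_throughput, 2))  # ~2.92x more switching per unit area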

"However, this power scaling has ended. And as a result, the CPU scaling predicted by Moore's Law is now dead. CPU performance no longer doubles every 18 months. And that poses a grave threat to the many industries that rely on the historic growth in computing performance," Dally added.

Dally believes there are specific needs that won't be met unless there is a fundamental change in our approach to computing, and he identifies parallel computing as the solution. Parallel computing can resurrect Moore's Law and provide a platform for future economic growth and commercial innovation, Dally says.

In parallel computers, many processing cores, each optimized for efficiency rather than serial speed, work together to solve a problem.

"A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance," Dally says. "Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance--at a tremendous expense in energy," he adds.

Dally also underlined the importance of graphics processing units, which enable continued scaling of computing performance in today's energy-constrained environment.

"Every three years we can increase the number of transistors (and cores) by a factor of four. By running each core slightly slower, and hence more efficiently, we can more than triple performance at the same total power. This approach returns us to near historical scaling of computing performance," he says.

To continue scaling computer performance, Dally argues, it is essential to build parallel machines using cores optimized for energy efficiency, not serial performance.

"Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work. This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance," Dallly added.

"Parallel computing is the only way to maintain the growth in computing performance that has transformed industries, economies, and human welfare throughout the world. The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs," said Dally.

Forbes.com has published Bill Dally's complete article.

