Thursday, September 23, 2010
Intel Invests In Exa-scale Supercomputers


Intel has invested in collaborations with institutions that specialize in high-performance computing at Exa-scale performance levels. Three Intel labs, all members of the Intel Labs Europe network, now focus exclusively on Exa-scale computing research, Intel says.

In the past year, Intel has launched three new research centers focused on different aspects of the same challenge: developing supercomputers with Exa-scale performance levels. That means a billion billion computations per second. To put that in context, if you had all ~6.9 billion people on earth scribbling out math problems at a rate of one per second, it would still take over four and a half years to calculate what an Exa-scale supercomputer could do in a single second. Exa-scale was the hot topic this week at the Intel European Research and Innovation Conference (EPIC), which was held in Braunschweig, Germany, September 21 & 22.
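That "everyone on Earth" comparison checks out. As a quick sanity check, a few lines of Python (purely illustrative, using the article's round figures of 10^18 operations per second and about 6.9 billion people) reproduce the arithmetic:

```python
# Sanity check on the Exa-scale comparison. Figures are the article's round
# numbers: 1e18 operations per second and ~6.9 billion people each doing one
# calculation per second.
EXA_OPS = 1e18                      # operations an Exa-scale machine performs in one second
PEOPLE = 6.9e9                      # approximate world population in 2010
SECONDS_PER_YEAR = 365 * 24 * 3600

seconds_needed = EXA_OPS / PEOPLE
years_needed = seconds_needed / SECONDS_PER_YEAR
print(f"{years_needed:.1f} years")  # ~4.6 years, i.e. "over four and a half years"
```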

According to Prof. Thomas Lippert, director of the Jülich Supercomputing Center in Germany, these massive systems could arrive by the end of this decade.

Intel Sr. Fellow Steve Pawlowski, head of Central Architecture and Planning, predicted that demand for high performance computing will continue to rise, driven by computationally intensive tasks such as analyzing the human genome and the creation of climate models that can accurately predict weather patterns. But he emphasized that Exa-scale levels of performance can't be achieved with today's techniques, so new technologies must be developed. Pawlowski identified several major challenges facing Exa-scale researchers: energy efficiency, parallelization, reliability, memory, storage capacity and bandwidth. Moreover, he said that it is important that hardware and software be woven together with a unified programming model.
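Parallelization tops that list for good reason. Amdahl's law (not cited in the article, but the standard back-of-envelope tool for this argument) shows that any serial fraction of a program caps the achievable speedup no matter how many cores are available, which is why the programming model matters as much as the hardware. A minimal sketch, with illustrative numbers only:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
# work that can run in parallel and n is the number of cores.
# The values below are illustrative, not Intel projections.

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.95, 0.99, 0.999):
    print(f"p={p}: speedup on 1,000,000 cores ~ {amdahl_speedup(p, 1_000_000):,.0f}x")
# Even code that is 99.9% parallel gains only about 1,000x on a million cores.
```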

Meeting these challenges will require a modular, cluster-based design that is both scalable and resilient, according to Prof. Lippert. He noted that the JUROPA supercomputer at his center in Jülich, currently the 14th fastest computer in the world, consists of a cluster of about 15,000 processor cores. He predicted that a future Exa-scale system could comprise as many as 10 million cores - a major challenge in terms of power consumption and data communication among all the cores.
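A rough back-of-envelope calculation shows the size of that jump (the per-core power figure below is an assumption added for illustration, not a number from Lippert or Intel):

```python
# Scaling from JUROPA's ~15,000 cores to a hypothetical 10-million-core system.
# WATTS_PER_CORE is an assumed average power budget per core, used only to
# illustrate why power consumption becomes a first-order problem at this scale.

JUROPA_CORES = 15_000
EXA_CORES = 10_000_000
WATTS_PER_CORE = 1.0

print(f"core count grows {EXA_CORES / JUROPA_CORES:.0f}x")                # ~667x
print(f"at {WATTS_PER_CORE} W/core the machine would draw "
      f"{EXA_CORES * WATTS_PER_CORE / 1e6:.0f} MW")                       # 10 MW
```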

To achieve all of this, Intel has invested in collaborations with institutions that specialize in high performance computing. Three Intel labs, all members of the Intel Labs Europe network, now exclusively focus on Exa-scale computing research. These include the EXACluster Laboratory in Jülich, Germany (which collaborates closely with Prof. Lippert's center), the Exascale Computing Research Center in Paris, France, and the ExaScience Lab in Leuven, Belgium.

At the same time, researchers are developing technologies for the future many-core microprocessors that will one day be at the heart of these clusters.

Last December, Intel Labs demonstrated the latest concept vehicle to emerge from this program, the 48-core Single-chip Cloud Computer (SCC). At the time, Intel CTO Justin Rattner also announced that the company would make this experimental chip available to dozens of researchers worldwide, and even highlighted an early example of such a collaboration via a demo presented at Microsoft Research.

Since then, Intel has been working to make good on this commitment, soliciting and reviewing over 200 research proposals from academic and industry researchers around the globe, engineering a development platform suitable for external distribution, and even building a small 'datacenter' of a few dozen systems that can be accessed remotely - a cloud-based option for research on an architecture that itself was designed as a microcosm of a cloud datacenter.

To this end, at the celebration of the 10th anniversary of the Intel R&D site in Braunschweig, Germany (whose researchers co-developed the SCC), Intel officially unveiled the Many-core Applications Research Community, or MARC for short. Under the new MARC program, the academic and industry researchers whose proposals were accepted will be able to use the SCC as a platform for next-generation software research. MARC will provide them with a new tool to solve challenges in parallel programming and application development that, hopefully, will in turn lead to dramatic new computing experiences for people and businesses in the future.
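To give a flavor of the parallel-programming style such research platforms target, here is a toy message-passing reduction written with Python's standard multiprocessing module. It only mimics the explicit-communication style encouraged by a non-cache-coherent many-core chip like the SCC; it is not the SCC's actual programming interface, and the worker/queue structure is an assumption made purely for illustration.

```python
# Toy message-passing reduction: each worker "core" sums its own slice of the
# data locally and sends the partial result back over a queue.
# Ordinary Python multiprocessing, used here only to illustrate the style;
# this is not the SCC programming interface.
from multiprocessing import Process, Queue

def worker(worker_id, chunk, results):
    results.put((worker_id, sum(chunk)))   # communicate only via messages

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunks = [data[i::n_workers] for i in range(n_workers)]

    results = Queue()
    procs = [Process(target=worker, args=(i, chunks[i], results))
             for i in range(n_workers)]
    for p in procs:
        p.start()

    total = sum(results.get()[1] for _ in procs)   # gather the partial sums
    for p in procs:
        p.join()

    print(total)   # 499999500000, the sum of 0..999999
```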

As of today, MARC consists of 51 research projects from 38 institutions worldwide. Aside from Microsoft Research, a few examples are the Karlsruhe Institute of Technology (KIT), the Technical University of Braunschweig, the University of Oxford, ETH Zurich, the Barcelona Supercomputing Center, the University of Edinburgh, the University of Texas, Purdue University, and the University of California San Diego.

Although MARC has been launched with an initial focus on the SCC (Single-chip Cloud Computer) concept vehicle, Intel hopes that the community itself proves to be as valuable as the chip. As such, Intel will explore sharing other hardware and software research platforms over time.

This research is part of an overarching effort to continue scaling processor capabilities while keeping power consumption low. With a wealth of data quickly accumulating across the internet, from tiny tweets to high-res video feeds, from customer data warehouses to medical imaging repositories, Intel will need these powerful parallel processors to sort and analyze this data flood in real time.

