Thursday, September 23, 2010
Intel Invests In Exa-scale Supercomputers


Intel has invested in collaborations with institutions that specialize in high performance computing with Exa-scale performance levels. Three Intel labs, all members of the Intel Labs Europe network, now exclusively focus on Exa-scale computing research, Intel says.

In the past year, Intel has launched three new research centers focused on different aspects of the same challenge: developing supercomputers with Exa-scale performance levels. That means a billion billion computations per second. To put that in context, if you had all ~6.9 billion people on earth scribbling out math problems at a rate of one per second, it would still take over four and a half years to calculate what an Exa-scale supercomputer could do in a single second. Exa-scale was the hot topic this week at the Intel European Research and Innovation Conference (EPIC), which was held in Braunschweig, Germany, September 21 & 22.
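The article's scale comparison is easy to sanity-check with a few lines of Python (the ~6.9 billion population count is the article's own circa-2010 estimate):

```python
# Sanity check: an exa-scale machine performs 10^18 operations per second,
# so one exa-scale second equals 10^18 operations.
exa_ops = 1e18            # operations in one exa-scale second
population = 6.9e9        # people on earth (article's ~2010 estimate)
rate = 1.0                # one hand-worked calculation per person per second

seconds_needed = exa_ops / (population * rate)
years_needed = seconds_needed / (365.25 * 24 * 3600)
print(round(years_needed, 1))  # roughly 4.6 years
```

Which confirms the "over four and a half years" figure quoted above.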

According to Prof. Thomas Lippert, director of the Jülich Supercomputing Center in Germany, these massive systems could arrive by the end of this decade.

Intel Sr. Fellow Steve Pawlowski, head of Central Architecture and Planning, predicted that demand for high performance computing will continue to rise, driven by computationally intensive tasks such as analyzing the human genome and the creation of climate models that can accurately predict weather patterns. But he emphasized that Exa-scale levels of performance can't be achieved with today's techniques, so new technologies must be developed. Pawlowski identified several major challenges facing Exa-scale researchers: energy efficiency, parallelization, reliability, memory, storage capacity and bandwidth. Moreover, he said that it is important that hardware and software be woven together with a unified programming model.

Meeting these challenges will require a modular, cluster-based design that is both scalable and resilient, according to Prof. Lippert. He noted that the JUROPA supercomputer at his center in Jülich, currently the 14th fastest computer in the world, consists of a cluster of about 15,000 processor cores. He predicted that future exa-scale systems could comprise as many as 10 million cores - a major challenge in terms of power consumption and data communication among all the cores.
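To put Lippert's projection in numbers, the jump from JUROPA to an exa-scale machine is a quick calculation (both core counts are the article's figures):

```python
juropa_cores = 15_000          # JUROPA cluster, per the article
exascale_cores = 10_000_000    # Lippert's projected exa-scale core count

# Every per-core cost (power draw, inter-core traffic) must be managed
# at roughly this multiple of today's scale.
scale_factor = exascale_cores / juropa_cores
print(round(scale_factor))  # about 667x more cores
```

A roughly 667-fold increase in core count is why power and communication, rather than raw compute, dominate the challenge list.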

To achieve all of this, Intel has invested in collaborations with institutions that specialize in high performance computing. Three Intel labs, all members of the Intel Labs Europe network, now exclusively focus on Exa-scale computing research. These include the EXACluster Laboratory in Jülich, Germany (which collaborates closely with Prof. Lippert's center), the Exascale Computing Research Center in Paris, France and the ExaScience Lab in Leuven, Belgium.

At the same time, researchers are developing technologies for the future many-core microprocessors that will one day be at the heart of these clusters.

Last December, Intel Labs demonstrated the latest concept vehicle to emerge from this program, the 48-core Single-chip Cloud Computer (SCC). At the time, Intel CTO Justin Rattner also announced that the experimental chip would be made available to dozens of researchers worldwide, and even highlighted an early example collaboration via a demo presented at Microsoft Research.

Since then Intel has been working to make good on this commitment, soliciting and reviewing over 200 research proposals from academic and industry researchers around the globe, engineering a development platform suitable for external distribution, and even building a small 'datacenter' of a few dozen systems that can be accessed remotely - a cloud-based option for research on an architecture that itself was designed as a microcosm of a cloud datacenter.

To this end, at the celebration of the 10th anniversary of the Intel R&D site in Braunschweig, Germany (whose researchers co-developed the SCC), Intel officially unveiled the Many-core Applications Research Community, or MARC for short. Under the new MARC program, the academic and industry researchers whose proposals were accepted will be able to use the SCC as a platform for next-generation software research. MARC will provide them with a new tool to solve challenges in parallel programming and application development that, hopefully, will in turn lead to dramatic new computing experiences for people and business in the future.
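The parallel-programming challenge MARC researchers face on the SCC is coordinating many cores that exchange data by explicit messages rather than shared memory. The SCC's actual research interfaces are not shown here; this is only a generic Python sketch of the message-passing pattern, using processes and pipes to stand in for cores and their links:

```python
from multiprocessing import Process, Pipe

def worker(conn, chunk):
    # Each "core" computes on its own private slice of the data and
    # sends its partial result back as an explicit message.
    conn.send(sum(chunk))
    conn.close()

if __name__ == "__main__":
    data = list(range(100))
    n_workers = 4
    # Split the data into disjoint slices, one per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]

    pipes, procs = [], []
    for chunk in chunks:
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(child_end, chunk))
        p.start()
        pipes.append(parent_end)
        procs.append(p)

    # Combine the partial results received from each worker.
    total = sum(conn.recv() for conn in pipes)
    for p in procs:
        p.join()
    print(total)  # 4950, the same answer a single core would compute
```

Scaling this pattern from four processes to millions of cores, while keeping communication and power in check, is exactly the kind of problem the MARC program is meant to explore.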

As of today, MARC consists of 51 research projects from 38 institutions worldwide. Aside from Microsoft Research, a few examples are the Karlsruhe Institute for Technology (KIT), the Technical University of Braunschweig, the University of Oxford, ETH Zurich, the Barcelona Supercomputing Center, the University of Edinburgh, the University of Texas, Purdue University, and the University of California San Diego.

Although MARC has been launched with an initial focus on the SCC (Single-chip Cloud Computer) concept vehicle, Intel hopes that the community itself proves to be as valuable as the chip. As such, Intel will explore sharing other hardware and software research platforms over time.

This research is part of an overarching effort to continue scaling processor capabilities while keeping power consumption low. With a wealth of data quickly accumulating across the internet - from tiny tweets to high-res video feeds, from customer data warehouses to medical imaging repositories - Intel will need these powerful parallel processors to sort and analyse this data flood in real time.


CDRINFO.COM 1998-2014 - All rights reserved