
Appeared on: Tuesday, April 2, 2013
Hybrid Memory Cube To Boost DRAM Bandwidth

More than 100 developer and adopter members of the Hybrid Memory Cube Consortium (HMCC) have reached consensus for the global standard that will deliver a three-dimensional DRAM technology, which is aimed at increasing performance for networking and high performance computing markets.

Micron, Samsung and SK hynix are leading the technology development efforts backed by the HMCC. The technology, called a Hybrid Memory Cube, stacks multiple volatile memory dies on top of a DRAM controller. The DRAM is connected to the controller by way of relatively new through-silicon via (TSV) technology, a method of passing an electrical connection vertically through a silicon wafer.

Developed in only 17 months, the final specification marks a turning point for designers in a wide range of segments - from networking and high-performance computing to industrial and beyond - to begin designing Hybrid Memory Cube (HMC) technology into future products.

"The consensus we have among major memory companies and many others in the industry will contribute significantly to the launch of this promising technology," said Jim Elliott, Vice President, Memory Planning and Product Marketing, Samsung Semiconductor, Inc. "As a result of the work of the HMCC, IT system designers and manufacturers will be able to get new green memory solutions that outperform other memory options offered today."

"This milestone marks the tearing down of the memory wall," said Robert Feurle, Micron's Vice President for DRAM Marketing. "The industry agreement is going to help drive the fastest possible adoption of HMC technology, resulting in what we believe will be radical improvements to computing systems and, ultimately, consumer applications."

"HMC is a very special offering currently on the radar," said JH Oh, Vice President, DRAM Product Planning and Enabling Group, SK hynix Inc. "HMC brings a new level of capability to memory that provides exponential performance and efficiency gains that will redefine the future of memory."

One of the primary challenges facing the industry - and a key motivation for forming the HMCC - is that the memory bandwidth required by high-performance computers and next-generation networking equipment has increased beyond what conventional memory architectures can efficiently provide. The term "memory wall" has been used to describe this challenge. Breaking through the memory wall requires an architecture such as HMC that can provide increased density and bandwidth with significantly lower power consumption.

The HMC standard focuses on alleviating an extremely challenging bandwidth bottleneck while optimizing the performance between processor and memory to drive high-bandwidth memory products scaled for a wide range of applications. The need for more efficient, high-bandwidth memory solutions has become particularly important for servers, high-performance computing, networking, cloud computing and consumer electronics.

The achieved specification provides an advanced, short-reach (SR) and ultra short-reach (USR) interconnection across physical layers (PHYs) for applications requiring tightly coupled or close-proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking, and test and measurement.

The first Hybrid Memory Cube specification will deliver 2GB and 4GB of capacity, providing aggregate bi-directional bandwidth of up to 160GBps, compared with roughly 11GBps of aggregate bandwidth for DDR3 and 18GBps to 20GBps for DDR4.
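
As a rough plausibility check on the quoted figures, the Python sketch below derives the 160GBps number from an assumed first-generation link configuration - four full-width links with 16 lanes per direction running at 10Gb/s per lane, values not stated in this article - and compares the result against the DDR3 and DDR4 numbers above.

    # Back-of-the-envelope check of the quoted HMC bandwidth figures.
    # The link/lane counts below are assumptions for illustration; the
    # article only quotes the aggregate numbers.

    LINKS = 4            # assumed number of full-width HMC links
    LANES_PER_DIR = 16   # assumed lanes per link, each direction
    LANE_RATE_GBPS = 10  # gigabits per second per lane (first-gen SR rate)

    # Aggregate bi-directional bandwidth in gigabytes per second.
    hmc_gbytes = LINKS * LANES_PER_DIR * 2 * LANE_RATE_GBPS / 8
    print(f"HMC aggregate:  {hmc_gbytes:.0f} GB/s")     # ~160 GB/s

    ddr3_gbytes = 11     # quoted DDR3 aggregate bandwidth
    ddr4_gbytes = 19     # midpoint of the quoted 18-20 GB/s DDR4 range

    print(f"vs DDR3: {hmc_gbytes / ddr3_gbytes:.0f}x")  # ~15x
    print(f"vs DDR4: {hmc_gbytes / ddr4_gbytes:.0f}x")  # ~8x

The roughly 15x ratio over DDR3 lines up with the consortium's own speed claim quoted further below.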

Today's DRAM chips have to drive circuit board traces (copper electrical connections) and the I/O pins of numerous other chips to push data down the bus at gigahertz speeds, which consumes a lot of energy. Hybrid Memory Cube technology reduces the work each DRAM die must do: it drives only the TSVs, which present much lower loads over shorter distances. A controller at the bottom of the DRAM stack is the only chip burdened with driving the circuit board traces and the processor's I/O pins.
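
To see why driving short TSVs is cheaper than driving board traces, a standard first-order estimate is the C*V^2 switching energy per bit transition. The sketch below uses purely illustrative capacitance and voltage values (assumptions, not figures from the article) to show the rough shape of the comparison.

    # Illustrative C*V^2 switching-energy comparison between driving a board
    # trace plus package pins and driving a short TSV inside a stack.
    # Capacitance and voltage values are rough assumptions, for illustration only.

    V_IO = 1.2             # assumed I/O voltage swing in volts
    C_BOARD_TRACE = 5e-12  # assumed load of a board trace plus pins, ~5 pF
    C_TSV = 50e-15         # assumed load of a short TSV, ~50 fF

    def energy_per_bit(capacitance_f: float, volts: float) -> float:
        """Approximate switching energy per bit transition, E = C * V^2."""
        return capacitance_f * volts ** 2

    board = energy_per_bit(C_BOARD_TRACE, V_IO)
    tsv = energy_per_bit(C_TSV, V_IO)
    print(f"board trace: {board * 1e12:.2f} pJ/bit")
    print(f"TSV:         {tsv * 1e12:.3f} pJ/bit")
    print(f"ratio:       ~{board / tsv:.0f}x less energy per DRAM I/O")

The exact numbers depend heavily on process and packaging; the point is simply that a TSV presents orders of magnitude less capacitance than a board-level connection.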

HMCC members claim that the interface is 15 times as fast as standard DRAMs while reducing power consumption by 70%.
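
Taken at face value, those two claims imply a large gain in bandwidth per watt; a minimal sketch of that arithmetic, using only the figures quoted above:

    # Implied bandwidth-per-watt gain from the consortium's two claims:
    # 15x the throughput of standard DRAM at 30% of the power (a 70% cut).

    speedup = 15.0          # claimed throughput relative to standard DRAM
    power_fraction = 0.30   # claimed power relative to standard DRAM (70% less)

    efficiency_gain = speedup / power_fraction
    print(f"Implied bandwidth/watt gain: {efficiency_gain:.0f}x")  # 50x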

The next goal for the consortium is to further advance the standard by increasing data rates from 10, 12.5 and 15 gigabits per second (Gb/s) up to 28 Gb/s for SR links, and from 10 Gb/s up to 15 Gb/s for USR links. The next-generation specification is projected to gain consortium agreement by the first quarter of 2014.
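
Under the same assumed 4-link, 16-lane configuration as the earlier sketch, the proposed per-lane rate increases would scale aggregate bandwidth roughly as follows (again an illustration based on assumed link counts, not figures from the specification):

    # How the proposed per-lane rate bumps would scale aggregate bandwidth,
    # reusing the assumed 4-link x 16-lane configuration from the sketch above.

    LINKS, LANES_PER_DIR = 4, 16   # assumed, not stated in the article

    def aggregate_gbytes(lane_rate_gbps: float) -> float:
        """Aggregate bi-directional bandwidth in GB/s for a given lane rate."""
        return LINKS * LANES_PER_DIR * 2 * lane_rate_gbps / 8

    for rate in (10, 12.5, 15, 28):   # current SR rates and the 28 Gb/s target
        print(f"{rate:>4} Gb/s per lane -> {aggregate_gbytes(rate):.0f} GB/s")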

The HMCC is a focused collaboration of OEMs, enablers and integrators who are cooperating to develop and implement an open interface standard for HMC. More than 100 technology companies from Asia, Europe and the U.S. have joined the effort, including Altera, ARM, Cray, Fujitsu, GLOBALFOUNDRIES, HP, IBM, Marvell, Micron Technology, National Instruments, Open-Silicon, Samsung, SK hynix, ST Microelectronics, Teradyne and Xilinx. Continued collaborations within the consortium could ultimately facilitate new uses in HPC, networking, energy, wireless communications, transportation, security and other semiconductor applications.

