Intel said it has started revenue shipments of Intel Optane DC persistent memory to select customers. This new data center memory is designed to pair with the next Intel Xeon Scalable processor, code-named Cascade Lake.
To date, the systems that deliver these insights have been hampered by technology and cost limitations in memory and storage. Powerful processors and AI algorithms must be fed a steady stream of data, but DRAM, while fast enough to deliver that data, becomes very expensive at large scale, so memory configurations tend to be kept only as large as the target service level requires.
When the data isn't in memory, the processors incur a latency penalty of multiple orders of magnitude to retrieve it from storage. Any data that needs to be stored permanently must also make the relatively slow trip out to the drives. There is a historical trade-off between the speed of DRAM and the capacity and permanence of disks. These are the gaps in the memory-storage hierarchy we've discussed before.
The new Intel Optane DC persistent memory product, based on the Intel 3D XPoint memory media technology, establishes a new tier in the memory-storage subsystem: Persistent Memory. It combines the speed of traditional memory with the capacity and native persistence of storage. Filling this gap means that performance-degrading workarounds to move data back and forth across it can be avoided, and applications are more likely to be able to quickly access the data they need, when they need it.
The software developer ecosystem has long wished for a persistent memory tier and has been getting ready for its introduction by modifying software to take full advantage of it. Moving to a workload-optimized system architecture, maximizing the working data available to an application, and reducing I/O operations against disks can deliver impressive results in many use cases.
These enhanced solutions have achieved 8x reductions in query wait times for one popular analytics tool, supported 4x as many virtual machines by quadrupling memory capacity, increased user and application throughput, and enabled entirely new use cases around consistency and replication. Native persistence also means that in-memory database restart times can drop from minutes to seconds.
Intel announced the Optane DC Persistent Memory Developer Challenge. This competition will showcase the developer community's creativity and technical savvy with persistent memory. Awards will be based on originality, performance improvement from baseline, and most comprehensive use of persistent memory. Complete details will be available shortly.
Separately, at the Intel Data Centric Innovation Summit, Intel discussed how the company transports, processes, analyzes, and gathers timely, intelligent insights from all of this data. Many of Intel's customers find that the performance growth rate of certain workloads exceeds what can be efficiently delivered with microprocessors alone, or that these workloads need to operate at very high speed under power- and thermal-constrained conditions, including dense data centers or out on the edge. In those cases, Intel claims that FPGAs can step in to offer highly efficient, customizable acceleration that can be programmed and tuned to the specific characteristics of the workload.
Due to the growth of data and the need for acceleration, the FPGA market is projected to grow from $5B today to over $8B by 2022.
The growth and adoption of FPGA acceleration is taking place across all vertical market segments. In the cloud, Intel customers such as Yandex and Microsoft are using Intel FPGAs to accelerate AI, search, and their core infrastructure, which includes networking, storage, and security. Intel FPGAs are also being used to solve some of the world's biggest problems, such as land cover mapping within Microsoft's AI for Earth initiative.
In the enterprise, OEMs including Dell, Fujitsu, Quanta, and Inspur are adopting Intel FPGA accelerator solutions to augment their server architectures.