Intel today disclosed several contributions to the Open Compute Project (OCP) as part of its commitment to a standards-based ecosystem of software and hardware developers.
Intel collaborates with the broad OCP community to meet the demands of hyperscale data center infrastructure. Jason Waxman, corporate vice president in Intel's Data Center Group, highlighted several of these contributions in his keynote this week at the 2017 OCP U.S. Summit.
At the summit, Intel announced that it has released to partners the second generation (Gen2) of Intel Rack Scale Design (Intel RSD) software, version 2.1. Intel RSD hardware management software provides an effective way to manage OCP systems. Gen2 gives operators of hyperscale data centers modern hardware manageability features and the ability to dynamically compose infrastructure from pools of PCIe storage to meet highly varied workload requirements.
Intel has released the code to partners and anticipates system availability beginning in the second quarter of this year.
This release represents a significant step toward the Intel RSD vision of fully disaggregated compute, storage and I/O resources. It starts with storage pooling. Intel RSD 2.1 provides high-performance NVMe SSD pooling over high-speed PCIe interconnects. Future releases of Intel RSD specifications will bring expanded capabilities for other resource pools, including FPGA accelerators. This first step with NVMe SSD drive pooling marks the beginning of several key Intel RSD benefits that will reach their full potential in the next few years:
- Workload-optimized, dynamically composed physical system builds via resource pooling. Because these composed systems are physical rather than simply virtualized, they can support bare-metal, virtualized and containerized workloads.
- Enhanced upgradeability via a modular, disaggregated design that eliminates the need to replace an entire server just to upgrade one resource group, such as storage or compute, allowing greater freedom to control costs and optimize resources.
- Right-provisioning. Disaggregated servers, with their inherent modularity, can be built with whatever ratio of resources best serves the workload mix running on that system. Operators control spending by buying only what is needed, when it is needed.
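The composition workflow described above amounts to asking a rack-level manager to assemble a node from pooled resources. The sketch below illustrates the idea of building such a composition request; the field names and structure are illustrative assumptions loosely modeled on Redfish-style management APIs, not the actual Intel RSD interface.

```python
import json


def build_compose_request(cpu_cores, memory_gib, nvme_count, nvme_gib_each):
    """Build a hypothetical composition payload asking a pod manager to
    assemble a node from pooled compute, memory and NVMe storage.
    All field names here are illustrative, not the real Intel RSD schema."""
    return {
        "Name": "composed-node-demo",
        "Processors": [{"TotalCores": cpu_cores}],
        "Memory": [{"CapacityGiB": memory_gib}],
        # Each entry requests one NVMe drive from the PCIe-attached pool,
        # so the storage ratio can be tuned per workload (right-provisioning).
        "RemoteDrives": [
            {"CapacityGiB": nvme_gib_each, "Protocol": "NVMe"}
            for _ in range(nvme_count)
        ],
    }


# Compose a storage-heavy node: 16 cores, 64 GiB RAM, 4 pooled NVMe drives.
payload = build_compose_request(16, 64, 4, 960)
print(json.dumps(payload, indent=2))
```

A compute-heavy node for the same rack would simply pass a different ratio (more cores, fewer drives) to the same request, which is the practical meaning of composing physical systems from disaggregated pools.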
Intel also released several IP libraries for use with Intel FPGAs in cloud applications. The Intel FPGA IP libraries support widely used accelerator functions, including deep learning, data analytics, compression and encryption workloads, simplifying and accelerating solution deployment for FPGA developers.
Intel and its ecosystem partners described several other OCP-related innovations spanning compute, storage and network technologies, including optimizations to the Intel Math Kernel Library (MKL) that speed up artificial intelligence workloads, the Lightning NVMe storage solution and network solutions based on Intel Silicon Photonics, among others.