Western Digital will cover OCP-compliant storage track initiatives and data center innovations such as ZNS, SMR HDDs, and EDSFF at the annual OCP Global Summit, March 4-5 in San Jose, California.
The annual OCP (Open Compute Project) gathering brings together data center architects, developers, and suppliers for collaborative effort towards standardized server, storage, and networking designs. OCP standards are supported by cloud service providers and contributing suppliers, including Western Digital.
Among the technology exhibits on display in the Western Digital booth will be EDSFF (Enterprise and Datacenter SSD Form Factor) NVMe SSDs.
The majority of initial data center SSD deployments have used the legacy 2.5” HDD form factor with SAS or SATA interfaces. The 2.5” SSD form factor brought I/O acceleration to legacy workloads built for the HDD era, but will it provide enough storage density and optimization for the next era of data center architectures?
The new EDSFF packaging standard is a simpler approach than U.2 SSDs: U.2 requires a more complex backplane, intricate wiring assembly during manufacturing, and careful thermal design for cooling.
Data center flash storage is now unleashed from the legacy 2.5” HDD form factor to take advantage of denser flash storage capacity points, greater rack density, and better alignment of compute/logic, e.g., processor + GPU, to storage I/O.
Western Digital will continue to sell the NVMe U.2 2.5” form factor for the foreseeable future, including future U.3 connector models, but the new EDSFF E1.L (L = long) packaging standard enables up to 1PB of raw capacity in 1U of rack space over an NVMe interface.
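The 1PB-in-1U claim is easy to sanity-check with back-of-the-envelope arithmetic. The slot count and per-drive capacity below are illustrative assumptions (a 1U chassis with 32 E1.L slots is a common design point), not figures from Western Digital:

```python
# Rough sketch of E1.L rack density. Assumed figures: 32 E1.L slots
# per 1U chassis, 32 TB of raw NAND per drive.
def raw_capacity_per_u(drives_per_1u: int, tb_per_drive: int) -> int:
    """Raw TB of flash packed into 1U of rack space."""
    return drives_per_1u * tb_per_drive

# 32 drives x 32 TB each:
print(raw_capacity_per_u(32, 32))  # 1024 TB, i.e., about 1 PB in 1U
```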
The EDSFF E1.S (S = short) is a packaging standard positioned to replace the NVMe M.2 form factor over time because of its better thermal design and hot-swap capability.
The 7200 RPM, 3.5” HDD form factor still has its role in the data center of today and the foreseeable future for capacity-optimized bulk storage. Performance-optimized HDDs, i.e., 15K RPM and 10K RPM drives, have of course ceded their role to SSDs.
Cloud architects will still find the $/GB TCO variable attractive vis-à-vis TLC/QLC SSDs for large-scale data center build-outs of bulk storage.
For cloud architects, the governing factors for HDDs in future architectures are the availability of denser storage options (e.g., a 50TB HDD one day), improved HDD IOPS, and dependable HDD quality with an annual failure rate under 1%.
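The $/GB comparison above can be made concrete with a small calculation. The drive prices and capacities below are purely hypothetical placeholders for illustration, not quoted market figures:

```python
# Hypothetical $/GB comparison of a bulk-capacity HDD vs a QLC SSD.
# All prices and capacities below are made-up illustrative values.
def dollars_per_gb(drive_price_usd: float, capacity_tb: float) -> float:
    """Cost per gigabyte, using 1 TB = 1000 GB."""
    return drive_price_usd / (capacity_tb * 1000)

hdd = dollars_per_gb(450.0, 18.0)     # e.g., a hypothetical 18 TB capacity HDD
ssd = dollars_per_gb(1600.0, 15.36)   # e.g., a hypothetical 15.36 TB QLC SSD
print(f"HDD ${hdd:.3f}/GB vs SSD ${ssd:.3f}/GB")
```

Even as QLC narrows the gap, the raw $/GB delta is why bulk storage tiers remain HDD-heavy.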
Cloud architects can stop by the Western Digital booth to learn about SMR HDDs as one option for capacity-optimized HDDs:
- SMR (Shingled Magnetic Recording) – a host-managed HDD implementation to achieve greater storage densities than conventional HDDs by allowing for data to be written in an overlapping fashion to zones.
- ZNS (Zoned Namespaces) is a standards initiative that addresses the shortcomings of SSD-based data center architectures at large scale. Specifically, ZNS SSDs address the following inefficiencies of standard data center SSDs in large-scale deployments:
- Write Amplification: Because flash cannot be overwritten in place, an SSD has to move data around internally to reclaim unused data locations, a process called “Garbage Collection.” Garbage collection causes the same data to be written multiple times (hence the term write amplification), creating additional wear on the flash media and reducing the lifespan of the SSD.
- Over-Provisioning: Extra SSD storage capacity (up to 28% of raw capacity) reserved for moving data around during garbage collection and for improving efficiency.
- The cost of DRAM: After the NAND media itself, DRAM is the second most expensive component in a data center SSD. Maintaining the flash translation layer (FTL) logical-to-physical mapping requires DRAM in the SSD, and the amount of DRAM grows in proportion to the SSD's capacity point.
- QoS: Quality of Service (QoS), i.e., statistical latency measured out to five nines (99.999%), suffers variability because garbage collection can occur at any given moment, irrespective of workload I/O and beyond the control of host software.
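The write-amplification point above can be sketched in a few lines. This toy model simply counts host writes versus total NAND writes; the page counts are made-up illustrative values:

```python
# Toy illustration of write amplification: during garbage collection the
# SSD must copy still-valid pages out of a block before erasing it, so
# the NAND absorbs more page writes than the host actually issued.
def write_amplification(host_pages_written: int, gc_pages_copied: int) -> float:
    """WAF = total NAND page writes / host page writes."""
    return (host_pages_written + gc_pages_copied) / host_pages_written

# If garbage collection copies 1 valid page for every 2 host writes:
print(write_amplification(1000, 500))  # WAF = 1.5
```

More over-provisioning gives garbage collection more free space to work with, lowering `gc_pages_copied`; a ZNS SSD goes further by letting the host write zones sequentially, driving the write amplification factor toward 1.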
Cloud architects may want to consider ZNS SSDs to address these shortcomings of conventional data center SSD designs in large-scale deployments, for CAPEX savings, OPEX efficiency, and predictable latency at scale.
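The DRAM cost point is easy to quantify with a commonly cited rule of thumb for page-mapped FTLs: roughly one 4-byte logical-to-physical entry per 4 KiB page, or about 1 GB of DRAM per 1 TB of NAND. The sketch below uses those assumed figures:

```python
# Rule-of-thumb FTL DRAM sizing for a conventional page-mapped SSD.
# Assumptions: 4 KiB mapping granularity, ~4 bytes per table entry.
# A ZNS SSD maps coarse zones instead, shrinking this table dramatically.
def ftl_dram_bytes(nand_bytes: int, page_bytes: int = 4096,
                   entry_bytes: int = 4) -> int:
    """DRAM needed for a flat logical-to-physical page map."""
    return (nand_bytes // page_bytes) * entry_bytes

TB = 10**12
print(ftl_dram_bytes(16 * TB) / 10**9)  # ~15.6 GB of DRAM for a 16 TB SSD
```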
WD will have a live demo of ZNS SSDs at its booth, including performance metrics. The company will also discuss the ZNS SSD futures it is contemplating, such as enabling QLC NAND in the data center at large scale, SSD form factor options, and Linux kernel enablement details.
ZNS is not an OCP design; it is part of a separate working group seeking to ratify the ZNS command set into the NVMe specification.