At Supercomputing 2019, Intel unveiled new additions to its data-centric silicon portfolio and an ambitious new software initiative.
Intel expanded on its existing technology portfolio to move, store and process data more effectively by announcing a new category of discrete general-purpose GPUs optimized for AI and HPC convergence. Intel also launched the oneAPI industry initiative to deliver a unified and simplified programming model for application development across heterogeneous processing architectures, including CPUs, GPUs, FPGAs and other accelerators.
“HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep-learning NNPs, which Intel demonstrated earlier this month,” said Raja Koduri, senior vice president, chief architect, and general manager of architecture, graphics and software at Intel. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers a unified and scalable abstraction for heterogeneous architectures.”
The oneAPI initiative will define programming for an increasingly AI-infused, multi-architecture world. oneAPI delivers a unified and open programming experience to developers on the architecture of their choice without compromising performance, while eliminating the complexity of separate code bases, multiple programming languages, and different tools and workflows.
oneAPI includes both an industry initiative based on open specifications and an Intel beta product. The oneAPI specification includes a direct programming language, powerful APIs and a low-level hardware interface. Intel's oneAPI beta software provides a portfolio of developer tools, including compilers, libraries and analyzers, packaged into domain-focused toolkits. The initial oneAPI beta release targets Intel Xeon Scalable processors, Intel Core processors with integrated graphics, and Intel FPGAs, with additional hardware support to follow in future releases. Developers can download the oneAPI tools and test drive them in the Intel oneAPI DevCloud.
Intel’s silicon portfolio comprises a diverse mix of architectures deployed in a range of silicon platforms. The foundation of Intel’s data-centric strategy is the Intel Xeon Scalable processor, which today powers over 90 percent of the world’s Top500 supercomputers. Intel Xeon Scalable processors are the only x86 CPUs with built-in AI acceleration optimized to analyze the massive data sets in HPC workloads.
At Supercomputing 2019, Intel unveiled a new category of general-purpose GPUs based on Intel’s Xe architecture. Code-named “Ponte Vecchio,” this new high-performance, highly flexible discrete general-purpose GPU is architected for HPC modeling and simulation workloads and AI training. Ponte Vecchio will be manufactured on Intel’s 7nm technology and will be Intel’s first Xe-based GPU optimized for HPC and AI workloads. Ponte Vecchio will leverage Intel’s Foveros 3D and EMIB packaging innovations and feature multiple technologies in-package, including high-bandwidth memory, Compute Express Link interconnect and other intellectual property.
Aurora, the exascale supercomputer being developed for Argonne National Laboratory, will be the first U.S. exascale system to leverage the full breadth of Intel’s data-centric technology portfolio, building upon the Intel Xeon Scalable platform and using Xe architecture-based GPUs, as well as Intel Optane DC persistent memory and connectivity technologies. The compute node architecture of Aurora will feature two 10nm-based Intel Xeon Scalable processors (code-named “Sapphire Rapids”) and six Ponte Vecchio GPUs. The system will support over 10 petabytes of memory and over 230 petabytes of storage, and will leverage the Cray Slingshot fabric to connect nodes across more than 200 racks.