Amazon is bringing its first custom-designed Arm processor, 'Graviton', to cloud customers.
Announced at AWS re:Invent, the new A1 instances for Amazon Elastic Compute Cloud (Amazon EC2) are powered by custom-designed AWS Graviton processors (16nm) built for scale-out workloads.
"With today’s introduction of A1 instances, we’re providing customers with a cost optimized way to run distributed applications like containerized microservices. A1 instances are powered by our new custom-designed AWS Graviton processors with the Arm instruction set that leverages our expertise in building hyperscale cloud platforms for over a decade," said Matt Garman, Vice President of Compute Services, AWS.
Although general-purpose processors continue to provide great value for many workloads, scale-out workloads such as containerized microservices and web-tier applications that do not rely on the x86 instruction set can gain additional cost and performance benefits from running on smaller, modern 64-bit Arm processors that work together to share an application’s computational load.
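Workloads that avoid x86-specific dependencies can often run unchanged on Arm. As a quick illustration, a portable application can check at runtime which architecture it is on; this is a minimal sketch using Python's standard `platform` module (the helper name is illustrative, not part of any AWS tooling):

```python
import platform

def is_arm64() -> bool:
    """Return True when running on a 64-bit Arm host (e.g. an A1 instance)."""
    # platform.machine() reports "aarch64" on 64-bit Arm Linux,
    # and typically "x86_64" on Intel/AMD hosts.
    return platform.machine().lower() in ("aarch64", "arm64")

print(f"Running on 64-bit Arm: {is_arm64()}")
```

A check like this can gate architecture-specific code paths or select the right container image tag at startup.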
Graviton is based on technology from Annapurna Labs, which Amazon acquired in 2015 and had previously used only for embedded-style system-on-chip (SoC) designs for gateways, routers, and network attached storage (NAS) hardware. The chip builds on the Arm Neoverse “Cosmos” platform and is purpose-built to run customer application workloads.
At its recent TechCon event, Arm shared its new Neoverse roadmap, starting from the current "Cosmos" platform (16nm) and followed annually by the "Ares" (7nm), "Zeus" (7nm+), and "Poseidon" (5nm) platforms.
The smallest Graviton-powered instance, the a1.medium, offers one virtual CPU (vCPU) and 2GB of RAM. Moving up, the a1.large offers 2 vCPUs and 4GB of RAM, the a1.xlarge 4 vCPUs and 8GB, the a1.2xlarge 8 vCPUs and 16GB, and the a1.4xlarge 16 vCPUs and 32GB.
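Each step up the A1 range doubles both vCPUs and memory, keeping a fixed 2GB of RAM per vCPU. A small sketch, using the figures above, that picks the smallest A1 size satisfying a resource requirement (the helper and table are illustrative, not an AWS API):

```python
from typing import Optional

# A1 instance sizes as announced: name -> (vCPUs, RAM in GB).
A1_SIZES = {
    "a1.medium": (1, 2),
    "a1.large": (2, 4),
    "a1.xlarge": (4, 8),
    "a1.2xlarge": (8, 16),
    "a1.4xlarge": (16, 32),
}

def smallest_a1(vcpus_needed: int, ram_gb_needed: int) -> Optional[str]:
    """Return the smallest A1 size meeting both requirements, or None."""
    for name, (vcpus, ram) in sorted(A1_SIZES.items(), key=lambda kv: kv[1][0]):
        if vcpus >= vcpus_needed and ram >= ram_gb_needed:
            return name
    return None

print(smallest_a1(3, 6))  # a1.xlarge
```

Because memory scales in lockstep with vCPUs, right-sizing on one dimension generally determines the other.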
With A1 instances, Amazon claims that its customers will benefit from up to a 45 percent cost reduction (compared to other Amazon EC2 general purpose instances) for scale-out workloads. A1 instances are supported by several Linux distributions, including Amazon Linux 2, Red Hat, and Ubuntu, as well as container services, including Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Container Service for Kubernetes (EKS).
The A1 instances are available now in the AWS US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) regions.
P3dn GPU and C5n instances
Amazon also announced the P3dn GPU and C5n compute optimized instances, featuring 100 Gbps networking throughput to enable scale-out of distributed workloads like high performance computing (HPC), machine learning training, and data analytics.
P3dn instances (available next week) will be the most powerful GPU instances in the cloud for machine learning training. They deliver a 4X increase in network throughput compared to existing P3 instances, providing up to 100 Gbps of networking throughput, NVMe instance storage, custom Intel CPUs with 96 vCPUs and support for AVX512 instructions, and NVIDIA Tesla V100 GPUs each with 32 GB of memory.
C5n instances (available today) raise the maximum network throughput available in AWS’s compute-intensive instance family. Existing C5 instances offer up to 25 Gbps of network bandwidth, which addresses the requirements of a wide range of workloads, but highly distributed and HPC applications can benefit from even higher network performance. C5n instances offer 100 Gbps of network bandwidth, four times the throughput of C5 instances.