Amazon Web Services unveiled new Arm-based EC2 instances powered by its custom-designed Graviton2 processors, along with Inf1 machine-learning inference instances powered by its custom AWS Inferentia chips.
“If you look at instances to start, it's not just that we have meaningfully more instances than anybody else, but it's also that we've got a lot more powerful capabilities in each of those instances,” AWS CEO Andy Jassy said in his keynote address at the AWS re:Invent 2019 conference in Las Vegas.
AWS’ pace of innovation has significantly expanded its instance lineup, Jassy said; AWS now offers four times as many instance types as it did two years ago.
“We have the most powerful GPU machine-learning training instances, most powerful GPU graphics rendering instances, the largest in-memory instances for SAP workflows with 24 terabytes, the fastest processors in the cloud with the z1d,” Jassy said. “You've got the only standard instances that have 100 Gigabits per second of network connectivity, the only instances that have all the processor choices from Intel and AMD. A very different set of capabilities on the instances side.”
Jassy attributed that to the AWS Nitro System platform and AWS’ chip innovation. AWS spent a significant amount of time over a couple of years reinventing its virtualization hypervisor and building the AWS Nitro System, which offloads security, networking and storage virtualization from the main server, according to Jassy. And a big turning point for AWS was its decision to design and build its own chips, along with its acquisition of Israeli chip maker Annapurna Labs in 2015, he said.
“We decided that we were going to actually design and build chips to try to give you more capabilities,” he said. “While lots of companies, including ourselves, have been working with x86 processors for a long time—Intel is a very close partner, and we've increasingly started using AMD as well—if we wanted to push the price/performance envelope for you, it meant that we had to do some innovating ourselves.”
AWS unveiled its first Arm-based chip—the Graviton chip—last year as part of its A1 instances, which were the first Arm-based instances in the cloud, Jassy said.
The new Arm-based M6g, R6g and C6g instances for EC2 (Amazon Elastic Compute Cloud) are powered by AWS Graviton2 processors and the AWS Nitro System.
“These are pretty exciting, and they provide a pretty significant difference over the first version of Graviton chips,” Jassy said.
The AWS Graviton2 processors use 64-bit Arm Neoverse cores in AWS-designed 7-nanometer silicon and provide up to 64 vCPUs, 25 Gbps of enhanced networking and 18 Gbps of EBS bandwidth. They’re optimized for a broad spectrum of workloads, including high-performance computing, machine learning, application servers, video encoding, microservices, open-source databases and in-memory caches.
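For readers who want to try the new instances, launching an M6g with the AWS CLI is a minimal sketch away. Note that Graviton2 instances require an arm64 (not x86_64) AMI; the AMI ID and key-pair name below are placeholders, not real values:

```shell
# Find a recent Amazon-provided arm64 AMI (required for Graviton2 instances).
aws ec2 describe-images \
    --owners amazon \
    --filters "Name=architecture,Values=arm64" "Name=state,Values=available" \
    --query 'Images | sort_by(@, &CreationDate) | [-1].ImageId'

# Launch a general-purpose M6g instance (AMI ID and key name are placeholders).
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m6g.large \
    --count 1 \
    --key-name my-key-pair
```

The same commands apply to the memory-optimized R6g and compute-optimized C6g once those instance families become available.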
“Versus the first Graviton chip, they have four times more compute cores, five times faster memory and overall seven times better performance,” Jassy said. “But arguably, most importantly, they have 40 percent better price/performance than the latest generation of x86 processors. That's unbelievable.”
The M6g is available now, and the R6g and C6g will be available in early 2020.
Inf1 Machine-Learning Inference Instances
Inf1 instances for EC2, powered by AWS Inferentia chips, are now generally available. Jassy described them as the highest-throughput, lowest-cost-per-inference instances in the cloud.
Customers can use the instances to run large-scale machine-learning inference to perform tasks such as image recognition, speech recognition, natural language processing, personalization and fraud detection.
“Think about how many devices we have everywhere that are making inferences and predictions,” Jassy said. “About 80 [percent] to 90 percent of the cost is actually in the predictions [versus training infrastructure costs]. And so this is why we wanted to try and work on this problem. You know, everybody's talking about training, but nobody is actually working on optimizing the largest cost for ... machine learning.”
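A toy arithmetic check puts that claim in context. Assuming inference accounts for 85 percent of total machine-learning spend (roughly the midpoint of Jassy's 80-to-90 percent range) and applying the approximately 40 percent inference-cost reduction AWS cites for Inf1, the illustrative figures below are assumptions, not AWS data:

```python
# Toy illustration: how much of the overall ML bill a cheaper
# inference tier can remove, given Jassy's cost breakdown.
inference_share = 0.85    # assumed: within the 80-90% range Jassy cites
inference_savings = 0.40  # assumed: the ~40% Inf1-vs-G4 figure AWS cites

# Normalize the total ML bill to 1.0 and recompute it after the cut.
total_after = (1 - inference_share) + inference_share * (1 - inference_savings)
overall_saving = 1 - total_after

print(f"Overall ML cost reduction: {overall_saving:.0%}")  # 34%
```

The point of the arithmetic: because inference dominates the bill, even a modest per-inference discount moves the total far more than an equivalent discount on training would.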
The Inf1 instances have a “lot of things to be excited about,” Jassy said, with three times higher throughput and up to 40 percent lower cost-per-inference compared with AWS’ G4 instance, which previously was the lowest-cost inference instance in the cloud.
They are integrated with the TensorFlow, PyTorch and MXNet machine-learning frameworks. The instances are available now in EC2, and AWS will make them available for Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS) and Amazon SageMaker in early 2020.
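As a sketch of that framework integration: compiling a trained model for Inferentia goes through the TensorFlow-Neuron package AWS ships for Inf1. This assumes the `tensorflow-neuron` pip package (preinstalled on AWS's Deep Learning AMIs), and the model paths are hypothetical:

```python
# Sketch: compile a trained TensorFlow SavedModel into a Neuron-optimized
# SavedModel that runs on the Inferentia chip (assumes tensorflow-neuron
# is installed, e.g. on an AWS Deep Learning AMI).
import tensorflow.neuron as tfn

# Hypothetical paths: trained SavedModel in, Neuron-compiled SavedModel out.
tfn.saved_model.compile("resnet50_savedmodel/", "resnet50_neuron/")
```

The compiled artifact is then loaded and served like any other SavedModel on an Inf1 instance.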
The announcements of the new instance types and Arm chips resonated with Chris Wegmann, managing director of the AWS business group at Accenture, an AWS Premier Consulting Partner.
While Amazon has continued to innovate with new machine-learning and artificial-intelligence services, it’s also innovating on the core infrastructure side, Wegmann said.
“While enterprises are looking to take and move applications to modern architectures, cloud-native, they still have a lot of stuff running on base infrastructure,” he said. “And Amazon continues to take the cost down of that, which is really important for enterprises. It's continuing to allow them to make the move without spending a lot of money.”