UPDATED 10:20 EDT / DECEMBER 02 2021

CLOUD

AWS chief Selipsky: The cloud, and Amazon, are moving quickly to the edge

Amazon Web Services Inc. has blossomed into a $60 billion-plus annual revenue company by persuading thousands of businesses that running their information technology operations in its cloud makes much more sense than running their own data centers.

But now, a lot of processing is moving to the edge of networks as more workloads, such as machine learning, can’t afford the latency involved with sending data back and forth to the cloud. And AWS is looking to make sure it’s positioned to serve the growing appetite for edge computing.

“Over time, we’ll have more and more intelligent ways of knowing which data should stay local, which data should move to a more central location, and where computing and analysis should take place,” says AWS Chief Executive Adam Selipsky (pictured). “We’re still in the very early innings of a big shift in the definition of how expansive the cloud is, where it takes place and where in that equation different capabilities get exercised.”

In this final installment of a four-part interview with me and Wikibon Chief Analyst Dave Vellante ahead of the company’s re:Invent conference running this week in Las Vegas, Selipsky also explained why AWS is designing so many of its own chips, with many more to come.

You can also get the big picture from Selipsky’s entire interview here, and don’t miss the other installments of the full interview on SiliconANGLE earlier this week. Also, check out wall-to-wall coverage of re:Invent by theCUBE, SiliconANGLE Media’s livestreaming studio, and SiliconANGLE all this week and beyond for exclusive interviews with AWS executives and others in the AWS ecosystem. If you’re at re:Invent, stop by theCUBE’s studio in the exhibit hall.

This interview was edited for clarity. (* Disclosure: SiliconANGLE and theCUBE are paid media partners at AWS re:Invent. AWS and other sponsors have no editorial control over content on SiliconANGLE or theCUBE.)

In the chips

Vellante: We’ve said we think the Annapurna acquisition may be one of the greatest in the history of the computer industry: a $350 million acquisition setting the architectural direction for the entire industry. Google’s copying it, as are Microsoft, Alibaba and, to some extent, VMware. Talk about how you’re thinking about that capability in the context of alternative processors.

First I’ll just add that we certainly anticipate continuing to be close partners with x86-based and GPU-based vendors. Those partnerships are not going anywhere and matter a lot to us. But as you alluded to, Dave, we started thinking about custom silicon almost a decade ago, started talking about it, started to get a plan together. We knew it’d be a multiyear plan, and a couple of years into that, we did the Annapurna acquisition. I agree, it’s been a really important acquisition. And they’ve done amazing work and amazing collaboration with our other teams, particularly our compute teams and our supply chain teams here.

And if you go back to the first couple of years after we launched, we had, almost by definition, a much smaller customer base, so chips really had to be general-purpose, serving multiple workloads; that’s what made economic sense. But after a few years, and given our rapid adoption, which has only accelerated, we have enough scale that it makes economic sense for our customers and for us to have purpose-built silicon for different use cases. So even if a use case is relevant to only a very small percentage of our customer base, that’s still going to mean many thousands of customers. That’s an important principle here and has allowed us to really step back and think big about what we can do.

Vellante: How have the chips been received by customers?

We first came out with Graviton and then with Graviton2, which has been a big hit. Graviton2 just offers exceptional gains in price-performance, both versus Graviton and versus other comparable EC2 instances. And we continue to see really rapid adoption from our customer base. By the way, we put it underneath other AWS services as well, so you get better capabilities, whether it be in RDS or Lambda or other services.
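
As a minimal sketch of how Graviton shows up to customers (not from the interview, and with an illustrative region), EC2’s DescribeInstanceTypes API can list the instance families built on the arm64 architecture that Graviton processors implement:

```python
import boto3

# Illustrative region; Graviton2 instances are available in many regions.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Page through all current-generation instance types that run on arm64,
# the architecture Graviton and Graviton2 processors implement.
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
        {"Name": "current-generation", "Values": ["true"]},
    ]
):
    for itype in page["InstanceTypes"]:
        print(itype["InstanceType"])  # e.g. m6g.large, c6g.xlarge, r6g.2xlarge
```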

We’ve also innovated in more specialized chips. The machine learning space is the principal area: we have both the Trainium chips and the Inferentia chips for those different use cases, obviously training and inference. You’ll see us continue both to bring out new generations of those chips and to aggressively bring out EC2 instances that use them. In addition, I think we’ll continue to look at other areas where purpose-built chips make sense and will help customers with other use cases over time.
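
Those chips surface to customers as ordinary EC2 instance families; Inferentia, for example, backs the inf1 family. A minimal sketch of launching one with boto3 follows; the AMI ID is a placeholder, and in practice you’d pick an image with the Neuron SDK preinstalled for compiling models to the chip:

```python
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # region is illustrative

# Launch a single Inferentia-backed instance from the inf1 family.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder, not a real AMI ID
    InstanceType="inf1.xlarge",       # smallest Inferentia instance size
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)
```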

On the edge 

Vellante: Is this also a fundamental part of your edge strategy, things like AI inferencing at the edge? Is that a fundamental enabler?

AI and machine learning at the edge are very important because there’s often not a lot of connectivity at these different parts of the edge. So it’d be very important to have local processing in a lot of use cases … so you’re not shipping all of your data back to the cloud on a very thin pipe, whether that pipe is physical or wireless. And so I think there will be a lot of edge computing capabilities, which are very important. As part of that, machine learning and AI capabilities to do that data processing in an intelligent fashion will become important.
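
A minimal sketch of that pattern, with hypothetical names throughout: score readings locally on the device, so only the interesting fraction of the data ever crosses the thin pipe to the cloud.

```python
import json
import boto3

s3 = boto3.client("s3")

def anomaly_score(reading: dict) -> float:
    # Placeholder heuristic; a real edge device would run an optimized ML model.
    return abs(reading["temperature_f"] - 70.0) / 70.0

def process_batch(readings: list) -> None:
    # Inference happens locally, so raw data never leaves the device.
    anomalies = [r for r in readings if anomaly_score(r) > 0.2]
    # Only the small set of anomalous readings crosses the network.
    s3.put_object(
        Bucket="example-edge-telemetry",  # hypothetical bucket
        Key="site-42/anomalies.json",     # hypothetical key
        Body=json.dumps(anomalies),
    )
```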

Another one of these trends, which I think is already here, but will continue to play itself out and accelerate over the coming years, is the concept of the edge. And I think what’s really happening is, the edge of the cloud is moving. It is moving outward. The cloud is expanding. And so we’ve already got 81 full availability zones and 25 regions around the world. We’ve announced nine additional regions, which will be coming out in the next three years.

Vellante: What kind of edge capabilities can we expect?

You already see AWS capabilities inside of 5G networks. And that’s what DISH is doing. You already see AWS capabilities inside of factories. And that’s what our machine learning capabilities are doing there.

You already see Outposts being put into on-premises installations for workloads that just aren’t quite ready yet to move to the cloud, or for which there are data sovereignty reasons for keeping them where they are. You already see us working with companies like Volkswagen to create their industrial cloud, which is really going to digitize their entire manufacturing operation. In that case we’ve put the cloud on the factory floor.

Customers are going to demand more and more remote or edge capabilities, be it in a car, be it in a factory, be it on a piece of farm equipment out in a field. We’re going to continue to work aggressively on IoT solutions, on ML solutions, on some of these horizontal industrial types of use cases and applications, and on bundling all of those together in ways that solve customers’ problems.

Defining the hybrid opportunity

Furrier: The hybrid conversation is tied directly to that. How’s that going?

We’re very happy with Outposts. Not only is it growing, it’s an important tool for enterprises as they make their cloud journey and as they migrate over time. It lets them keep some workloads they really want on-premises for now [but] enables them to move a lot of stuff into the cloud.

Furrier: If everything’s hybrid, that means it’s a cloud operation. The edge, or hybrid, is just folding into the operating model of public cloud. So how do you guys see Outposts and the hybrid strategy continuing to go? There’s a lot of confusion on that point as people talk about what hybrid is.

I would say there is one bright-line distinction: the walled-off data center, the classical old-school walled-off data center that all of our old-guard competitors knew and loved for so long. That still happens, and it is truly not part of the cloud, of any cloud. It’s not part of any edge. There’s still a vast sea of workloads that operate in that fashion. Of course you see more and more companies moving away from that, but given how much of that legacy infrastructure is out there, and the fact that, because of the way it was built, it’s not necessarily easy to move, it’ll take years for some of those workloads to actually become part of the cloud in any fashion, despite the velocity with which it’s happening.

But what exactly counts as the cloud is blurring, in a really positive way that’s very helpful for customers. It gets back to the idea that we’ve gone from having these regions and this concept of fully resilient availability zones that AWS pioneered to not replacing, but adding onto, that model capabilities that are deployed in many different locations. And that’s not the hybrid model that you’re talking about. We still continue to believe that in the fullness of time, almost all workloads will be in the cloud and not siloed off behind four walls.

I also agree with you that the definition of what’s “in the cloud” is going to continue to evolve in very exciting ways. So you will be able to have computing capabilities on your industrial equipment. You will be able to have data gathering and computing capabilities in an automobile, and you will be able to have those capabilities out in an agricultural field. They will all be hooked in a highly intelligent way into the overall cloud capabilities. Over time, we’ll have more and more intelligent ways of knowing which data should stay local, which data should move to a more central location, and where computing and analysis should take place. We’re still in the very early innings of a big shift in the definition of how expansive the cloud is, where it takes place and where in that equation different capabilities get exercised.

Lingering legacy

I was talking to a CIO running a huge credit-clearing operation on mainframes, and he said, “We don’t touch the hot core. The hot core is the stuff that, if we poke too hard, will crumble. So we go cloud on the stuff we can move to the cloud and refactor.” But they’re building microservices layers into the cloud and slowly taking it down. So it’s not a rip-and-replace. That seems to be a consistent playbook for most of the big old-school legacy stuff.

I think there’ll be a couple of plays in that playbook, at least. One is exactly what you just said: “Hey, we’re not ready to move that.” But let’s just back up a step: why does anybody even want to move these mainframe applications? It’s really for three reasons, as far as I can tell. One, they’re incredibly expensive: even if you’ve already deployed and paid for them, they’re expensive just to maintain. Two, they’re complex to maintain, and anything you need to change is a highly significant endeavor. And three, there’s a rapidly shrinking number of people who know how to work with them. They’re not teaching a whole lot of COBOL in computer science classes today, I don’t think.

So there is a real thirst for moving mainframe workloads over to the cloud. But it is hard to move them, for all the reasons you said. So I do agree with the one trend you cited, which is leaving the core of the mainframe workload running on the mainframe and starting to tear off, if you will, adjacent pieces of the workload, or adjacent workloads that are on the mainframe, rebuilding them with a microservices architecture, et cetera. However, we see more and more customers getting impatient with that approach, so I think you’ll see more and more of them push to move faster.

Is it a timing issue, or a disruption to their business, or what?

It’s that they see lost opportunity. There are so many enterprises that have moved hundreds or thousands of critical workloads to the cloud. And they see the improved reliability, they see the improved security, they see the lower cost and they see, most importantly, how it makes them more agile and helps them actually transform the things that they’re trying to get done in their own business.

We’ve actually invested significantly in a lot of capabilities to help customers move mainframes over: both services we’ve built and human-based capabilities, the professional services that we offer, and we work with a lot of leading systems integrator partners to move those mainframe workloads over. But it’s not easy. And you can definitely look for us to continue to innovate on making it really simple to move those mainframe workloads over.

And it’s not only mainframes; a lot of it’s in SANs, and a lot of it’s in databases, which is why we continue to innovate on things like Aurora. That’s also why we just released, a few weeks ago, Babelfish, which basically allows you to migrate your SQL Server workloads seamlessly to Aurora PostgreSQL. And as a side note, we also released Babelfish into open source. So if you don’t want to use AWS and just want to migrate your SQL Server workload to Postgres in your own data center, you can do that as well.
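
Because Babelfish speaks SQL Server’s TDS wire protocol, an existing SQL Server client can point at a Babelfish-enabled Aurora PostgreSQL cluster with little more than a connection-string change. A minimal sketch, with a hypothetical endpoint and credentials:

```python
import pymssql  # any TDS-speaking SQL Server driver should work

# Hypothetical Babelfish-enabled Aurora PostgreSQL endpoint and credentials.
conn = pymssql.connect(
    server="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    port=1433,  # Babelfish listens on SQL Server's usual TDS port
    user="babelfish_admin",
    password="REPLACE_ME",
    database="my_app_db",
)
cur = conn.cursor()
cur.execute("SELECT @@VERSION")  # unmodified T-SQL, executed against Postgres
print(cur.fetchone()[0])
conn.close()
```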

So you can run the same workloads in the same way in the cloud?

If you’ve got that NetApp workload and you’ve got your processes and you’ve got your tooling and it’s running on-premises, now you can just do the same thing on AWS. That’s incredibly powerful. There are lots of things that actually should be rewritten or redone and not just replicated, but there are a lot of things that actually work pretty well. And you don’t want to reinvent the wheel just for the sake of saying you reinvented it. If you can bring over your processes and your tooling and do the same thing in the cloud as part of a broader workload, that’s incredibly powerful.

And it also speaks to portability; implicitly, you could always move it back. Because it’s not like NetApp, it is NetApp. And so if you ever wanted to move it back, you’d move it back. Because we hate technical lock-in. We work against technical lock-in whenever we can.
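
Selipsky is presumably referring to Amazon FSx for NetApp ONTAP, which launched a few months before this interview. As a hedged illustration, creating such a file system through boto3 might look like the sketch below; the subnet IDs and sizing are placeholders.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")  # region is illustrative

# Create a multi-AZ ONTAP file system; all IDs and sizes are placeholders.
resp = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 512,  # MBps
    },
)
print(resp["FileSystem"]["FileSystemId"])
```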

Photo: Robert Hof/SiliconANGLE
