Invisible infrastructure: the rise of serverless computing


This article appeared in the November 2016 issue of CRN magazine.



What exactly is serverless? The term “serverless” is something of a misnomer, but, much like “cloud computing”, it’s the jargon that has stuck and will likely live on for some time.

“It’s like calling McDonald’s ‘kitchenless’,” says Sonia Cuff, co-founder of technology consultancy The Missing Chair. If you eat at McDonald’s all the time, Cuff explains, you don’t need your own kitchen.

Serverless computing doesn’t mean there are literally no servers, but rather that developers don’t have to think about computing hardware and instead only need to deal with much higher-level abstractions. 

“For developers, it is serverless. They don’t have to deploy, configure, and patch an operating system,” says Cuff.

Abstractions provide a great deal of convenience and power. Most programmers these days don’t deal with the minutiae of assembly language. They use higher-level languages that provide an abstraction to the CPUs, network and storage devices of the actual server. Serverless takes that another step higher, treating computing infrastructure as abstract pools of resources to be called on when needed.

The poster-child for the serverless movement is Amazon Web Services’ Lambda service, though there are equivalent services on other cloud platforms: Microsoft has Azure Functions and Google has Cloud Functions. 

As the names suggest, these abstractions lend themselves to functional programming. Programmers create small blocks of code (functions) that are called as and when they are needed. The rest of the time, no server resources are consumed, aside from some nebulous storage to hold the code itself.

When the function is needed, the underlying platform spins up resources to execute the code (calculate a number, create a new unique identifier, look up a value in a table) and then returns the result. These functions can be chained together to create a more complex application. Each function is called as and when it’s needed, depending on the control flow through an application. The rest of the time, the application consumes almost no resources.
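To make this concrete, here is a minimal sketch of what such a function might look like, following AWS Lambda’s Python handler convention (the platform calls a `handler(event, context)` entry point when an event arrives). The function name and return shape are illustrative, not taken from any particular deployment:

```python
import uuid

def handler(event, context):
    # The platform invokes this only when an event arrives;
    # no process runs, and nothing is billed, in between calls.
    # Here the "work" is one of the examples above: creating
    # a new unique identifier on demand.
    return {"statusCode": 200, "id": str(uuid.uuid4())}
```

A chain of such functions, each triggered by the output of the previous one, is how a more complex application is assembled.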

Using serverless computing means rewriting applications to use the new method, so it’s best suited to entirely new code. However, because of the modular nature of functions, they can be applied to small parts of an application that are likely to be used often, thus maximising the benefits.

Ethan Banks, co-founder of Packet Pushers Interactive, says serverless is about speed-to-market. 

“It doesn’t exclude infrastructure, it just re-characterises it,” he says. “Developers need to be sure they are still designing their app beyond simple consumption of someone else’s platform. The warmth afforded while wrapped in a blanket of infrastructure ignorance gets awfully cold when apps are deployed with no consideration for latency, regional outages, and the ever-present challenges of scale.”

The trouble with the serverless approach is that it involves a dichotomy: the benefits accrue fastest to code that is called from many places by many applications, but code that is called that often ends up running almost continuously, behaving much like code deployed on an always-on server. The less often code runs, the better serverless fits, and yet the absolute savings are correspondingly smaller.

But if you’re not sure which bits of code will be used the most, or want to keep your spending on infrastructure to the absolute minimum, then serverless is a great match. That’s why it’s seeing the same aggressive adoption from startups and fast-growth companies as cloud infrastructure did at first. Why pay for anything you don’t absolutely need?

Ah, but there’s the rub.

When you deploy serverless code, you may not know what is going to be popular. There could be bugs in the software that cause it to run ten or a hundred times more often than it should, and you’ll pay for each time it runs. A simple programming error could be very expensive.
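A back-of-envelope calculation shows why. Pay-per-invocation billing is linear in call count, so a bug that multiplies invocations multiplies the bill by the same factor. The prices below are illustrative placeholders, not current rates for any provider:

```python
# Hypothetical pay-per-use rates, in dollars (placeholders only).
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.0000166667

def monthly_cost(invocations, avg_duration_s, memory_gb):
    # Billing has two linear components: a per-request charge
    # and a charge for compute time consumed (memory x duration).
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# A bug that fires the function 100x more often than intended
# scales the bill by exactly 100x:
normal = monthly_cost(1_000_000, 0.2, 0.128)
buggy = monthly_cost(100_000_000, 0.2, 0.128)
```

Because every term is proportional to invocation count, there is no natural ceiling on the bill short of one you impose yourself.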

Similarly, if your service is more successful than expected, that could cost a lot of money. Consider the effect of a distributed denial of service (DDoS) attack: could you end up paying for someone else to attack your site? While this isn’t a new problem – cloud hosted services where you pay for network traffic can cost you a lot of money if you get hit with a DDoS attack – the problem happens at a much more granular level.

Budgeting for this kind of deployment means the code itself needs governors or safety rails that stop the costs from escalating out of control. With physical infrastructure, there are natural governors in place by virtue of the limits of the hardware. If you can use unlimited hardware, then there’s no limit to the costs you might incur.
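One way to sketch such a governor is a simple invocation budget that refuses work once a cap is reached. Real deployments would lean on platform features (concurrency caps, billing alarms) rather than hand-rolled counters; the class and handler below are hypothetical and only illustrate the idea:

```python
class InvocationBudget:
    """A crude safety rail: allow at most max_calls invocations."""

    def __init__(self, max_calls):
        self.max_calls = max_calls
        self.calls = 0

    def allow(self):
        if self.calls >= self.max_calls:
            return False
        self.calls += 1
        return True

def guarded_handler(event, budget):
    # Refuse to do (billable) work once the budget is exhausted,
    # putting a hard ceiling on runaway costs.
    if not budget.allow():
        return {"statusCode": 429, "body": "budget exhausted"}
    return {"statusCode": 200, "body": "ok"}
```

The same cap logic could equally live outside the code, in a billing alert or a platform-level concurrency limit; the point is that on elastic infrastructure the limit has to be chosen, because the hardware no longer chooses it for you.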

At this stage the biggest risk for serverless computing comes from its newness. Each of the three big clouds – AWS, Azure, Google – has its own proprietary implementation. Unlike C code, for example, you can’t simply pick up your program written for AWS Lambda and deploy it in Azure, so the switching costs create a natural barrier to exit from each platform. This works in the vendor’s favour, of course, but we’re already seeing abstractions that provide a way of writing applications that can ultimately run on any one of these platforms.

Serverless is just one more step in the constant march towards greater and greater abstraction of underlying resources. We’ll go through a period of rapid change as people adopt it, hit the limits of practicality, and reshape how it works.

For now, think of it as just another option in an already vast array of possible ways to deploy code. 

Copyright © CRN Australia. All rights reserved.
