This is a part of our Serverless DevOps series exploring the future role of operations when supporting a serverless infrastructure. Read the entire series to learn more about how the role of operations changes and the future work involved.
Serverless still has servers.
Before we can explain the impact of serverless on operations engineers, we need to be clear about what we're discussing. Serverless is still a new concept, and its meaning remains vague to many people. As for operations, there are too many competing definitions, and we need to agree on at least one. From there, we can discuss where operations belongs and what operations people bring to the table.
What Is Serverless?
To start, let’s briefly explain what serverless is. Serverless is a cloud systems architecture in which there are no servers, virtual machines, or containers to provision or manage. They still exist underneath the running application, but their presence is abstracted away from the developer or operator of the serverless application. This is similar to how, if you’ve already adopted public cloud virtualization, the underlying hardware is no longer your concern.
So what makes something serverless? What would make a simple web application serverless but an application inside of a Docker container not?
AWS uses four characteristics to classify what is serverless. They apply to serverless cloud services and to applications as a whole, and you can use them to reasonably distinguish what is and is not serverless.
- No servers to manage or provision: You’re not managing physical servers, virtual machines, or containers. While they may exist, they’re managed by the cloud provider and inaccessible to you.
- Priced by consumption (not capacity): In the serverless community you often hear, “You never pay for idle time.” If no one is using your service, you aren’t paying for it. With AWS Lambda you pay only for the time your function runs, whereas with an EC2 instance you pay for the entire time the instance is running, idle or not.
- Scales with usage: With non-serverless systems we’re used to scaling services horizontally and vertically to meet demand. Typically, this work was done manually until cloud providers began offering auto-scaling services. Serverless services and applications have auto-scaling built in. As requests come in, a service or application scales to meet the demand. With traditional auto-scaling, however, you’re responsible for figuring out how to integrate each new service instance with the existing running instances, and some services are easier to integrate than others. Serverless takes care of that work for you.
- Availability and fault tolerance built in: You’re not responsible for ensuring the availability and fault tolerance of the serverless offerings provided by your cloud provider. That’s their job. That means you’re not running multiple instances of a service to account for the possibility of failure. If you’re running RabbitMQ, then you’ve set up multiple instances in case there’s an issue with a host. If you’re using AWS SQS, then you create a single queue. AWS provides an available and fault tolerant queuing service.
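To make the consumption-based pricing point concrete, here is a rough back-of-the-envelope comparison sketched in Python. The per-GB-second, per-request, and hourly rates below are illustrative assumptions for the sake of the arithmetic, not current published AWS prices; check your provider’s pricing page for real numbers.

```python
# Back-of-the-envelope cost comparison: pay-per-use vs. always-on.
# All rates below are illustrative assumptions, not current AWS prices.

LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667   # assumed compute rate
LAMBDA_PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed request rate
EC2_PRICE_PER_HOUR = 0.0104                  # assumed small-instance rate

def lambda_monthly_cost(invocations, avg_duration_s, memory_gb):
    """Cost scales with actual usage: zero traffic means zero cost."""
    gb_seconds = invocations * avg_duration_s * memory_gb
    return (gb_seconds * LAMBDA_PRICE_PER_GB_SECOND
            + invocations * LAMBDA_PRICE_PER_REQUEST)

def ec2_monthly_cost(hours_in_month=730):
    """Cost accrues whether the instance is busy or idle."""
    return EC2_PRICE_PER_HOUR * hours_in_month

if __name__ == "__main__":
    # 1M requests/month, 200 ms each, 128 MB of memory.
    busy = lambda_monthly_cost(1_000_000, 0.2, 0.125)
    idle = lambda_monthly_cost(0, 0.2, 0.125)  # no traffic, no bill
    print(f"Lambda (1M requests): ${busy:.2f}")
    print(f"Lambda (no traffic):  ${idle:.2f}")
    print(f"EC2 (always on):      ${ec2_monthly_cost():.2f}")
```

The exact numbers matter less than the shape of the curves: the serverless bill tracks usage down to zero, while the instance bill is a flat floor you pay regardless of traffic.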
Public vs. Private Serverless
Increasingly, all organizations are becoming tech organizations in one form or another. If you’re not a cloud-hosting provider, then cloud infrastructure is undifferentiated work; work a typical organization requires to function. One of the key advantages of serverless is the reduction in responsibilities for operating cloud infrastructure. It provides the opportunity to reallocate time and people to problems unique to the organization.
That means greater emphasis up the technical stack on the services that provide the most direct value in your organization. Serverless also allows for faster delivery of new services and features. By removing infrastructure as a potential roadblock, organizations can deliver with one less potential friction point.
There are public cloud serverless options from Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, as well as private cloud serverless offerings such as Apache OpenWhisk and Google Knative, both of which run on Kubernetes. For the purposes of this piece, we’re only considering public cloud serverless, and we use AWS examples.
We only consider public cloud serverless because, to start, private cloud serverless isn’t particularly disruptive to ops. If your organization adopts serverless on top of Kubernetes, then the work of operations doesn’t really change. You still need people to operate the serverless platform.
The second reason we only consider public cloud serverless is more philosophical. It goes back to the same reasons we largely don’t consider on-prem “cloud” infrastructure in the same light as public cloud offerings. On-prem cloud offerings often negate the benefits of public cloud adoption.
The same is true of public versus private serverless platforms. Private serverless violates all four characteristics that make something serverless. You still have to manage servers, you pay regardless of platform use, you still need to plan for capacity, and you’re responsible for its availability and fault tolerance.
More importantly, many of the benefits of serverless are erased. There’s no reduction of undifferentiated work. No reallocation of people and time to focus further up the application stack. And infrastructure still remains a potential roadblock.
In the end, you’re left with all the complexity of running and managing a serverless platform combined with the new complexity of serverless applications.
More Than Just Tech
Something important to realize about serverless is that it is more than just a technology. In the debate between private and public serverless, the criticisms of private serverless are not technical. They are criticisms about the inability to fully realize the value of serverless as a technology.
As a historical analogy, look to what made public cloud adoption so successful, or even how it failed in some organizations. Public cloud adoption led us to rethink how we architected applications, how our teams worked together, and our expectations of what was possible or even acceptable in how engineers interacted with computing resources. Contrast those experiences with organizations that saw public cloud, or even private cloud, as just a new form of host virtualization: no technical, organizational, or cultural changes. How did those organizations fare in comparison? What made public cloud adoption so influential wasn’t the technology, but how our organizations changed as a result of it.
While this series covers the technical aspects of serverless, more importantly it covers the impact serverless will have on operations and the changes it brings. As you read, always ask yourself, “How does this technology change things?”
Read The Serverless DevOps Book!
But wait, there's more! We've also released the Serverless DevOps series as a free downloadable book, too. This comprehensive 80-page book describes the future of operations as more organizations go serverless.
Whether you're an individual operations engineer or managing an operations team, this book is meant for you. Get a copy, no form required.