The term “serverless” has infiltrated most cloud conversations, shorthand for the natural evolution of cloud-native computing, complete with many productivity, efficiency and simplicity benefits. The advent of modern “Functions as a Service” platforms like AWS Lambda and Google Cloud Functions heralded a new way of thinking about cloud-based applications: a move away from monolithic, slow-moving applications toward distributed, event-based, serverless applications built from lightweight, single-purpose functions, where managing the underlying infrastructure was a thing of the past.
With these early serverless platforms, developers got a taste for not needing to reason about, or pay for, raw infrastructure. Not surprisingly, that led them to apply the benefits of serverless to more traditional workloads. Whether it was simple ETL use cases or legacy web applications, developers wanted the benefits of serverless platforms to increase their productivity and time-to-value.
Needless to say, many traditional workloads turned out to be a poor fit for the assumptions of most serverless platforms, and the task of rewriting those large, critical, legacy applications into a swarm of event-based functions wasn’t all that appealing. What developers needed was a platform that could provide all the core benefits of serverless, without requiring them to rewrite their application — or really have an opinion at all about the workload they wanted to run.
With the introduction of Cloud Run in 2019, the team here at Google Cloud aimed to redefine how the market, and our customers, thought about serverless. We created a platform that is serverless at its core, but that’s capable of running a far wider set of applications than previous serverless platforms. Cloud Run does this by using the container as its fundamental primitive. And in the two years since launch, the team has released 80 distinct updates to the platform, averaging an update every 10 days. Customers have similarly accelerated their adoption: Cloud Run deployments more than quadrupled from September 2020 to September 2021.
The next generation of serverless platforms will need to maintain the core, high-value characteristics of the first generation, things like:
- Rapid auto-scaling, both from and to zero
- The option of pay-per-use billing models
- Low barriers to entry through simplicity
Looking ahead, serverless platforms will need a much more robust set of capabilities to serve a new, broader range of workloads and customers. Here are the top five trends in serverless platforms that we see for 2022 and beyond.
1. More (legacy) workloads
Serverless’s value proposition isn’t limited to new applications, and shouldn’t require a wholesale rewrite of what is (and has been) working just fine. Developers ought to be able to apply the benefits of serverless to a wider range of workloads, including existing ones.
Cloud Run has been able to expand the range of workloads it can address with several new capabilities, including:
- Per-instance concurrency. Many traditional applications run poorly when constrained to a single-request model that’s common in FaaS platforms. Cloud Run allows for up to 1,000 concurrent requests on a single instance of an application, providing a far greater level of efficiency.
- Background processing. Current-generation serverless platforms often “freeze” the function when it’s not in use. This makes for a simplified billing model (only pay while it’s running), but can make it difficult to run workloads that expect to do work in the background. Cloud Run supports new CPU allocation controls, which allow these background processes to run as expected.
- Any runtime. Modern languages or runtimes are usually appropriate for new applications, but many existing applications either can’t be rewritten, or depend on a language that the serverless platform does not support. Cloud Run supports standard Docker images and can run any runtime, or runtime version, that you can run in a container.
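The three capabilities above map to deploy-time settings on the gcloud CLI. A minimal sketch, assuming the gcloud CLI is installed and authenticated; the service name and image URL are placeholders, not names from this article:

```shell
# Deploy any container image (any language or runtime) to Cloud Run.
# --concurrency lets a single instance serve up to 1000 requests at
# once, and --no-cpu-throttling keeps CPU allocated between requests
# so background work can continue. "hello" and the image URL are
# placeholders for your own service and image.
gcloud run deploy hello \
  --image=us-docker.pkg.dev/my-project/my-repo/hello:latest \
  --region=us-central1 \
  --concurrency=1000 \
  --no-cpu-throttling
```

Because the unit of deployment is a standard container image, the same command works for a legacy runtime the platform was never specifically built for.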
2. Security and supply chain integrity
Recent high-profile hacks like SolarWinds, Mimecast/Microsoft Exchange, and Codecov have preyed on software supply chain vulnerabilities. Malicious actors are compromising the software supply chain — from bad code submission to bypassing the CI/CD pipeline altogether.
Cloud Run integrates with Cloud Build, which offers SLSA Level 1 compliance by default and verifiable build provenance. With code provenance, you can trace a binary to the source code to prevent tampering and prove that the code you’re running is the code you think you’re running. Additionally, the new Build Integrity feature automatically generates digital signatures, which can then be validated before deployment by Binary Authorization.
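Enforcement of those signatures happens at deploy time. A hedged sketch of wiring Binary Authorization into a deployment (the service name and image URL are hypothetical; the policy referenced is the project's default Binary Authorization policy):

```shell
# Require that the image satisfies the project's default Binary
# Authorization policy before Cloud Run will deploy it. Unsigned or
# unattested images are rejected at deploy time.
gcloud run deploy hello \
  --image=us-docker.pkg.dev/my-project/my-repo/hello:latest \
  --region=us-central1 \
  --binary-authorization=default
```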
3. Cost controls and billing flexibility
Workloads with highly variable traffic patterns, or those with generally low traffic, are a great fit for the rapid auto-scaling and scale-to-zero characteristics of serverless. But workloads with a more steady-state pattern can often be expensive when run with fine-grained pay-per-use billing models. In addition, as powerful as unbounded auto-scaling can be, it can make it difficult to predict the future cost of running an application.
Cloud Run includes multiple features to help you manage and reduce costs for serverless workloads. Organizations with stable, steady-state, and predictable usage can now purchase committed use contracts directly in the billing UI, for deeply discounted prices. There are no upfront payments, and these discounts can help you reduce your spend by as much as 17%.
The always-on CPU feature removes all per-request fees, and is priced 25% lower than the standard pay-per-request model. This model is generally preferred for applications with either more predictable traffic patterns, or those that require background processing.
For applications that require high availability with global deployments, traditional “fixed footprint” platforms can be incredibly costly, with each redundant region needing to carry the capacity for all global traffic. The scale-to-zero behavior of Cloud Run, together with its availability in all GCP regions, makes it possible to have a globally distributed application without needing a fixed capacity allocation in any region.
4. Integrated DevOps experience, with built-in best practices
A large part of increasing simplicity and productivity for developers is about reducing the barriers to entry so they can just focus on their code. This simplicity needs to extend beyond the “day one” operations, and provide an integrated DevOps experience.
Cloud Run supports an end-to-end DevOps experience, all the way from source code to “day-two” operations tooling:
- Start with a container or use buildpacks to create container images directly from source code. In fact, you don’t even need to learn Docker or containers. With a single “gcloud run deploy” command, you can build and deploy your code to Cloud Run.
- Built-in tutorials in Cloud Shell Editor and Cloud Code make it easy to come up to speed on serverless. No more switching between tabs, docs, your terminal, and your code. You can even author your own tutorials, allowing your organization to share best practices and onboard new hires faster.
- Experiment and test ideas quickly. In just a few clicks, you can perform gradual rollouts and rollbacks, and perform advanced traffic management in Cloud Run.
- Get access to distributed tracing with no setup or configuration, allowing you to find performance bottlenecks in production in minutes.
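The source-to-production flow above can be sketched in a few commands. The service name and revision names below are placeholders for illustration:

```shell
# Build and deploy straight from source: buildpacks detect the
# language and produce the container image; no Dockerfile needed.
gcloud run deploy hello --source . --region=us-central1

# Gradual rollout: send 10% of traffic to a new revision while the
# current one keeps serving 90%. Revision names are placeholders.
gcloud run services update-traffic hello \
  --region=us-central1 \
  --to-revisions=hello-00002-abc=10,hello-00001-xyz=90

# Instant rollback: shift all traffic back to the prior revision.
gcloud run services update-traffic hello \
  --region=us-central1 \
  --to-revisions=hello-00001-xyz=100
```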
5. Portability, without lock-in
The code you write and the applications you run should not be tied to a single vendor. The benefits of the vendor’s platform should apply to your application without you needing to alter it in ways that lock you in to a particular vendor.
Cloud Run runs standard Docker container images. When deploying source code directly to Cloud Run, we use open source buildpacks to turn your source code into a container. Your source code, the buildpacks used, and the resulting container can always be run locally, on-prem, or on any other cloud.
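To make that portability concrete, here is a sketch of building and running the same artifact with no Cloud Run involvement at all, assuming the open source pack CLI and Docker are installed locally ("hello" is a placeholder image name):

```shell
# Build the image locally with the same open source buildpacks that
# Cloud Run uses for source deployments.
pack build hello --builder gcr.io/buildpacks/builder:v1

# Run the resulting image with plain Docker, anywhere a container
# runtime is available. Cloud Run expects apps to listen on $PORT,
# so the same convention works locally.
docker run --rm -p 8080:8080 -e PORT=8080 hello
```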
Look no further
These five trends are important things to consider as you compare the various serverless solutions in the market in the coming year. The best serverless solution will allow you to run a broad spectrum of apps, without language, networking or regional restrictions. It will also offer secure multi-tenancy, with an integrated secure software supply chain. And you’ll want to consider how the platform helps you keep costs in check, whether it provides an integrated DevOps experience, and whether it ensures portability. Once you’ve answered these questions for yourself, we encourage you to try out Cloud Run, with these Quickstart guides.
By: Jason Polites (Group Product Manager) and Aparna Sinha (Director of Product Management)
Source: Google Cloud Blog