As we discussed in part 2 of this blog series, if you design your edge computing realistically, your systems may not be connected to the network all the time. But there are a variety of tools you can use to manage those edge deployments effectively, and even tie them back into your main environment. In this third blog of the series, we’ll discuss the role of software in edge computing and the Google Cloud solutions that support it.
Google provides software
When it comes to edge environments, Google Cloud’s role is clear: we treat the edge as the domain of our customers and partners. We do not ship remote servers for pickup or preconfigured boxes that simply sync back to us. Instead, we provide the software and tools to configure and maintain all of your clusters as part of the Anthos suite, built on Google Kubernetes Engine (GKE) and open-source Kubernetes. An Anthos cluster at the edge may be a full GKE edge installation or a fleet of attached microk8s clusters running on Raspberry Pis. As long as the attached cluster is running Anthos Config Management and Policy Controller, it can be managed over consistent or intermittent connectivity. In addition, Anthos fleets make it easy to organize Kubernetes clusters into manageable groups, delineated by cross-service communication patterns, location, environment, or the administrator responsible for a block of clusters.
This is a different approach from other cloud providers, who may provide a similar fully managed experience but with proprietary hardware that inevitably leads to a certain level of lock-in. By focusing on the software stack, Google sets the path for long-term, successful edge fleet management.
(As an aside, if you are interested in a fully managed experience, Google partners with vendors who will take the responsibility of managing the hardware and configuration of the Anthos edge clusters.)
Let’s look at the various tools that Google Cloud offers and how they fit into an edge deployment.
Kubernetes and GKE
Where does Kubernetes fit in? In a nutshell, Kubernetes brings convention.
The edge is unpredictable by nature. Kubernetes brings stability and consistency, and extends familiar control and data planes to the edge. It opens the door to immutable, containerized deployments and predictable operations.
Data centers and cloud service providers deliver predictable environments. But the broader reach of the edge introduces instability that platform managers are not accustomed to. In fact, platform managers have been working to avoid instability for the past two decades. Thankfully, Kubernetes thrives in this extended edge ecosystem.
Often, in the enterprise, we think of massive Kubernetes clusters running complex, interdependent microservice workloads. But at its core, Kubernetes is a lightweight distributed system that also works well when deployed at the edge with just a few focused deployments. Kubernetes increases stability, offers a standardized open-source control-plane API, and can serve as a communications or consolidation hub at edge installations that are saturated with devices. It also brings a standard container host platform for software deployments. A simple redundant pair of NUCs or Raspberry Pi racks can improve edge availability and normalize the way our data centers communicate with our edge footprints.
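As a minimal sketch of what "a few focused deployments" can look like, consider a hypothetical edge workload (all names, namespaces, and the image registry below are illustrative, not from the original post):

```yaml
# Hypothetical edge workload: a small sensor-aggregation service
# deployed on a redundant two-node edge cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-aggregator
  namespace: edge-store-0042      # illustrative per-location namespace
spec:
  replicas: 2                     # one pod per node in the redundant pair
  selector:
    matchLabels:
      app: sensor-aggregator
  template:
    metadata:
      labels:
        app: sensor-aggregator
    spec:
      containers:
      - name: aggregator
        image: registry.example.com/edge/sensor-aggregator:1.4.2
        resources:
          requests:               # modest footprint suited to NUC-class hardware
            cpu: 250m
            memory: 128Mi
```

Because this is an ordinary Kubernetes manifest, the same deployment workflow used in the data center applies unchanged at the edge.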
What about Anthos? Anthos brings order.
Without a good strategy and tools, the edge can be daunting if not impossible to manage cost-effectively. While it’s common to have multiple data centers and cloud providers, edge surfaces can number in the hundreds or thousands! Anthos brings control, governance and security at scale. With Anthos, we overlay a powerful framework of controls that extends from our core cloud and data center management systems to the farthest reaches of your edge deployments.
Anthos allows central administration of remote GKE or attached Kubernetes clusters — running private services to support location-specific clients. We see the Anthos edge story developing in all of these industries:
- Retail Stores
- Manufacturing and Factories
- Telco and Cable Providers
- Medical, Science and Research Labs
Anthos Config Management and Policy Controller
Configuration requirements have advanced in leaps and bounds. Anthos Config Management (ACM) and Policy Controller come to the rescue in these scenarios, enabling platform operations teams to manage large deployments of edge resources (fleets) at scale. With ACM, operators create and enforce consistent configurations and security policies across edge installations.
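For instance, a policy checked into an ACM-managed repository can be enforced uniformly on every synced cluster. The sketch below uses the `K8sRequiredLabels` constraint from the Gatekeeper template library that Policy Controller builds on; the specific label requirement is a hypothetical example:

```yaml
# Illustrative Policy Controller constraint: require every namespace
# to carry a "location" label, enforced identically across the fleet.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-have-location
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels:
    - key: "location"
```

Once this file lands in the config repository, each edge cluster’s sync agent pulls and enforces it locally, which is what makes the model tolerant of intermittent connectivity.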
For example, one Google Cloud customer and partner plans to deploy three bare-metal servers, running either Anthos Bare Metal or attached clusters in an HA configuration (all three nodes acting as both control plane and worker), at over 200 customer locations. The combined capacity of these clusters totals more than 75,000 vCPUs, and they plan to manage configuration, security, and policy at scale across this entire fleet using ACM.
As more edge clusters are added to your Anthos dashboard, cluster configurations can become fragmented and difficult to manage. To provide proper management and governance capabilities for these clusters, Google Cloud has the concept of fleets. Anthos fleets negate the need for organizations to build their own tooling to achieve the level of control that enterprises typically desire, and provide an easy way to logically group and normalize clusters, simplifying their administration and management. Fleet-based management applies to both Anthos (edge included) and GKE clusters.
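Within an ACM repository, groups of clusters can be targeted with selectors, so a config applies only to a labeled subset of the fleet. A hedged sketch, assuming clusters have been labeled by environment (the label values here are hypothetical):

```yaml
# Illustrative ClusterSelector: matches only clusters labeled as
# retail edge locations within the fleet.
apiVersion: configmanagement.gke.io/v1
kind: ClusterSelector
metadata:
  name: selector-retail-stores
spec:
  selector:
    matchLabels:
      environment: retail-edge
```

A config annotated with `configmanagement.gke.io/cluster-selector: selector-retail-stores` is then synced only to the retail-store clusters, leaving factory or lab clusters untouched.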
Anthos Service Mesh
The edge is fertile ground for microservices architectures. Smaller, lightweight services improve reliability, scalability, and fault tolerance, but they also bring complexity in traffic management, mesh telemetry, and security. Anthos Service Mesh (ASM), based on open-source Istio, provides a consistent framework for reliable and efficient service management. It gives service operators critical features like tracing, monitoring, and logging; it facilitates zero-trust security implementations and allows operators to control traffic flow between services. These are features we have been dreaming of for years. Virtualizing services decouples networking from applications, and further separates operations from development. ASM, together with ACM and Policy Controller, is a powerful set of tools to simplify service delivery and drive agile practices without compromising on security.
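To make the traffic-management point concrete, here is a sketch of the kind of Istio routing rule ASM supports, shifting a small share of traffic to a new version of a hypothetical `checkout` service (the service name, subsets, and weights are illustrative; a matching `DestinationRule` defining the subsets is assumed):

```yaml
# Illustrative Istio VirtualService: a 90/10 canary split between two
# versions of a service, applied without changing application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: v1
      weight: 90                  # stable version keeps most traffic
    - destination:
        host: checkout
        subset: v2
      weight: 10                  # canary receives a small share
```

Because the split lives in mesh configuration rather than in the application, operators can roll a change out to one edge location, observe it through mesh telemetry, and then widen the rollout.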
Pushing the edge to the edge
Even though edge computing has been around for a long time, we believe that enterprises are just beginning to wake up to the potential that this model provides. Throughout this series, we’ve demonstrated the incredible speed of change and high potential that edge technology promises. Distributing asynchronous and intermittently connected fleets of customer-managed commodity hardware and dedicated devices to do the grunt work for our data centers and cloud VPCs opens up huge opportunities in distributed processing.
For enterprises, the trick to taking advantage of the edge is building edge installations that focus on the use of private services, and designing platforms that are tolerant of hardware and network failures. The good news is that Google Cloud offers a full software stack, including Kubernetes, GKE, Anthos, Anthos fleets, Anthos Service Mesh, Anthos Config Management and Policy Controller, that enables platform operators to manage remote edge fleets in places far, far away!
By: Joshua Landman (Customer Engineer, Application Modernization Specialist, Google) and Praveen Rajagopalan (Customer Engineer, Application Modernization Specialist, Google)
Source: Google Cloud Blog