Google Anthos: a Few Steps Closer to the Virtual Data Center
Google’s cloud services management platform, Anthos, is not just an incremental feature improvement—it will change how organizations utilize compute resources. Virtual machines let us stop thinking about the details of physical servers. Containers gave us a framework for efficiently running microservices on a single server. And now Anthos is the start of abstracting away the data center, so we can think in terms of workloads instead of infrastructure.
There’s an old saying in science: extraordinary claims require extraordinary evidence. Claiming Anthos will essentially virtualize the data center requires substantial support. Three factors drive this assertion.
First, Anthos is a collection of services, including Google Kubernetes Engine (GKE) and GKE On-Prem. GKE is Google’s managed Kubernetes service, which relieves you of managing cluster infrastructure that runs Kubernetes. It runs in the Google Cloud Platform (GCP).
GKE On-Prem is a version of GKE that runs in your data center; it provides the same functionality as GKE, but runs on your infrastructure. Step one of abstracting the data center is having a common compute platform that crosses data center boundaries; that’s exactly what GKE and GKE On-Prem are doing.
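To make that concrete, here is a sketch of what “a common compute platform” means in practice: a single Kubernetes Deployment manifest that could be applied unchanged to a GKE cluster in GCP or to a GKE On-Prem cluster in your own data center. The service name and image are hypothetical, chosen only for illustration.

```yaml
# Hypothetical Deployment; the same manifest works on GKE in GCP
# and on GKE On-Prem, because both expose the standard Kubernetes API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fraud-scorer            # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: fraud-scorer
  template:
    metadata:
      labels:
        app: fraud-scorer
    spec:
      containers:
      - name: scorer
        # hypothetical container image
        image: gcr.io/example-project/fraud-scorer:1.0
        ports:
        - containerPort: 8080
```

Deploying to either environment is then just a matter of pointing `kubectl` at the right cluster context—the manifest itself carries no cloud-specific or data-center-specific details.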
Second, Anthos provides a centralized configuration management service. Clusters are described in code, which can run in any of the supported clouds or in your on-premises data center. Policies are defined once and deployed across managed clusters.
Anthos Config Management can detect changes to code in GitHub repositories, build images, and deploy changes automatically. If the state of the cluster drifts, the configuration service will detect it and bring the cluster back to the desired state.
Also, role-based access controls, quotas, and other policies can be defined once and applied across public cloud and on-premises clusters. Step two of abstracting the data center is having a centralized management system that enables configuration through a common specification. With centralized configuration management, there is no need for cloud vendor-specific configuration tools, which further abstracts away cloud-specific implementation details.
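As a sketch of what policy-as-code might look like here, consider a role-based access control binding committed to the configuration repository. The namespace, group, and file path below are hypothetical; the point is that one declarative file, checked into source control, becomes the enforced state on every managed cluster.

```yaml
# Hypothetical repo path: namespaces/payments/rolebinding.yaml
# Once committed, the config management service applies this binding
# in the payments namespace on every cluster it manages; manual edits
# to the live object are detected as drift and reverted.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-viewers
  namespace: payments
subjects:
- kind: Group
  name: payments-team@example.com   # hypothetical group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                        # Kubernetes built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```

Because the binding uses only standard Kubernetes RBAC objects, the same file is valid on a cluster in GCP, in AWS, or on-premises.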
The third enabling element of Google Anthos is a control plane for Kubernetes clusters, known as a service mesh. Istio is an open source system for implementing such controls within a cluster. Istio has the capabilities to:
- Control the flow of traffic
- Facilitate deployment strategies, like blue/green deployments
- Implement rate limiting and access controls on services running in the cluster
- Collect metrics and logs about cluster activities
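The traffic-control and blue/green capabilities above can be sketched with an Istio VirtualService. In this hypothetical example, 90% of requests to a service keep going to the current ("blue") version while 10% are shifted to the new ("green") version; the service name and weights are assumptions for illustration.

```yaml
# Hypothetical Istio VirtualService: route 90% of traffic to the
# "blue" subset and 10% to the "green" subset of the same service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fraud-scorer
spec:
  hosts:
  - fraud-scorer
  http:
  - route:
    - destination:
        host: fraud-scorer
        subset: blue
      weight: 90
    - destination:
        host: fraud-scorer
        subset: green
      weight: 10
```

A companion DestinationRule would define the `blue` and `green` subsets by pod labels; adjusting the weights over time turns this into a gradual rollout, with no changes to the application itself.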
The combination of running a common platform across diverse infrastructures in different data centers, configuring clusters and access controls by policy, and providing services like rate limiting and monitoring to microservices will allow the focus to shift to higher-value tasks like optimizing workflows.
Consider an example of a machine learning classifier detecting fraud in retail transactions. The retail web application may run in AWS, the machine learning model may be built in Google Cloud, and the data may be stored in an on-premises data center.
To build a model and keep it up to date, we’ll need to access data in the on-premises data center, then apply some pre-processing logic that runs most efficiently there. The data is then sent to Google Cloud, where the machine learning models are built. When the models are complete, they’re packaged in a container and run on AWS alongside the retail web application.
This may be a logical and efficient workflow, but architects and developers would likely cringe at the idea of distributing a tightly coupled workflow across three different infrastructure platforms. Too much can go wrong when they have to manage this manually:
- Configurations could be out of synchronization
- It’s difficult to manage IP addresses across the platforms
- Each platform has a different means of specifying access controls
But if Anthos can let us avoid this kind of DevOps toil and management overhead, the barriers to implementing a hybrid-cloud workflow become significantly lower.
Anthos is streamlining the deployment of services across clouds and on-premises infrastructure. The more we can take advantage of abstracting the data center and focus on workflows, the more value we can generate.