Service Mesh and GKE Enterprise (Anthos) for the PCA Exam

GCP Study Hub
Ben Makansi
April 25, 2026

Service mesh and GKE Enterprise are two of the GKE topics that come up in the Professional Cloud Architect exam without ever being deeply explained in the question stem. The exam expects you to recognize the problem each one solves and pick it as the answer when the right keywords appear. Both concepts live above plain GKE in the stack, and both are easy to confuse with networking or governance features that exist elsewhere in Google Cloud. I want to walk through what each of them actually does so you can identify them quickly on the test.

What a service mesh actually is

A service mesh is a layer of infrastructure that manages communication between microservices in a distributed application. The defining feature is that it adds traffic management, observability, security, and fault injection to your services without requiring any changes to your application code. That last part is the key idea. If a question describes a team that wants to add mutual TLS, retries, or detailed per-request telemetry to a microservices app and explicitly says they cannot modify the application code, the answer is almost always a service mesh.

The mechanism is the sidecar proxy. Each pod runs your service container plus a sidecar proxy container next to it. All traffic in and out of the service flows through that proxy. The proxies are configured by a control plane that pushes policy and collects telemetry. Because the proxies sit in the network path between services, they can do things like inject failures (drop requests, add latency) to test resilience, enforce authentication between services, and emit metrics like response time without the service code knowing any of it is happening.
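
As a concrete sketch of proxy-level fault injection, here is a minimal Istio VirtualService that delays a share of requests to a hypothetical `reviews` service (the service name and values are illustrative, not from any exam question):

```yaml
# Hypothetical example: delay 10% of requests to the "reviews" service
# by 5 seconds, entirely in the sidecar proxies -- no app code changes.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-fault-delay
spec:
  hosts:
  - reviews            # in-mesh service to target
  http:
  - fault:
      delay:
        percentage:
          value: 10.0  # inject latency into 10% of calls
        fixedDelay: 5s # each injected call waits 5 seconds
    route:
    - destination:
        host: reviews  # traffic still reaches the real service
```

Because the delay is applied by the proxy in the network path, the reviews container itself never sees any difference.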

On Google Cloud, the managed implementation is Cloud Service Mesh, a fully managed service built on open-source Istio. The Professional Cloud Architect exam may use the name Istio, the name Cloud Service Mesh, or just the phrase "service mesh." All three refer to the same category of product and the same set of capabilities. Fault injection for resilience testing, traffic management between service versions, mTLS between pods, and observability tied to performance metrics are the answers a service mesh question is asking you to give.
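
Traffic management between service versions works the same way, through proxy configuration rather than code. A hedged sketch, assuming a hypothetical `checkout` service whose `v1` and `v2` subsets are already defined in a matching DestinationRule:

```yaml
# Hypothetical canary split: 90% of traffic to v1, 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout-split
spec:
  hosts:
  - checkout
  http:
  - route:
    - destination:
        host: checkout
        subset: v1   # subsets defined in a separate DestinationRule
      weight: 90
    - destination:
        host: checkout
        subset: v2
      weight: 10
```

Shifting the weights rolls the new version out gradually without touching either deployment's code.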

What a service mesh is not

It is worth being explicit about what does not require a service mesh, because the exam writes plausible distractors. Standard GKE networking, ingress controllers, and external load balancers handle north-south traffic into the cluster. They do not give you per-request mTLS between internal services or programmable fault injection. Cloud Armor and VPC firewall rules handle perimeter security and IP-level filtering. They do not authenticate one service to another. Cloud Logging and Cloud Monitoring give you logs and metrics, but they do not capture per-call traces between microservices the way a sidecar proxy does. When the scenario says "without changing application code" and asks for any of the four mesh capabilities, pick the mesh.
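
For the mTLS capability specifically, the mesh-wide switch is a single resource, which is exactly why firewall rules and Cloud Armor are distractors here. A minimal sketch using Istio's PeerAuthentication API (the namespace shown is Istio's default root namespace; a managed Cloud Service Mesh install may use a different one):

```yaml
# Require mutual TLS for all service-to-service traffic in the mesh.
# Sidecar proxies handle certificates and handshakes; no app changes.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT           # reject any plaintext traffic between pods
```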

GKE Enterprise and Anthos clusters

GKE Enterprise is the productized name for what used to be marketed as Anthos. It manages fleets of Kubernetes clusters as a group, and the clusters in a fleet can run on Google Cloud, on-premises, or in other clouds like AWS and Azure. That hybrid and multi-cloud reach is the headline feature. If a Professional Cloud Architect scenario describes an enterprise that needs centralized policy and operations across clusters running in their own data center plus clusters running in GCP plus clusters running in AWS, GKE Enterprise is the answer. Plain GKE is not, because plain GKE only runs on Google Cloud.

The components you should be able to name are Anthos Clusters, Anthos Config Management, and Cloud Service Mesh. Anthos Clusters are the Kubernetes clusters across all the environments that GKE Enterprise stitches together. Anthos Config Management enforces Kubernetes policies and configurations consistently across the fleet, so a single source of truth in Git can govern every cluster. Cloud Service Mesh is often paired with GKE Enterprise to give you the same traffic management, security, and observability story across the whole fleet that you would have inside a single cluster.
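
The "single source of truth in Git" idea is concrete in Config Sync, the sync engine inside Anthos Config Management. A hedged sketch of a RootSync resource pointing a cluster at one central policy repo (the repo URL and branch are placeholders):

```yaml
# Hypothetical example: sync cluster config from a central Git repo.
# Applying the same RootSync to every fleet cluster gives each one
# the same policies, namespaces, and quotas from one source of truth.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured   # repo holds plain Kubernetes manifests
  git:
    repo: https://github.com/example-org/fleet-config  # placeholder
    branch: main
    auth: none                 # public repo; real fleets use a secret
```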

The product is positioned for production-grade workloads with enhanced security and observability. The exam framing tends to be: large enterprise, mixed footprint, wants one control plane for governance and operations across all of it. That is the GKE Enterprise question.

How to read these questions on the exam

On the Professional Cloud Architect exam, service mesh and GKE Enterprise sit close to each other in the GKE chapter, but they answer different questions. Service mesh is about communication between microservices in a distributed application, with the explicit constraint of not modifying application code. GKE Enterprise is about managing many Kubernetes clusters across hybrid and multi-cloud environments as a single fleet. If a question mentions on-prem plus GCP plus another cloud, you are looking at GKE Enterprise. If a question mentions mTLS, fault injection, traffic splitting, or per-request observability inside a microservices app, you are looking at a service mesh. The two often appear together in marketing, but on the exam each signals its own distinct scenario.

My Professional Cloud Architect course covers service mesh and GKE Enterprise alongside the rest of the containers and serverless material.
