Designing GKE Private Clusters for the PCA Exam

GCP Study Hub
Ben Makansi
January 18, 2026

Private GKE clusters are one of those Professional Cloud Architect topics where the exam keeps circling back to the same handful of design decisions. You will see questions about node isolation, control plane access, outbound internet traffic, API connectivity, and how pods authenticate to Google services. The answers are not complicated, but they only make sense if you understand how the pieces fit together.

I am going to walk through the five concepts that matter for this part of the exam: private clusters, Master Authorized Networks, Cloud NAT, Private Google Access, and Workload Identity. Each one solves a specific problem that comes up when you isolate a cluster from the public internet.

What a Private Cluster Actually Means

A private GKE cluster is a cluster where the worker nodes have only internal IP addresses. They have no public IPs, so they cannot directly reach the internet and the internet cannot directly reach them. The nodes can talk to each other and to other resources inside the same VPC, and that is it.

You can also configure a private control plane endpoint. When you do, the control plane is reachable only through a private IP, with access governed by rules you define. The combination of private nodes plus a private control plane endpoint shrinks the attack surface significantly. Both halves of the cluster are isolated from the public internet.

This is the baseline configuration the Professional Cloud Architect exam expects you to know. When a question describes a regulated workload that cannot have public IP exposure, private cluster with private control plane endpoint is the answer.
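As a rough sketch, this baseline can be created with a single gcloud command. The cluster name, zone, and control plane CIDR below are placeholders, and a private endpoint generally requires authorized networks to be enabled as well:

```shell
# Sketch: a private cluster with private nodes and a private
# control plane endpoint. Names and CIDRs are placeholders.
gcloud container clusters create my-private-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --enable-master-authorized-networks \
  --master-ipv4-cidr 172.16.0.0/28
```

The --master-ipv4-cidr range is reserved for the control plane's private endpoint and must not overlap with any subnet in the VPC.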

Master Authorized Networks

Once the control plane is private, you still need a way to decide who can reach it. That is what Master Authorized Networks does. You specify a list of trusted CIDR ranges, and only traffic from those ranges is allowed to hit the control plane. Everything else is dropped.

Typical entries might be a corporate network like 192.168.1.0/24, a development team subnet like 10.1.0.0/16, or a single workstation like 203.0.113.25/32. Anything outside those ranges is blocked automatically.

Master Authorized Networks works with both public and private clusters. On a public cluster, it restricts who can reach the public control plane endpoint. On a private cluster, it adds another layer on top of the private endpoint. The exam likes to combine these. If a question asks how to lock down control plane access to specific corporate IPs, Master Authorized Networks is the feature.
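Wiring up the example ranges above might look like the following sketch, using a placeholder cluster name:

```shell
# Sketch: restrict control plane access to the corporate network,
# the dev subnet, and a single workstation from the example above.
gcloud container clusters update my-private-cluster \
  --zone us-central1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 192.168.1.0/24,10.1.0.0/16,203.0.113.25/32
```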

Cloud NAT for Outbound Internet Access

Private nodes have no public IPs, which means by default they cannot reach the internet at all. That is a problem when your workloads need to pull container images from public registries, hit a public API, or download dependencies.

Cloud NAT solves this. It is a managed service that translates the private IPs of your nodes to one or more shared public IPs for outbound traffic. The nodes still have only internal addresses on the cluster side. Cloud NAT is configured on a Cloud Router at the edge of the VPC and handles the address translation, sending traffic out to the internet and routing responses back to the right node.

Nothing on the internet can initiate a connection to your nodes through Cloud NAT. It is outbound only. That preserves the isolation properties of the private cluster while still letting workloads reach external services.
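The setup is two commands: create a Cloud Router in the cluster's VPC, then attach a NAT gateway to it. The VPC, router, and gateway names below are placeholders:

```shell
# Sketch: Cloud Router plus NAT gateway for outbound-only internet
# access from private nodes. my-vpc and the resource names are placeholders.
gcloud compute routers create nat-router \
  --network my-vpc \
  --region us-central1

gcloud compute routers nats create nat-gateway \
  --router nat-router \
  --region us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```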

Private Google Access for Google APIs

Cloud NAT handles internet traffic, but a lot of what GKE workloads actually need to reach is not the open internet. It is Google services like Pub/Sub, BigQuery, Cloud Storage, and the various Google APIs. You do not want that traffic going out to the internet and back in just to talk to another Google service.

Private Google Access is the answer. When you enable it on a subnet, workloads with only private IPs can reach Google APIs directly over Google's internal network. No public IPs, no Cloud NAT in the path, no traffic leaving the Google backbone.

For private GKE clusters, this is almost always something you want enabled. If a Professional Cloud Architect exam question asks how a private cluster can reach BigQuery or Pub/Sub without internet exposure, Private Google Access is the feature.
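Private Google Access is a per-subnet flag, so enabling it on an existing subnet is one command. The subnet name here is a placeholder:

```shell
# Sketch: enable Private Google Access on the subnet the nodes use,
# so private-IP workloads can reach Google APIs over Google's network.
gcloud compute networks subnets update my-subnet \
  --region us-central1 \
  --enable-private-ip-google-access
```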

Workload Identity

The last piece is how pods authenticate to Google Cloud services. The wrong way is to create a service account, generate a JSON key file, mount it into the pod, and have the application use that key. Service account keys are long-lived credentials: they are easy to leak, and rotating them is painful.

Workload Identity replaces all of that. It binds a Kubernetes service account to a Google Cloud service account, and pods using that Kubernetes service account automatically authenticate as the corresponding Google service account. No keys, no credential files, no rotation work.

Pods can call Pub/Sub, BigQuery, Cloud Storage, or any other Google service through the bound service account, and the authentication happens transparently. On the exam, when a question asks how GKE workloads should authenticate to Google services without managing keys, Workload Identity is the answer. It is the default recommendation for any new cluster.
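A minimal setup sketch, with placeholder project, cluster, and service account names: enable the workload pool on the cluster, grant the Kubernetes service account permission to impersonate the Google service account, then annotate the Kubernetes service account with the binding.

```shell
# Sketch: enable Workload Identity on the cluster.
# my-project, my-gsa, and my-ksa are placeholders.
gcloud container clusters update my-private-cluster \
  --zone us-central1-a \
  --workload-pool my-project.svc.id.goog

# Allow the Kubernetes service account (default/my-ksa)
# to impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/my-ksa]"

# Annotate the Kubernetes service account so pods using it
# authenticate as the Google service account, with no key files.
kubectl annotate serviceaccount my-ksa \
  --namespace default \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
```

Node pools also need Workload Identity enabled (the GKE metadata server) for the binding to take effect, which new node pools on a Workload Identity cluster get by default.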

How These Pieces Fit Together

A well-designed private GKE cluster usually has all five of these features active. The cluster itself is private, with a private control plane endpoint. Master Authorized Networks restricts who can reach the control plane. Cloud NAT handles outbound internet traffic when it is needed. Private Google Access handles Google API traffic without going through the internet. Workload Identity handles pod authentication without service account keys.

The Professional Cloud Architect exam tests whether you can pick the right feature for the right problem. The questions are usually scenario-based, describing a security or connectivity requirement and asking which feature addresses it. Knowing what each one does and why you would use it is enough to get those questions right.

My Professional Cloud Architect course covers GKE private cluster design alongside the rest of the containers and serverless material.
