GKE Tooling for the PCA Exam: kubectl, kubemci, Helm

GCP Study Hub
Ben Makansi
December 9, 2025

If you are preparing for the Professional Cloud Architect exam, GKE shows up enough that you need to know which command line tool does what. The exam will not test you on every flag, but it will expect you to recognize when a question is about cluster-level operations versus workload-level operations, and to know the names of the tools Google ships for multi-cluster Ingress and packaged deployments.

This article walks through the three GKE tools the Professional Cloud Architect exam expects you to recognize: kubectl, kubemci, and Helm.

kubectl vs gcloud: the split that matters

The first thing to internalize is the boundary between kubectl and gcloud. They sit on opposite sides of the cluster.

kubectl manages workloads and resources inside the cluster. Deploying an application, listing services, scaling pods, checking the status of a Deployment, switching between contexts. Anything that talks to the Kubernetes API is kubectl.

gcloud manages the cluster itself. Creating a cluster, updating a cluster, deleting a cluster, enabling autoscaling on the underlying node pool, fetching credentials so your local machine can talk to the cluster in the first place. Anything that talks to the GKE control plane as a Google Cloud resource is gcloud.

A useful test when you read an exam question: is the action something Kubernetes itself understands, or is it something only Google Cloud understands? If the answer involves Pods, Services, Deployments, Ingress, or contexts, you want kubectl. If it involves the cluster as a billable Google Cloud resource, you want gcloud.

kubectl commands worth recognizing

A few specific kubectl commands come up often enough on the Professional Cloud Architect exam that you should be able to read them without hesitation.

To trigger a rolling deployment update with a new container image:

kubectl set image deployment/[DEPLOYMENT_NAME] [CONTAINER_NAME]=[IMAGE_NAME]

This updates the named Deployment to use a new image, and Kubernetes handles the rollout with minimal downtime by replacing pods incrementally.
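A concrete sketch of the flow, using hypothetical Deployment, container, and image names (frontend, web, gcr.io/my-project/web:v2):

```shell
# Point the "web" container in the "frontend" Deployment at a new image tag.
# Kubernetes starts a rolling update: new pods come up, old pods drain.
kubectl set image deployment/frontend web=gcr.io/my-project/web:v2

# Watch the rollout until every replica is running the new image.
kubectl rollout status deployment/frontend

# If the new image misbehaves, revert to the previous revision.
kubectl rollout undo deployment/frontend
```

The rollout status and undo commands are not required for the exam, but they show why the rolling update is low-risk: the old ReplicaSet sticks around until the new one is healthy.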

To switch between Kubernetes contexts (for example, between dev, staging, and prod clusters you have credentials for):

kubectl config use-context [CONTEXT_NAME]

To inspect what your current kubeconfig actually contains:

kubectl config view

The context-switching commands matter because most realistic GKE environments have more than one cluster, and the exam likes to set up scenarios where someone is targeting the wrong cluster.
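A quick sketch of context switching. The context name below follows GKE's generated naming pattern (gke_PROJECT_LOCATION_CLUSTER); the project, location, and cluster names are hypothetical:

```shell
# List every context in your kubeconfig; the asterisk marks the active one.
kubectl config get-contexts

# Print only the name of the context kubectl is currently targeting.
kubectl config current-context

# Switch to a (hypothetical) production cluster before applying changes.
kubectl config use-context gke_my-project_us-east1_prod-cluster
```

Running current-context before a destructive command is the cheap insurance against the wrong-cluster scenario the exam likes to describe.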

gcloud commands worth recognizing

On the gcloud side, the two commands that come up most often handle credential fetching and autoscaling.

To fetch credentials and configure your local environment to talk to a GKE cluster:

gcloud container clusters get-credentials [CLUSTER_NAME]

This is the bridge command. Without it, kubectl has nothing to authenticate against. After running it, your kubeconfig is populated with the cluster's endpoint and credentials, and kubectl will work.
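In practice the command also needs the cluster's location unless you have set a default: --region for regional clusters or --zone for zonal ones. A sketch with hypothetical cluster and project names:

```shell
# Fetch endpoint and credentials for a (hypothetical) regional cluster
# and merge them into the local kubeconfig.
gcloud container clusters get-credentials prod-cluster \
    --region=us-east1 \
    --project=my-project

# Verify the bridge worked: kubectl now targets the new context.
kubectl config current-context
```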

To enable autoscaling on a cluster's node pool:

gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-nodes=[MIN] --max-nodes=[MAX]

Note that this is cluster-level autoscaling (adding and removing nodes), which is distinct from Horizontal Pod Autoscaling (adding and removing pods inside the cluster). HPA is configured through Kubernetes manifests and managed via kubectl. Cluster autoscaling is configured through gcloud because it is changing the underlying Compute Engine resources Google Cloud bills you for.
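The split is easiest to see side by side. Both commands below use hypothetical names; the first changes Compute Engine node counts, the second creates a Kubernetes HorizontalPodAutoscaler object:

```shell
# Cluster autoscaler: node-level, a gcloud operation on the GKE resource.
gcloud container clusters update prod-cluster \
    --enable-autoscaling --min-nodes=1 --max-nodes=5

# Horizontal Pod Autoscaler: pod-level, a Kubernetes object managed via kubectl.
kubectl autoscale deployment frontend --min=2 --max=10 --cpu-percent=70
```

If an exam question mentions nodes, machine types, or billing, it is the first kind; if it mentions pods reacting to CPU or load, it is the second.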

kubemci: multi-cluster Ingress

kubemci is a Google-provided tool for configuring multi-cluster Ingress in Kubernetes. The use case is narrow but specific: you have Kubernetes clusters in more than one region, and you want a single global HTTP(S) load balancer to spread traffic across all of them.

Without kubemci, each cluster has its own Ingress and its own load balancer, and you would need a separate layer (DNS, or a manually configured load balancer) to fan traffic across regions. kubemci sets up a global HTTP(S) Load Balancer from Cloud Load Balancing that spans the clusters, so a user in Europe is routed to the cluster in europe-west1 and a user in the United States is routed to the cluster in us-east1, with the global load balancer making the routing decision based on user location.

For the Professional Cloud Architect exam, the thing to lock in is the name and the use case. If a question describes a multi-region GKE deployment that needs a single global entry point with location-based routing, kubemci is the answer. It is not a general-purpose tool you would reach for on a single-cluster deployment, and it is not a replacement for kubectl or gcloud. It is a layer on top of the clusters.
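For shape recognition only, here is a sketch of kubemci's documented invocation, with hypothetical file and project names. The assumptions: clusters.yaml is a kubeconfig whose contexts cover both regional clusters, and ingress.yaml is an ordinary Kubernetes Ingress manifest shared by all of them:

```shell
# Create a global HTTP(S) load balancer spanning every cluster listed
# in the kubeconfig, fronting the Ingress defined in ingress.yaml.
kubemci create my-global-lb \
    --ingress=ingress.yaml \
    --gcp-project=my-project \
    --kubeconfig=clusters.yaml

# List the multi-cluster ingresses kubemci is managing in the project.
kubemci list --gcp-project=my-project
```

You will not need the flags on the exam; recognizing that kubemci takes one Ingress definition and fans it out across clusters is the point.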

Helm: the Kubernetes package manager

Helm is the Kubernetes package manager. It bundles the full set of resources an application needs (Deployments, Services, Ingress, ConfigMaps, anything else) into a reusable template called a Helm chart.

Without Helm, deploying a non-trivial application means applying a stack of YAML manifests by hand, one resource at a time, and remembering which ones go together. With Helm, you install a chart with a single command, and the chart's templating engine generates the right manifests for the environment you are targeting.

A chart can be versioned, shared with a team, and reused across staging and production with different values files. The same chart that runs your application in staging can run it in production with a different replica count, a different ingress hostname, and different resource limits, all driven by the values file you pass at install time.
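A sketch of that staging/production split, with hypothetical chart, release, values-file, and context names. The --kube-context flag tells Helm which cluster from your kubeconfig to target:

```shell
# Install the chart into staging with staging overrides.
helm install my-app ./my-chart -f values-staging.yaml --kube-context staging

# Same chart, different values file, into the production cluster.
helm install my-app ./my-chart -f values-prod.yaml --kube-context prod
```

The chart is the constant; the values file carries everything environment-specific (replica count, hostname, resource limits), which is exactly the reuse story the exam scenarios describe.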

For the exam, recognize Helm as the answer when the scenario involves repeatable application deployment, sharing a packaged Kubernetes app across environments, or simplifying a complex multi-resource install. It is not a tool for managing the cluster itself (that is gcloud) and it is not a tool for one-off resource changes (that is kubectl). It is the layer above kubectl, packaging the resource definitions kubectl ultimately applies.

How these three fit together

The clean mental model:

  • gcloud creates and manages the cluster as a Google Cloud resource.
  • kubectl manages workloads and resources inside the cluster.
  • Helm packages those workloads into reusable, versioned charts that kubectl applies.
  • kubemci stitches multiple clusters together behind a global load balancer.

If you can read an exam question and immediately identify which of these layers it is asking about, you have the GKE tooling material handled.

My Professional Cloud Architect course covers GKE command line tooling alongside the rest of the containers and serverless material.
