Kubernetes and Containerization for the PCA Exam

GCP Study Hub
Ben Makansi
November 29, 2025

Containers and Kubernetes show up consistently on the Professional Cloud Architect exam, and they tend to trip people up because the terminology stacks: containers, pods, nodes, clusters, deployments, control planes, node pools. None of it is hard once you can see how the pieces nest, but the exam will absolutely test whether you can keep them straight under pressure. This article walks through what you need to know about Kubernetes and containerization for the PCA exam, focused specifically on the foundational concepts before we get into Google Kubernetes Engine specifics in later articles.

What containers actually solve

A container is a lightweight, portable unit that packages an application together with all of its dependencies. The point is consistency. The application runs the same way on a developer laptop, in a staging environment, and in production, because everything it needs to run is bundled inside the container itself. This is the answer to the classic "it works on my machine" problem, and it is the reason containers became the default packaging format for modern applications.

For the PCA exam, you should be able to articulate three benefits of containerization:

  • Portability. Because the container carries its dependencies, it runs consistently across environments.
  • Scalability. You can scale specific services independently by running more containers of just that service, without touching the rest of the application.
  • Reliability. Workloads are isolated from each other, so a failure in one container does not cascade. Updates and rollbacks happen at the container level, not the whole application.

One scenario worth recognizing on the exam: when a customer is preparing to migrate to Google Cloud, a common recommendation is to containerize their microservices first. That packaging step modernizes the workload and makes the actual migration to a platform like GKE much smoother, because the application is already in a portable, consistent unit.

What Kubernetes is, and what it solves

Kubernetes, often written as K8s, is an open-source platform that automates the deployment, scaling, and management of containerized applications. Google originally built it internally to manage the billions of containers it runs every week, and then released it as open source. That origin matters for two reasons on the exam. First, Kubernetes is open source, which means workloads built on it remain portable across cloud providers and on-premises environments. Second, Google's managed offering, Google Kubernetes Engine, is built on the same upstream Kubernetes, so skills and configurations transfer.

The three problems Kubernetes was designed to solve map cleanly to exam scenarios:

  • Manual scaling. Before Kubernetes, teams provisioned and managed servers by hand to handle traffic spikes. Kubernetes automates this with the Horizontal Pod Autoscaler and the Cluster Autoscaler, which adjust the number of running pods and nodes based on current load (a minimal autoscaler manifest is sketched after this list).
  • Unreliable deployments. Inconsistent environments across servers led to deployment failures. Kubernetes uses declarative configurations, rolling updates, and self-healing mechanisms to keep deployments consistent and recover from failures automatically.
  • Lack of portability. Applications were tightly coupled to their underlying infrastructure. Containerization plus a consistent runtime environment lets the same workload run on-premises, in the cloud, or in hybrid setups.
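
To make the autoscaling point concrete, here is a minimal HorizontalPodAutoscaler manifest. The target Deployment name and the 70 percent CPU threshold are illustrative, not something the exam prescribes; the point is that Kubernetes keeps the replica count between the bounds you declare, adding or removing pods as load changes:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment    # the Deployment whose replica count is adjusted
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add pods when average CPU utilization exceeds 70%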

If you see an exam question about a customer who is struggling to scale during traffic spikes, has unreliable rollouts, or is locked into specific infrastructure, Kubernetes is almost certainly part of the answer.

Google Kubernetes Engine in one paragraph

Google Kubernetes Engine, or GKE, is Google Cloud's managed Kubernetes service. It runs, manages, and scales containerized applications on Google Cloud infrastructure without forcing you to operate the underlying Kubernetes machinery yourself. Because it uses upstream open-source Kubernetes, workloads remain compatible with other platforms. GKE simplifies scaling, updates, and maintenance, integrates natively with the rest of Google Cloud (Cloud Monitoring, Cloud Logging, IAM, and so on), and is well suited to complex microservice architectures. On the exam, GKE is the default answer when the requirement involves managed Kubernetes on Google Cloud.

Cluster, node, pod

The three foundational Kubernetes objects nest inside each other. You need to be able to define each one and explain how they relate.

  • A cluster is a collection of nodes working together to run containerized applications managed by Kubernetes. The cluster is the environment your application operates in.
  • A node is a worker machine (a virtual machine or physical server) inside the cluster. Nodes provide CPU, memory, and storage and run pods. Kubernetes can add or remove nodes to scale dynamically, and if a node fails, Kubernetes reschedules its pods onto healthy nodes.
  • A pod is the smallest deployable unit in Kubernetes. A pod contains one or more tightly coupled containers that share networking and storage. Pods typically host a single application instance or microservice. Multiple identical pods are called replicas.

The mental picture: a cluster contains multiple nodes, each node hosts multiple pods, and each pod contains one or more containers. If an exam question asks about the smallest deployable unit, the answer is a pod, not a container.
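
For reference, here is what that smallest deployable unit looks like as a manifest. This is a hypothetical single-container Pod with illustrative names; in practice you rarely create bare pods directly, because Deployments (covered below) create and manage them for you:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example-app
spec:
  containers:                # one or more containers sharing networking and storage
  - name: example-container
    image: nginx:1.21        # the container image this pod runs
    ports:
    - containerPort: 80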

Kubernetes manifests

Kubernetes is declarative. You tell it what state you want, and it figures out how to get there and keep it there. The way you express that desired state is a Kubernetes manifest, a file (usually YAML, sometimes JSON) that defines objects like pods, services, and deployments.

Here is a simple Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-container
        image: nginx:1.21
        ports:
        - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

The structure to recognize: apiVersion and kind identify what type of object this is, metadata names and labels it, and spec defines the desired state. Inside spec, you see replicas: 3 (run three instances), a selector that matches pods by label, a template describing the pods to create, and a strategy controlling how updates roll out. The manifest is the source of truth. If the actual state drifts, Kubernetes works to bring it back.
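
Manifests define the other object types mentioned earlier the same way. As a minimal sketch, a Service that routes traffic to the pods created above would select them by the same app: example-app label; the port numbers here are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example-app       # matches the pod labels from the Deployment template
  ports:
  - port: 80               # port the Service exposes inside the cluster
    targetPort: 80         # port the containers listen on
  type: ClusterIP          # internal-only; other Service types expose traffic externally

Either file is applied with kubectl apply -f, and from that point Kubernetes reconciles the cluster toward whatever the manifest declares.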

Deployments

A Deployment is a high-level resource that manages scaling, rolling updates, and high availability for application instances. Deployments are ideal for stateless services, and the standard pattern is one Deployment per service.

Key things you specify in a Deployment:

  • Number of pod replicas. Kubernetes maintains this count, scaling up or down to match the desired state.
  • Container image. Defines the version of the application to deploy. Rolling out a new version or rolling back to an old one is just a manifest change.
  • imagePullPolicy. Setting imagePullPolicy: IfNotPresent tells Kubernetes to use a locally cached image on the node when available, avoiding unnecessary registry pulls (see the fragment after this list).
  • Ports. The ports the application uses for communication.
  • Update strategy. Most commonly a rolling update, which replaces old pods with new ones gradually to minimize downtime.
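
To show where the imagePullPolicy and ports items above actually live, here is the container section of a Deployment's pod template, reusing the illustrative names and image tag from the earlier manifest:

    spec:
      containers:
      - name: example-container
        image: nginx:1.21
        imagePullPolicy: IfNotPresent   # reuse a locally cached image when one exists on the node
        ports:
        - containerPort: 80             # port the application listens on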

Under the hood, a Deployment manages a ReplicaSet, and the ReplicaSet is what actually creates and scales the pod replicas. You generally interact with the Deployment, not the ReplicaSet directly.

The control plane

Every Kubernetes cluster has a control plane, which is the central management layer. It coordinates everything, manages workloads, and keeps the cluster's actual state aligned with its desired state. On the exam, you should know its three main components:

  • API Server. The entry point for every Kubernetes command and API request. Users, kubectl, and other components all talk to the cluster through the API server.
  • Scheduler. Assigns pods to nodes based on resource availability and constraints, ensuring efficient resource utilization (the resource requests shown in the fragment after this list are its main input).
  • Controller Manager. Watches the cluster's state and makes adjustments to keep the actual state matching the desired state. Scaling workloads and maintaining ReplicaSets fall under this.
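
The "resource availability and constraints" the Scheduler works from come largely from the resource requests declared in the pod spec. A hedged example with illustrative values: this fragment asks for a quarter of a vCPU and 128 MiB of memory, and the Scheduler will only place the pod on a node with that much unallocated capacity:

    spec:
      containers:
      - name: example-container
        image: nginx:1.21
        resources:
          requests:
            cpu: "250m"      # a quarter of a vCPU; the Scheduler uses requests for placement
            memory: "128Mi"
          limits:
            cpu: "500m"      # hard ceilings enforced at runtime, not used for scheduling
            memory: "256Mi"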

In GKE, Google manages the control plane for you. Private control planes come up later when we get into networking and security topics on the PCA exam, but at this point you only need to know what the control plane is and what its three components do.

Node pools

Within a single cluster, you can create multiple node pools, where each pool is a group of nodes that share a machine type. This is how you tailor resources to the workload running on them.

A typical example: one node pool of compute-optimized nodes for processing-heavy tasks like video encoding or scientific calculations, and another pool of standard nodes for general-purpose services. By organizing nodes into pools and assigning workloads to the right pool, you get both performance optimization and cost efficiency, because each workload runs on appropriately sized machines.
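
In GKE, steering a workload onto a specific pool is typically done with a node selector on the pool label, since GKE attaches the cloud.google.com/gke-nodepool label to every node automatically. A minimal sketch, assuming a hypothetical pool named compute-pool and an illustrative image:

    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: compute-pool   # schedule only onto nodes in this pool
      containers:
      - name: encoder
        image: example.com/video-encoder:1.0          # illustrative processing-heavy workload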

Node pools are a frequent exam topic when the question involves matching workloads to machine types within a single cluster, or when it asks about cost or performance optimization.

What to lock in for the PCA exam

The Professional Cloud Architect exam will not ask you to write a manifest from scratch, but it absolutely expects you to recognize and reason about the concepts above. The minimum you should be able to do without thinking:

  • Define container, pod, node, and cluster, and explain how they nest.
  • Explain the three benefits of containerization (portability, scalability, reliability).
  • Identify Kubernetes as the answer when a scenario involves manual scaling pain, unreliable deployments, or infrastructure lock-in.
  • Recognize a Deployment manifest and identify replicas, image, and update strategy at a glance.
  • Name the three control plane components and what each does.
  • Recognize when node pools are the right answer for tailoring machine types within a cluster.

Once these are solid, the GKE-specific material that builds on top of them, such as Autopilot vs Standard, private clusters, networking, and Workload Identity, gets much easier to absorb.

My Professional Cloud Architect course covers Kubernetes and containerization alongside the rest of the containers and serverless material.
