If you are studying for the Associate Cloud Engineer exam, the three terms you absolutely have to be solid on are pod, node, and cluster. Almost every GKE question on the ACE exam assumes you know what each one is and how they relate. This article covers exactly that, in the order that makes the relationships clearest.
It does not cover every Kubernetes object type, the internals of the kubelet, or the control plane components in detail. The goal is to give you the three core concepts and the relationships between them, which is what the ACE exam actually tests.
The cleanest way to understand pods, nodes, and clusters is to start small and build up.
A container is a single packaged application. It has its own filesystem, its own dependencies, its own process. Think of a container as one running piece of software.
A pod is the smallest deployable unit in Kubernetes. A pod contains one or more containers that share a network namespace and storage volumes. In practice, most pods have a single container in them. The cases where a pod has multiple containers are usually a main container plus a helper, like a logging agent or a service mesh sidecar. The important thing is that Kubernetes does not schedule containers directly. It schedules pods. Containers come along for the ride.
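The main-plus-helper pattern can be sketched as a pod manifest. This is an illustrative example, not from the exam: the names and images (an nginx main container and a fluent-bit logging sidecar) are stand-ins for whatever your workload actually runs.

```yaml
# Illustrative pod manifest: one main container plus a logging
# sidecar. Names and images are examples, not a recommendation.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  containers:
    - name: web                      # the main application container
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-agent                # helper container that shares the
      image: fluent/fluent-bit:2.2   # pod's network namespace and volumes
```

Both containers are scheduled together as one unit and can reach each other over `localhost`, because the pod, not the container, is what the scheduler places.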
A node is a worker machine. It can be a virtual machine or a physical server. In GKE, nodes are Compute Engine VMs. A node provides CPU, memory, and storage for the pods that get scheduled onto it. Each node runs multiple pods. If a node fails, Kubernetes reschedules its pods onto other healthy nodes in the cluster.
A cluster is the whole thing: a collection of nodes that work together, managed as a single system. The cluster has a control plane (which schedules pods, stores cluster state, and responds to changes) and a data plane (the nodes that actually run your workloads). In GKE, Google manages the control plane for you; the nodes are what you size and pay attention to.
So the chain is: containers go in pods, pods run on nodes, nodes belong to a cluster.
The relationships are what the exam actually tests, more than the individual definitions.
A pod can have multiple containers, but those containers always run on the same node. They cannot be split across nodes. They share the pod's network and storage.
A node typically runs multiple pods. The number depends on the node's size and the pods' resource requests. If a node fails, its pods are not gone forever. The cluster reschedules them onto other nodes.
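The "resource requests" part is concrete in the pod spec. A hedged sketch, with illustrative values: the `requests` block is what the scheduler reserves on a node, and a pod only lands on a node with enough unreserved capacity.

```yaml
# Illustrative: requests tell the scheduler how much CPU and memory
# to reserve for this pod on whichever node it is placed on.
apiVersion: v1
kind: Pod
metadata:
  name: request-example
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # a quarter of one vCPU
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```

Requests-wise, a node with four allocatable vCPUs could hold roughly sixteen pods like this before the scheduler stops placing more on it.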
A cluster typically has multiple nodes. If you have just one node, you have a degenerate cluster, and a node failure is a cluster failure. Real clusters have at least a few nodes for resilience.
Pods are ephemeral. Pods get killed and replaced all the time, by deployments, by autoscaling, by node failures. If you need persistent identity or persistent storage that survives a pod restart, you use Kubernetes objects like StatefulSets and PersistentVolumes. Pods themselves are disposable by design.
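For workloads that do need stable identity and storage that outlives any single pod, a StatefulSet with a volume claim template is the usual shape. A minimal sketch, with illustrative names and sizes:

```yaml
# Illustrative StatefulSet: each replica gets a stable name (db-0,
# db-1, ...) and its own PersistentVolumeClaim, which survives pod
# restarts and rescheduling onto other nodes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The pod named `db-0` can be killed and replaced freely; its replacement gets the same name and reattaches the same volume, which is exactly the persistence a bare pod cannot give you.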
It is worth understanding why this layered structure exists, because it makes the exam questions easier to reason about.
Kubernetes was originally built at Google to manage containerized workloads at massive scale. The three problems it was designed to solve are manual scaling, unreliable deployments, and lack of portability. The pod-node-cluster structure is what makes those solutions possible. Pods are abstract enough that Kubernetes can move them around. Nodes are interchangeable, so a failed node does not bring down the application. The cluster as a whole presents a single control plane regardless of how many nodes are underneath.
A few question patterns come up repeatedly on the exam.
The first is direct definition questions: "what is the smallest deployable unit in Kubernetes" or "what is a worker machine in a Kubernetes cluster." The answer is pod for the first and node for the second.
The second pattern is relationship questions. A scenario describes a node failure and asks what happens to the pods on that node. The answer is that they are rescheduled to other healthy nodes by the cluster's control plane. A scenario describes adding capacity to a cluster and asks what was actually changed. The answer is that nodes were added (or scaled by the cluster autoscaler).
The third pattern is about the difference between pods and containers. The exam sometimes tests whether you know that you do not run a single container as your unit of deployment in Kubernetes. You always run a pod, and that pod contains one or more containers.
If you see a question about "smallest deployable unit," think pod. If you see "worker machine" in a Kubernetes context, think node. If you see "collection of nodes managed together," think cluster.
Containers go in pods. Pods run on nodes. Nodes belong to a cluster. That is the structure, and it is the foundation for everything else GKE-related on the Associate Cloud Engineer exam. If you have this clear, every other GKE topic you study will make more sense.
The exam tests these terms both directly and as building blocks for more complex questions about scheduling, scaling, and reliability. Get this layer right and the rest gets much easier.
My Associate Cloud Engineer course covers pods, nodes, and clusters in the Kubernetes foundations section, then builds on them through deployments, services, autoscaling, and the rest of the GKE topics on the ACE exam.