GKE Specialized Workloads and Observability for the PCA Exam

GCP Study Hub
Ben Makansi
April 8, 2026

The Professional Cloud Architect exam expects you to know how GKE handles workloads that don't fit the standard stateless deployment pattern, and how observability is wired in by default. The two specialized workload types that come up are StatefulSet and DaemonSet, and the observability piece is mostly about understanding what Cloud Logging and Cloud Monitoring give you out of the box.

StatefulSet for stateful applications

A standard Kubernetes Deployment treats every pod as identical and interchangeable. That's fine for a stateless web service where any replica can handle any request, but it falls apart when you're running something like a database. Databases need stable identities, persistent storage that survives pod rescheduling, and predictable startup ordering.

StatefulSet is the Kubernetes object built for this. Each pod managed by a StatefulSet gets a unique, stable identifier and a consistent storage volume that persists across rescheduling. If a pod is terminated and rescheduled to a different node, it reattaches to the same persistent disk and resumes with its prior state intact. That's the property databases and other stateful systems need.

The mental model I want you to hold is: pod plus persistent disk plus stable identity. Even if the pod itself moves around or gets restarted, the storage volume and identity stay bound to that specific pod slot. That's what makes a StatefulSet usable for things like a sharded database where each shard needs to know which one it is.
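That pod-plus-disk-plus-identity model maps directly onto the manifest. Here's a minimal sketch of a StatefulSet for a hypothetical sharded database (the name `shard-db`, the image, and the mount path are all placeholders, not a real product):

```yaml
# Hypothetical sharded-db example: each replica gets a stable name
# (shard-db-0, shard-db-1, ...) and its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: shard-db
spec:
  serviceName: shard-db        # headless Service that provides stable per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: shard-db
  template:
    metadata:
      labels:
        app: shard-db
    spec:
      containers:
        - name: db
          image: example.com/shard-db:1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:        # one persistent disk per pod slot, reattached on reschedule
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section is the part a Deployment doesn't have: it stamps out one PersistentVolumeClaim per pod slot, so `shard-db-1` always comes back with `shard-db-1`'s disk, even after rescheduling to a different node.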

DaemonSet for per-node workloads

DaemonSet solves a different problem. Sometimes you need a specific pod to run on every node in your cluster, not just somewhere in the cluster. The classic examples are log collectors and monitoring agents. If you're shipping logs from each node, you need the log shipper running on every node, not load-balanced across a few of them.

A DaemonSet ensures a copy of a specified pod runs on every node, or on a selected subset of nodes if you configure node selectors. The important part for the Professional Cloud Architect exam is the automatic handling: when you add a new node to the cluster, the DaemonSet automatically schedules its pod onto that new node. When you remove a node, the pod goes away with it. You don't manage the per-node placement yourself.
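As a sketch, a per-node log shipper looks like this (the `log-shipper` name, image, and node label are hypothetical; the `nodeSelector` block is the optional subset mechanism mentioned above):

```yaml
# Hypothetical log-shipper example: one pod per node, no replica count.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      nodeSelector:            # optional: run only on nodes carrying this label
        logging: "enabled"
      containers:
        - name: shipper
          image: example.com/log-shipper:1.0   # placeholder image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log     # mount the node's own log directory into the pod
```

Notice there's no `replicas` field: the node count is the replica count, which is exactly the automatic-scaling behavior the exam cares about.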

So the contrast is clean. StatefulSet gives you stable identity and storage for stateful applications. DaemonSet gives you uniform per-node coverage for cluster-wide infrastructure pods.

Native Cloud Logging and Cloud Monitoring integration

GKE integrates natively with Cloud Logging and Cloud Monitoring. There's no agent to install, no additional configuration to write. When you create a cluster, logs and metrics from your containers, nodes, and the control plane are collected automatically and made available in the Cloud Logging and Cloud Monitoring consoles.

Cloud Logging aggregates logs from every container running in the cluster. You can filter by service, by container, by pod, by namespace. When you're diagnosing a pod crash or an application error, this is where you go first. The aggregation across containers is the value: you don't have to know which node a pod ran on to find its logs.
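In the Logs Explorer, that filtering is expressed with the Logging query language against the `k8s_container` resource type. A sketch of a query narrowing down to one pod's errors (the `CLUSTER_NAME`, `NAMESPACE`, and `POD_NAME` values are placeholders you'd fill in):

```
resource.type="k8s_container"
resource.labels.cluster_name="CLUSTER_NAME"
resource.labels.namespace_name="NAMESPACE"
resource.labels.pod_name="POD_NAME"
severity>=ERROR
```

The point of the query shape is the one made above: nothing in it names a node. The resource labels identify the workload, and Cloud Logging finds the entries wherever the pod actually ran.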

Disabling logs and managing ingestion

You don't always want every log from every container flowing into Cloud Logging. Log ingestion has cost implications, and some containers produce noisy output you don't care about.

There are two levels of control. At the cluster level, you can disable the Cloud Logging integration entirely in the cluster settings, which turns off log collection for the whole cluster. More commonly, you'll keep cluster-level logging on and suppress specific sources instead: in Cloud Logging's ingestion settings you add an exclusion filter that matches the specific GKE container resource you want to silence, so those entries are dropped before they're ingested (and before they're billed).
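Both knobs can be sketched with gcloud. These are hedged examples, not a definitive runbook: the cluster name and the `chatty-sidecar` container name are placeholders, and `_Default` is the built-in sink that normally receives workload logs.

```
# Cluster level: turn off the Cloud Logging integration entirely
# (CLUSTER_NAME and --location value are placeholders).
gcloud container clusters update CLUSTER_NAME \
  --location=us-central1-a \
  --logging=NONE

# Per-container level: keep cluster logging on, but add an exclusion
# filter to the _Default sink for one noisy container.
gcloud logging sinks update _Default \
  --add-exclusion=name=mute-chatty-sidecar,filter='resource.type="k8s_container" AND resource.labels.container_name="chatty-sidecar"'
```

The second command is the one that matches the exam answer pattern: one workload goes quiet, everything else in the cluster keeps logging.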

Knowing both knobs exist matters for the Professional Cloud Architect exam. If a question asks how to reduce log volume from one chatty workload without losing visibility into the rest of the cluster, the answer is per-container ingestion exclusion in Cloud Logging, not turning off the cluster integration.

Extra metrics with Observability for GKE

Beyond the default Cloud Logging and Cloud Monitoring integration, you can enable Observability for GKE on a cluster. This gives you additional metrics around pod health, CPU and memory usage at the pod level, and network performance. These are the metrics you need when you're optimizing resource utilization or trying to right-size a cluster.

The default integration covers the basics. Observability for GKE is the opt-in tier when you want deeper signals into how your workloads are actually behaving inside the cluster.
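Enabling the deeper metric packages can be sketched as a cluster update. This is an assumption-heavy example: the cluster name is a placeholder, and the available component names for `--monitoring` vary by gcloud version, so check `gcloud container clusters update --help` for the current list before relying on these values.

```
# Hedged sketch: opt in to additional workload-level metric components
# on an existing cluster (component names may differ by gcloud version).
gcloud container clusters update CLUSTER_NAME \
  --location=us-central1-a \
  --monitoring=SYSTEM,POD,STATEFULSET,DAEMONSET
```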

My Professional Cloud Architect course covers GKE specialized workloads and observability alongside the rest of the containers and serverless material.
