Pub/Sub vs Apache Kafka and Other Message Buffers for the PCA Exam

GCP Study Hub
Ben Makansi
November 17, 2025

Pub/Sub shows up across the Professional Cloud Architect exam as the default messaging buffer for almost every streaming or decoupling scenario, and Google often frames it against open-source alternatives like Apache Kafka. If you understand what role Pub/Sub plays in an architecture and which other message buffers it tends to replace, you can answer most of these questions quickly.

What Pub/Sub actually is

Cloud Pub/Sub is a global-scale messaging buffer. It sits between systems that produce data and systems that consume data, decoupling the two so that neither side has to know anything about the other. A publisher drops a message onto a topic, every subscription attached to that topic gets a copy, and subscribers pull or receive messages from their subscription later. The producer does not wait on the consumer, and the consumer does not need the producer to be online.

That decoupling is the whole point. When two systems talk to each other directly, a slowdown or outage on one side propagates to the other. When you put Pub/Sub in the middle, the producer keeps publishing into the buffer and the consumer drains it at whatever pace it can manage. The systems become loosely coupled, and the architecture becomes more reliable.
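That buffered, decoupled relationship can be sketched with a toy in-memory model. This is an illustration of the pattern only, not the real google-cloud-pubsub client API; the `Topic` class and `drain` helper are invented names for the sketch.

```python
import queue

# Toy in-memory stand-in for a Pub/Sub topic: each attached
# subscription holds its own buffered copy of every message.
class Topic:
    def __init__(self):
        self.subscriptions = {}

    def create_subscription(self, name):
        self.subscriptions[name] = queue.Queue()

    def publish(self, data: bytes):
        # The publisher returns immediately; no consumer needs to be
        # online. Messages wait in each subscription's buffer.
        for buf in self.subscriptions.values():
            buf.put(data)

def drain(topic, name):
    # A subscriber pulls at its own pace, whenever it comes online.
    buf = topic.subscriptions[name]
    out = []
    while not buf.empty():
        out.append(buf.get())
    return out

topic = Topic()
topic.create_subscription("dataflow-sub")
topic.create_subscription("archive-sub")

# Producer publishes while no consumer is running.
for i in range(3):
    topic.publish(f"event-{i}".encode())

# Later, each consumer drains its own copy independently.
print(drain(topic, "dataflow-sub"))  # [b'event-0', b'event-1', b'event-2']
print(drain(topic, "archive-sub"))   # same three messages, untouched by the other consumer
```

A slow or offline consumer only delays its own subscription's backlog; the producer and the other consumers never notice, which is exactly the loose coupling the exam scenarios are getting at.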

Pub/Sub is also serverless and fully managed. There are no brokers to provision, no clusters to size, no partitions to balance. Google handles all of that. For exam purposes, treat Pub/Sub as a no-ops service. If a question describes a team that does not want to manage messaging infrastructure, Pub/Sub is almost always the right answer.

Pub/Sub vs Apache Kafka

Pub/Sub is GCP's managed counterpart to Apache Kafka. The functionality overlaps heavily. Both are distributed messaging systems built around topics with decoupled producers and consumers, both buffer streaming data, and both are used as the entry point for real-time pipelines. The difference is operational. Kafka is open source, which means someone has to run the cluster, manage brokers, handle scaling, and deal with failures. Pub/Sub hands all of that to Google.

On the Professional Cloud Architect exam, this distinction usually appears as a migration question. A company is running Kafka on-prem or on VMs and wants to reduce operational overhead, or they want global scale without sharding clusters across regions. The answer is to move to Pub/Sub. The exam is not asking you to weigh the merits of Kafka in detail. It is asking you to recognize that managed beats self-managed when the customer's stated goal is reducing ops burden or scaling more easily.

Common use cases

The most common use case for Pub/Sub is data ingestion. You have streams of data coming in from many sources, and you need somewhere to land them before processing. Pub/Sub absorbs the firehose, buffers it, and lets downstream systems consume at their own pace. This pattern shows up everywhere: IoT telemetry, application logs, click events, transactional data feeds.

The second pattern is connecting Pub/Sub to pipeline services. Dataflow is the usual partner. Pub/Sub collects and buffers the data, then Dataflow pulls from the subscription and runs streaming transformations or aggregations. The combination of Pub/Sub plus Dataflow is the canonical GCP streaming pipeline, and you should expect to see it as the right answer whenever a question describes real-time ingestion followed by transformation.
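The shape of that pipeline can be simulated in a few lines of plain Python. In real GCP this is an Apache Beam job running on Dataflow that reads from a subscription; here the buffer is just a list of raw messages, and the transform and aggregation function names are invented for the sketch.

```python
import json
from collections import defaultdict

# Stand-in for messages waiting in a Pub/Sub subscription:
# raw bytes published by upstream producers.
buffer = [
    json.dumps({"device": "sensor-1", "temp": 21.0}).encode(),
    json.dumps({"device": "sensor-2", "temp": 19.5}).encode(),
    json.dumps({"device": "sensor-1", "temp": 23.0}).encode(),
]

def transform(message: bytes) -> tuple[str, float]:
    # Parse and reshape each raw message (the "Dataflow transform" step).
    event = json.loads(message)
    return event["device"], event["temp"]

def aggregate(pairs):
    # Per-key mean, standing in for a windowed streaming aggregation.
    sums, counts = defaultdict(float), defaultdict(int)
    for key, value in pairs:
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

result = aggregate(transform(m) for m in buffer)
print(result)  # {'sensor-1': 22.0, 'sensor-2': 19.5}
```

Pub/Sub's job ends at the buffer; everything after it — parsing, reshaping, aggregating — is the downstream pipeline service's work, which is why exam answers pair the two rather than picking one.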

Other message buffers to recognize

There are plenty of message buffers in the wild outside the Google ecosystem. Amazon SQS, Redis Pub/Sub, Apache ActiveMQ, and RabbitMQ all show up in production architectures. They serve similar roles to Pub/Sub in their respective environments.

The reason to know these names for the Professional Cloud Architect exam is pattern recognition. If a question mentions any of them, there is a good chance the correct answer involves transitioning to or integrating with Cloud Pub/Sub. The exam is not testing your knowledge of RabbitMQ internals. It is testing whether you recognize that when a customer is using one of these technologies and migrating to GCP, Pub/Sub is the natural replacement.

This is the same logic that applies to most managed-service questions on the exam. Google has a service that fills the same role, and the GCP-native solution is almost always the preferred answer when a customer is moving workloads onto the platform.

How to read these questions on the exam

When you see a scenario involving streaming data, decoupling producers from consumers, or replacing an existing messaging system, your first instinct should be Pub/Sub. Look for keywords around buffering, ingestion, decoupling, or any of the alternative messaging technologies. If the question emphasizes reducing operational overhead, global scale, or no-ops, the answer is Pub/Sub almost without exception.

The trickier questions ask you to pair Pub/Sub with the right downstream service. For real-time transformations, that is Dataflow. For storage of the raw stream, it might be Cloud Storage or BigQuery via a Dataflow pipeline. For triggering serverless functions on each message, Cloud Functions or Cloud Run. Pub/Sub itself is rarely the full answer. It is the buffer in the middle of a larger pipeline.

My Professional Cloud Architect course covers Pub/Sub and message buffer patterns alongside the rest of the messaging and pipelines material.
