
Cloud Monitoring shows up on the Professional Data Engineer exam in a handful of predictable shapes. You will see a scenario where a pipeline is running slow and the question asks which metric to investigate, or you will see a scenario where a team has five projects and needs one pane of glass. The right answers come from understanding the three buckets of metrics, how alerting policies route notifications, and the workspace pattern that lets a single project monitor many.
In this post I want to walk through what you actually need to know about Cloud Monitoring for the Professional Data Engineer exam, without dragging in everything the product can do operationally. The exam stays close to a few core ideas.
Cloud Monitoring is the metrics half of the Cloud Observability suite. Cloud Logging is the logs half. They share the same console, the same IAM surface, and the same project model. If you remember the old name, Stackdriver Monitoring, that is the same product. Anything on the exam written as Stackdriver is referring to what is now called Cloud Monitoring or Cloud Logging.
The job of Cloud Monitoring is to collect, analyze, and visualize metrics from your Google Cloud resources so you can keep track of the health and performance of your applications and infrastructure. It supports real-time metrics, alerting on those metrics, and custom dashboards. For a data engineer that means watching Dataflow worker CPU, BigQuery slot utilization, Pub/Sub backlog age, and Cloud Storage request rates without writing your own scraper.
The exam likes to test whether you know which metric source fits which scenario. There are three buckets: built-in Google Cloud metrics that services like BigQuery, Dataflow, and Pub/Sub emit automatically; agent metrics collected by the Ops Agent running on your Compute Engine VMs; and custom metrics that your own application code writes through the Monitoring API.
One rule of thumb that the course leans on and that lines up with the exam: use built-in metrics where possible. They are free in the sense that you do not have to instrument anything, and they are tuned to the Google Cloud services they describe. Reach for custom metrics only when no built-in signal answers the question you actually have.
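One detail worth memorizing with this rule of thumb: built-in metric types live under the emitting service's domain, while custom metrics must sit under the `custom.googleapis.com/` prefix. A minimal sketch of that naming split (pure Python, no API calls; the helper name and the pipeline metric are mine, not the product's):

```python
# Built-in metric types are namespaced under the service that emits them,
# e.g. compute.googleapis.com/...; custom metrics must be written under
# the custom.googleapis.com/ prefix. Helper name is illustrative.

def is_custom_metric(metric_type: str) -> bool:
    """True when a metric type string belongs to the custom-metric namespace."""
    return metric_type.startswith("custom.googleapis.com/")

# A real built-in Compute Engine metric versus a hypothetical pipeline metric.
print(is_custom_metric("compute.googleapis.com/instance/cpu/utilization"))   # False
print(is_custom_metric("custom.googleapis.com/pipeline/records_processed"))  # True
```

If an exam option involves writing to a metric type that does not start with `custom.googleapis.com/`, that option is describing a built-in signal, not instrumentation you would add yourself.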
Real-time alerts can be configured on any metric you collect. The pattern is always the same: pick a metric, define a condition (threshold, rate of change, absence), and attach one or more notification channels.
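The three condition types can be illustrated with a small self-contained sketch (pure Python, nothing here calls the Cloud Monitoring API; the function names and the sample backlog series are mine):

```python
from typing import List, Optional

def threshold_breached(samples: List[float], limit: float) -> bool:
    """Threshold condition: fire when the latest sample exceeds the limit."""
    return bool(samples) and samples[-1] > limit

def rate_of_change_breached(samples: List[float], max_delta: float) -> bool:
    """Rate-of-change condition: fire when consecutive samples jump too fast."""
    return any(abs(b - a) > max_delta for a, b in zip(samples, samples[1:]))

def absence_breached(samples: List[Optional[float]], window: int) -> bool:
    """Absence condition: fire when the last `window` samples are all missing."""
    recent = samples[-window:]
    return len(recent) == window and all(s is None for s in recent)

# A Pub/Sub backlog-age series in seconds: steady, then a sudden spike.
backlog = [30.0, 35.0, 40.0, 600.0]
print(threshold_breached(backlog, limit=300.0))          # True: latest sample is 600s
print(rate_of_change_breached(backlog, max_delta=100.0)) # True: the 40 -> 600 jump
print(absence_breached([10.0, None, None, None], 3))     # True: three missing points
```

Absence conditions are the one candidates tend to forget: a pipeline that dies quietly stops emitting metrics, so "no data" is itself the alert signal.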
Useful examples to keep in your head for the exam:

- A Dataflow pipeline is falling behind: alert when worker CPU utilization stays above a threshold.
- A Pub/Sub subscription is backing up: alert when the oldest unacked message age keeps climbing.
- A BigQuery reservation is saturated: alert when slot utilization sits at its ceiling during business hours.
- A pipeline has died quietly: alert on the absence of a metric it normally emits.
Notification channels are where teams plug into their existing incident process. The supported types you should recognize are email, SMS, and third-party integrations like Slack and PagerDuty. If a question describes a team that already runs on PagerDuty rotations, the answer is to point the Cloud Monitoring alerting policy at a PagerDuty notification channel, not to build something new.
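The whole pick-a-metric, define-a-condition, attach-channels shape fits in a few lines. A sketch of the structure (pure Python, no API calls; the class, metric name, and channel labels are illustrative, not real Cloud Monitoring identifiers):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AlertingPolicy:
    """Metric + condition + notification channels: the shape of every alert."""
    metric: str
    condition: Callable[[float], bool]
    channels: List[str] = field(default_factory=list)

    def evaluate(self, value: float) -> List[str]:
        # When the condition fires, fan out to every attached channel.
        return list(self.channels) if self.condition(value) else []

# A team already on PagerDuty rotations points the policy at that channel.
policy = AlertingPolicy(
    metric="pubsub_oldest_unacked_message_age_seconds",  # illustrative name
    condition=lambda age: age > 300,                     # threshold condition
    channels=["pagerduty", "slack"],
)
print(policy.evaluate(600))  # ['pagerduty', 'slack']
print(policy.evaluate(30))   # []
```

The point the exam is probing: the policy stays the same regardless of where notifications land, so "integrate with the existing tool" is almost always a channel-configuration answer, not an architecture answer.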
This is the slide that hides the most exam value. In most real Google Cloud organizations, workloads are split across many projects, often one per environment or one per team. The exam will give you a setup with several departments inside one company, each using different projects, and ask how to give leadership a single dashboard.
The answer is the Cloud Monitoring workspace pattern. You designate one project as the primary, set up a Cloud Monitoring workspace there, and link the other projects to it through the Google Cloud Console. From that point on, dashboards, alerting policies, and uptime checks in the primary workspace can reference metrics from any of the linked projects.
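The key property of the pattern is that linking references metrics rather than moving them. A hypothetical in-memory model (pure Python; the project names, metric name, and class are made up for illustration):

```python
# Hypothetical model of the workspace pattern: one primary project whose
# dashboards can read metrics from every linked project. Nothing is copied;
# the workspace only references data that stays in the source projects.
metrics_by_project = {
    "team-a-prod":   {"dataflow_worker_cpu": 0.62},
    "team-b-prod":   {"dataflow_worker_cpu": 0.91},
    "analytics-dev": {"dataflow_worker_cpu": 0.17},
}

class MonitoringWorkspace:
    def __init__(self, primary: str):
        self.primary = primary
        self.linked = {primary}

    def link(self, project: str) -> None:
        """Linking a project grants the workspace read access to its metrics."""
        self.linked.add(project)

    def dashboard_view(self, metric: str) -> dict:
        # A dashboard in the primary project queries across all linked projects.
        return {p: metrics_by_project[p][metric]
                for p in sorted(self.linked) if p in metrics_by_project}

ws = MonitoringWorkspace("ops-central")
for project in metrics_by_project:
    ws.link(project)
print(ws.dashboard_view("dataflow_worker_cpu"))
```

One workspace, three projects, one dashboard: that is the leadership single-pane-of-glass answer the exam is looking for.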
A few things to know about this pattern when an exam question pushes on it:

- Linking does not copy or export any data. The metrics stay in the projects that produced them; the primary workspace only gains read access across project boundaries.
- Dashboards, alerting policies, and uptime checks are created in the primary project, so that is where you grant viewers IAM access.
- Unlinking a project removes it from the workspace's view but does not delete its metrics.
If a Professional Data Engineer scenario asks about giving an analytics team a cross-environment dashboard, the right move is almost always to link the projects into one monitoring workspace, not to export metrics elsewhere or stand up a third-party tool.
For exam day, anchor on four things. One, Cloud Monitoring and Cloud Logging are the two halves of the Cloud Observability suite, and Stackdriver is the old name. Two, metrics come in three flavors, with built-in being the default. Three, alerts run off any metric and route to email, SMS, or third-party tools like Slack and PagerDuty. Four, multi-project visibility comes from a primary monitoring workspace with the other projects linked in.
My Professional Data Engineer course covers Cloud Monitoring alongside Cloud Logging, IAM, and the other operations topics that show up across the exam.