
Cloud Composer questions on the Professional Data Engineer exam almost always come down to one capability: managing dependencies between steps that run on different GCP services. The exam will give you a scenario where a Dataflow job needs to wait for a BigQuery load, or where Vertex AI training has to kick off only after fresh data lands in a table, and you need to know which Airflow operators wire those pieces together. This article walks through the operator vocabulary I expect every PDE candidate to have memorized before exam day.
Cloud Composer is managed Apache Airflow, and that foundation matters because Airflow is what gives you the dependency graph. When a question describes a multi-step pipeline where step B must not start until step A succeeds, and step C runs only if both A and B finished cleanly, you are looking at a DAG. Cloud Scheduler can fire a single job on a cron. Workflows can chain a small number of HTTP calls. Neither one was designed to express dependencies between services with retries, branching, and backfills, which is exactly the territory Composer owns.
If a PDE question mentions phrases like orchestrate, complex dependencies, retry policy across services, or end-to-end pipeline with multiple stages, the answer is almost always Cloud Composer. Lock that pattern in.
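To make the dependency idea concrete, here is a minimal sketch of the "C waits for both A and B" shape in Airflow. The DAG id, task ids, and schedule are illustrative placeholders, not anything the exam specifies:

```python
# Minimal sketch: "step C runs only if both A and B finished cleanly".
# DAG id, task ids, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="dependency_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    step_a = EmptyOperator(task_id="step_a")
    step_b = EmptyOperator(task_id="step_b")
    step_c = EmptyOperator(task_id="step_c")

    # Fan-in: Airflow will not start C until both A and B succeed.
    [step_a, step_b] >> step_c
```

The `>>` operator is the whole point: it is how a DAG file declares "do not start this until that succeeded", which no cron-based tool expresses.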
BigQuery is the most common destination in PDE pipelines, so the BigQuery operators show up everywhere. The two you should recognize on sight are BigQueryInsertJobOperator (the modern way to run SQL) and BigQueryCheckOperator (a guard that fails the task if a query returns an empty or zero result, useful for data quality gates).
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

run_aggregation = BigQueryInsertJobOperator(
    task_id="daily_aggregation",
    configuration={
        "query": {
            "query": "INSERT INTO analytics.daily_metrics SELECT ... FROM raw.events",
            "useLegacySql": False,
        }
    },
)
If you see older study material referencing BigQueryOperator, that is the legacy name for the same idea. The exam tends to phrase questions in terms of the capability rather than the exact class name, so the thing to internalize is that Composer runs SQL on BigQuery as a step in a DAG.
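The other operator named above, BigQueryCheckOperator, is worth seeing once too. A hedged sketch of a data-quality gate, with an illustrative table and query: the task fails, and blocks everything downstream, if the first value of the first result row is empty, NULL, or zero:

```python
# Data-quality gate sketch: fails the task (and halts downstream tasks)
# when the check query returns an empty/zero result. Table name is illustrative.
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

check_rows_loaded = BigQueryCheckOperator(
    task_id="check_rows_loaded",
    sql="SELECT COUNT(*) FROM raw.events WHERE ingest_date = '{{ ds }}'",
    use_legacy_sql=False,
)
```

Placed between a load and a transform, this is how a DAG answers the exam phrasing "the pipeline must not proceed if the load produced no rows".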
A very common pipeline shape is files land in GCS, get loaded into BigQuery on a schedule, and trigger downstream processing. The operator that does the middle step is GCSToBigQueryOperator. The exam loves this pattern because it tests whether you reach for Composer instead of writing a Cloud Function that polls a bucket.
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_events = GCSToBigQueryOperator(
    task_id="load_events",
    bucket="raw-events-prod",
    source_objects=["events/{{ ds }}/*.json"],
    destination_project_dataset_table="raw.events${{ ds_nodash }}",
    source_format="NEWLINE_DELIMITED_JSON",
    write_disposition="WRITE_TRUNCATE",
)
Notice the Jinja templating with {{ ds }}. Composer fills in the logical execution date at runtime, which is how Airflow handles partitioned ingestion and backfills cleanly. That templating is a recurring PDE tell that something is an Airflow-shaped problem.
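You can see what the templating does without an Airflow install at all, since Airflow uses plain Jinja underneath. A quick sketch, with an arbitrary example date standing in for the logical execution date Airflow would supply:

```python
# What Airflow's templating does at runtime: {{ ds }} becomes the logical
# date as YYYY-MM-DD, {{ ds_nodash }} as YYYYMMDD. The date is an example.
from jinja2 import Template

context = {"ds": "2024-01-15", "ds_nodash": "20240115"}

source_object = Template("events/{{ ds }}/*.json").render(context)
destination = Template("raw.events${{ ds_nodash }}").render(context)

print(source_object)  # events/2024-01-15/*.json
print(destination)    # raw.events$20240115
```

The `$20240115` suffix is BigQuery's partition decorator, so each run writes exactly one date partition, which is why backfills are safe to re-run.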
Dataflow is where most heavy transformation happens in a Google-native data stack, and Composer is how you schedule it. There are two flavors of operator worth knowing: the template launchers, DataflowTemplatedJobStartOperator for classic templates and DataflowStartFlexTemplateOperator for Flex Templates, and the Beam operators such as BeamRunPythonPipelineOperator for running a pipeline you ship as code on the Dataflow runner.
The exam scenario to anchor on is the classic load then transform chain: load_events >> run_dataflow >> run_aggregation. Composer guarantees the Dataflow job starts only after the BigQuery load reports success, and the aggregation query runs only after the Dataflow job finishes.
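A hedged sketch of the middle link in that chain, assuming the load_events and run_aggregation tasks from the earlier snippets are defined in the same DAG file. The template path, project, region, and parameters are all illustrative:

```python
# Launching a Dataflow job from a classic template and wiring it between
# the load and aggregation tasks. All names and paths are hypothetical;
# load_events and run_aggregation are assumed defined as in earlier snippets.
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowTemplatedJobStartOperator,
)

run_dataflow = DataflowTemplatedJobStartOperator(
    task_id="run_dataflow",
    template="gs://my-templates/transform-events",  # hypothetical template path
    project_id="my-project",
    location="us-central1",
    parameters={"inputTable": "raw.events", "outputTable": "analytics.events_clean"},
)

# Dataflow starts only after the load succeeds; the aggregation
# runs only after the Dataflow job finishes.
load_events >> run_dataflow >> run_aggregation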
For Spark and Hadoop workloads, DataprocSubmitJobOperator is the standard, paired with DataprocCreateClusterOperator and DataprocDeleteClusterOperator when you want ephemeral clusters. The PDE exam favors ephemeral clusters for cost reasons, so a typical answer pattern is create cluster, submit job, delete cluster, all as tasks in the same DAG. The delete task is set to run regardless of upstream success so you never leave a cluster orphaned.
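The ephemeral-cluster pattern looks roughly like this. Project, region, cluster name, and the Spark main class are all illustrative; the key detail is trigger_rule="all_done" on the delete task, which makes it run whether or not the job succeeded:

```python
# Ephemeral Dataproc pattern sketch: create -> submit -> delete, with the
# delete running regardless of upstream outcome. Names are hypothetical.
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocCreateClusterOperator,
    DataprocDeleteClusterOperator,
    DataprocSubmitJobOperator,
)

create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id="my-project",
    region="us-central1",
    cluster_name="ephemeral-etl",
)

submit_spark = DataprocSubmitJobOperator(
    task_id="submit_spark",
    project_id="my-project",
    region="us-central1",
    job={
        "placement": {"cluster_name": "ephemeral-etl"},
        "spark_job": {"main_class": "com.example.Transform"},  # hypothetical class
    },
)

delete_cluster = DataprocDeleteClusterOperator(
    task_id="delete_cluster",
    project_id="my-project",
    region="us-central1",
    cluster_name="ephemeral-etl",
    trigger_rule="all_done",  # run even if the Spark job failed
)

create_cluster >> submit_spark >> delete_cluster
```

That trigger_rule is the detail exam scenarios probe when they ask how to guarantee the cluster is torn down after a failed job.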
Pub/Sub operators handle the publish side of event-driven workflows. PubSubPublishMessageOperator lets a DAG announce that a stage completed, which downstream consumers (including other DAGs or Cloud Functions) can react to. For the consume side, exam answers usually point to Dataflow with Pub/Sub as a source rather than an Airflow task polling a subscription, because Composer is for orchestration, not for being the streaming runtime itself.
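A sketch of the publish side, with a hypothetical topic and payload; the message body is bytes, matching the Pub/Sub message format:

```python
# Announcing stage completion on Pub/Sub so other DAGs or Cloud Functions
# can react. Topic name and payload are illustrative.
from airflow.providers.google.cloud.operators.pubsub import PubSubPublishMessageOperator

announce_done = PubSubPublishMessageOperator(
    task_id="announce_done",
    project_id="my-project",
    topic="pipeline-events",  # hypothetical topic
    messages=[{"data": b"daily_metrics refreshed"}],
)
```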
ML pipelines on the PDE exam frequently look like ingest, transform, retrain, deploy. The Vertex AI operator family covers training jobs, batch prediction, and model deployment, so Composer can drive the retrain step as the last node in a data pipeline. The classic exam phrasing is something like kick off model retraining after the feature table refresh completes, and the answer is a Composer DAG with the Vertex AI training operator downstream of the BigQuery refresh.
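As a hedged sketch of that retrain step, here is a Vertex AI custom container training job downstream of the aggregation task. The image, bucket, and display name are illustrative, the run_aggregation task is assumed defined as earlier, and the exact operator arguments vary across provider releases, so check the docs for the version you run:

```python
# Retrain step sketch: a Vertex AI custom training job that starts only
# after the feature table refresh succeeds. All names are hypothetical.
from airflow.providers.google.cloud.operators.vertex_ai.custom_job import (
    CreateCustomContainerTrainingJobOperator,
)

retrain_model = CreateCustomContainerTrainingJobOperator(
    task_id="retrain_model",
    project_id="my-project",
    region="us-central1",
    display_name="daily-retrain",
    container_uri="us-docker.pkg.dev/my-project/ml/train:latest",  # hypothetical image
    staging_bucket="gs://my-staging-bucket",
)

run_aggregation >> retrain_model  # retrain only after the refresh succeeds
```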
The single most useful mental model for the exam is this five-stage chain: files land in GCS, a GCSToBigQueryOperator load brings them into BigQuery, a Dataflow job transforms them, a BigQuery aggregation query summarizes the results, and a Vertex AI training job retrains the model on the fresh features.
If you can sketch that DAG on a whiteboard with the right operators, you will handle most Composer questions on the Professional Data Engineer exam without breaking stride.
One detail that occasionally trips people up: Composer is not GCP-only. Airflow has provider packages for AWS and Azure, so an Airflow DAG can orchestrate an S3 transfer, an Azure VM job, and a BigQuery load in the same workflow. The exam rarely asks deep multi-cloud questions, but if a scenario mentions cross-cloud dependencies, Composer remains a valid answer.
Recognize Composer as the right tool whenever a question describes dependencies, retries, or backfills across multiple services. Be able to name the operator family that runs SQL on BigQuery, loads from GCS, launches Dataflow, submits Dataproc jobs, and triggers Vertex AI. Understand that Composer orchestrates other services rather than processing data itself, and that the value comes from the DAG, not from any single task.
My Professional Data Engineer course covers Composer in depth alongside Dataflow, BigQuery, and Vertex AI so you can recognize orchestration scenarios on sight and pick the right operator family every time.