
Cross-project deployments with Cloud Build show up on the Professional Data Engineer exam in a very specific shape, and once you see the pattern a couple of times it becomes one of the easier scoring opportunities. The setup is simple to describe but easy to get wrong under exam pressure if you have not thought through the IAM side of it. In this post I want to walk through how Cloud Build promotes artifacts from a development project into a production project, where the service account permissions live, and what the exam is really asking when it dresses this up as a Cloud Composer scenario.
Google's reference architectures lean hard on the idea of one project per environment. Development, staging, and production each get their own project boundary. The reason is blast radius. IAM bindings, quotas, billing, audit logs, and even accidental console clicks are all scoped to the project. If a data engineer with broad permissions in dev fat-fingers a delete, you do not want that to reach the production data warehouse or the production Composer environment. Keeping environments in separate projects gives you a hard wall between them.
That wall is exactly what makes deployment interesting. You still need a single CI/CD pipeline that can build, test, and promote code through every environment. Cloud Build is usually that pipeline. The question becomes where Cloud Build itself lives and how it reaches across the wall to deploy into the upper environment.
The configuration the Professional Data Engineer exam wants you to recognize puts Cloud Build in the development project. That same Cloud Build instance is responsible for two things in sequence. First it deploys the code into the dev environment so tests can run. Then, only after the tests pass, it promotes the deployment into the production project.
You could instead run a separate Cloud Build in each project and have each environment deploy independently. Both designs are valid in the real world. The exam tends to favor the single-pipeline version because it is the cleanest way to enforce the rule that prod only gets touched after dev tests pass. If you see an answer choice that talks about a Cloud Build trigger in dev that promotes to prod, that is the one to gravitate toward.
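As a rough sketch, a single cloudbuild.yaml in the dev project can express the whole deploy-test-promote sequence. The bucket names, builder versions, and test command below are illustrative placeholders, not anything the exam or Google prescribes:

```yaml
# Hypothetical cloudbuild.yaml living in the dev project.
# Steps run in order, and the build stops at the first failure.
steps:
  # 1. Deploy to dev so the tests have something to run against.
  - name: "gcr.io/cloud-builders/gsutil"
    args: ["cp", "-r", "dags/", "gs://dev-artifacts-bucket/dags/"]

  # 2. Run the test suite. A nonzero exit code fails the build,
  #    so the promotion step below is never reached.
  - name: "python:3.11"
    entrypoint: "python"
    args: ["-m", "pytest", "tests/"]

  # 3. Only reached when tests pass: promote into the prod project.
  #    This step works only if the dev Cloud Build service account
  #    has write access on the prod bucket (a cross-project grant).
  - name: "gcr.io/cloud-builders/gsutil"
    args: ["cp", "-r", "dags/", "gs://prod-artifacts-bucket/dags/"]
```

The sequential-steps behavior is what encodes the "prod only after dev tests pass" rule without any extra orchestration logic.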
This is the part most people get tripped up on. Cloud Build runs as a service account. By default that service account exists in the project where Cloud Build is configured, which in our pattern is the dev project. When Cloud Build needs to deploy to the prod project, the dev service account has to be granted permissions in prod. Permissions do not flow across projects on their own.
Two pieces matter here:
First, the deployment permission itself. Writing artifacts to a GCS bucket in prod means roles/storage.objectAdmin on that bucket. Deploying a Cloud Run service means roles/run.admin. Updating a Composer environment's GCS bucket means storage permissions on that bucket. The pattern is the same: grant the dev Cloud Build service account the narrowest role in prod that lets it do its job.

Second, impersonation. If the deployment runs as a dedicated service account in the prod project, the dev Cloud Build service account also needs roles/iam.serviceAccountUser on that prod service account. This trips people up because the error message points at the deploy step, not at the missing impersonation grant.

A quick way to grant cross-project access from the command line:
gcloud projects add-iam-policy-binding PROD_PROJECT_ID \
    --member="serviceAccount:DEV_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"

gcloud iam service-accounts add-iam-policy-binding \
    prod-deployer@PROD_PROJECT_ID.iam.gserviceaccount.com \
    --member="serviceAccount:DEV_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
    --role="roles/iam.serviceAccountUser"

Note the member identity. It is the Cloud Build default service account in the dev project, identified by the dev project's number. That is the principal you are extending across the project boundary. (The first command binds the role at the project level; for an even narrower grant, bind it on the prod bucket itself with gcloud storage buckets add-iam-policy-binding.)
The Professional Data Engineer exam loves to dress this pattern up as a Composer question. Here is the version to memorize. You have a Cloud Composer environment in dev and another in prod. DAG files live in source control. The pipeline needs to copy DAGs into the dev Composer environment's GCS bucket, let the dev Composer pick them up and run tests, and then copy the validated DAG files into the prod Composer environment's GCS bucket so prod starts running them.
Cloud Build sits in dev and orchestrates the whole thing. It writes the DAGs to the dev bucket, waits for tests, and then writes the same DAGs to the prod bucket. The promotion step is a plain gsutil cp from one bucket to another, but it only works if the dev Cloud Build service account has been granted write access to the prod Composer bucket. That bucket lives in the prod project, so the grant is cross-project.
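Concretely, the promotion reduces to a couple of commands like these. The environment name, location, and bucket names are hypothetical; each Composer environment exposes its own DAG bucket, which you can read off the environment's configuration:

```shell
# Find the prod Composer environment's DAG bucket (the
# config.dagGcsPrefix field holds the gs:// path).
gcloud composer environments describe prod-env \
    --location us-central1 --format="value(config.dagGcsPrefix)"

# Copy the validated DAGs from the dev bucket into the prod
# dags/ folder. This only succeeds if the dev Cloud Build
# service account has write access on the prod bucket.
gsutil cp gs://dev-composer-bucket/dags/*.py gs://prod-composer-bucket/dags/
```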
When you see a question about promoting code or artifacts from a lower environment to an upper environment, look for three things. Is each environment in its own project? Where does the Cloud Build pipeline live? And does the correct answer mention granting the Cloud Build service account from the source project the relevant role in the target project? If all three are present, you are almost certainly on the right answer.
The Professional Data Engineer exam is really testing whether you understand that Cloud Build is a workload running as an identity, and that identity needs explicit IAM bindings in any project it touches. Once that clicks, cross-project deployment stops being a special topic and starts being just another IAM problem.
My Professional Data Engineer course covers Cloud Build cross-project deployments alongside the rest of the CI/CD and orchestration material you need for the exam, including the Cloud Composer DAG promotion pattern in full.