
A good deployment pipeline does not just get code into production. It also gets bad code out of production fast. When the latest release ships a bug that breaks the app, you do not want to be hand-editing instances or scrambling for the right artifact. You want a button to push, or a command to run, that returns the system to the last known good state. That is what rollback means in a CI/CD context, and it shows up directly on the Professional Cloud Architect exam.
A rollback is the act of returning a deployed application to a previous version after a release introduces problems. In a CI/CD pipeline, every successful build produces a versioned, immutable artifact: a container image with a tag, a Debian package with a version number, or a compiled binary stored in Artifact Registry. When you roll back, you redeploy one of those previous artifacts through the same pipeline you used to deploy the broken one.
The key idea is that rollback is just another deployment. You are not patching the live system or undoing changes in place. You are taking a known-good artifact from earlier in your release history and pushing it out through the normal channel. Same automation, same validation steps, same tooling. The only difference is the version number.
The exam tests this with a familiar setup. You have an application running on a managed instance group. You ship a deployment, and the new release has a bug that is affecting users. You need to stabilize the system while you investigate the root cause. The right answer is not to start SSHing into VMs and reverting files. The right answer is to roll back the application to the previous version by redeploying the last known stable version through the pipeline.
This works because your MIG is built around an instance template that points to a specific version of your application. To roll back, you point the MIG at the previous instance template, the one running the version before the bad release, and trigger a rolling update. The MIG replaces the broken instances with new instances built from the older template.
The mechanics depend on what you are deploying to, but the pattern is consistent.
For a MIG, you keep your previous instance templates around. The pipeline creates a new template for each release rather than mutating an existing one. To roll back, you run gcloud compute instance-groups managed rolling-action start-update with the older template, and the MIG drains and replaces instances incrementally. Health checks ensure unhealthy instances do not stay in service.
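A minimal sketch of that MIG rollback, assuming a group named web-mig in us-central1-a and a retained template web-template-v41 from the previous release (all names here are placeholders):

```shell
# Roll the MIG back by starting a rolling update to the previous
# release's instance template (names are hypothetical).
gcloud compute instance-groups managed rolling-action start-update web-mig \
  --zone=us-central1-a \
  --version=template=web-template-v41 \
  --max-surge=3 \
  --max-unavailable=0

# Block until every instance in the group runs the older template.
gcloud compute instance-groups managed wait-until web-mig \
  --version-target-reached \
  --zone=us-central1-a
```

The --max-surge and --max-unavailable flags control how aggressively the MIG swaps instances; the values above favor keeping full capacity during the rollback.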
For Cloud Run, every deployment creates a new revision. Rollback is moving traffic back to a prior revision with gcloud run services update-traffic. The old revision is still there, ready to receive traffic. No rebuild required.
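Sketched as commands, with a hypothetical service myapp and a placeholder revision name standing in for a real prior revision:

```shell
# Send 100% of traffic back to a known-good prior revision.
# myapp-00041-abc is a placeholder for a real revision name.
gcloud run services update-traffic myapp \
  --region=us-central1 \
  --to-revisions=myapp-00041-abc=100

# List revisions to confirm which one is now serving traffic.
gcloud run revisions list --service=myapp --region=us-central1
```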
For GKE, you use kubectl rollout undo on the deployment, which reverts to the previous ReplicaSet. Kubernetes keeps a configurable history of revisions (revisionHistoryLimit, ten by default) so you can roll back several versions if needed.
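The GKE rollback, sketched against a hypothetical deployment named myapp:

```shell
# Inspect the rollout history to find the revision to restore.
kubectl rollout history deployment/myapp

# Revert to the immediately previous ReplicaSet...
kubectl rollout undo deployment/myapp

# ...or to a specific revision number from the history.
kubectl rollout undo deployment/myapp --to-revision=3

# Wait for the rollback to complete.
kubectl rollout status deployment/myapp
```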
In every case, the pipeline itself is the rollback mechanism. The artifact already exists. You are just telling the platform to use the older one.
The exam is checking that you understand the relationship between automation, versioned artifacts, and operational stability. A pipeline is not just for shipping new code. It is the controlled path in and out of production. When something goes wrong, the right move is to use that controlled path to restore the previous state, not to bypass it with manual fixes that no one else can audit or reproduce.
The other tempting answers on questions like this usually involve SSHing into instances, manually rolling back database migrations, or rebuilding from source. Those are the wrong answers because they do not use the pipeline that already exists, they introduce drift between what the pipeline thinks is deployed and what is actually running, and they do not scale beyond a single engineer remembering what they changed.
If you are designing a deployment pipeline as a Professional Cloud Architect, the rollback story should be explicit from day one. That means versioned artifacts in Artifact Registry rather than mutable tags, instance templates retained across releases instead of overwritten, traffic-splitting on Cloud Run treated as a first-class deployment primitive, and a documented runbook that says "to roll back, run this command." If your team has to think about how to roll back during an incident, the pipeline is incomplete.
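As a sketch of what "versioned artifacts rather than mutable tags" looks like in practice, assuming an Artifact Registry repository in us-central1 (project, repository, and version number are all placeholders):

```shell
# Build and push an image with an immutable version tag instead of
# reusing a mutable tag like :latest. Each release gets its own tag,
# so every previous version stays available for rollback.
docker build -t us-central1-docker.pkg.dev/my-project/my-repo/app:1.4.2 .
docker push us-central1-docker.pkg.dev/my-project/my-repo/app:1.4.2
```

With this convention, the rollback runbook reduces to redeploying the previous tag; nothing needs to be rebuilt during an incident.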
This also connects to canary and blue-green strategies. Both reduce the blast radius of a bad release, but neither removes the need for rollback. They just give you a smaller mess to clean up when you trigger one.
My Professional Cloud Architect course covers rolling back deployments alongside the rest of the architecture and compliance material.