
Artifact Registry shows up on the Professional Cloud Architect exam in two ways. The first is as a destination for container images in a deployment pipeline. The second is as a source of test questions about IAM roles, vulnerability scanning, and tagging conventions. I want to walk through the deployment flow end to end so you can answer either kind of question without hesitation.
Container Registry was Google Cloud's original solution for storing private Docker images. It is being retired and Artifact Registry is the replacement. Artifact Registry stores container images, but it also stores language packages like Maven artifacts and npm packages, along with other build dependencies. For the Professional Cloud Architect exam, treat Artifact Registry as the default answer for any artifact storage question. If a scenario explicitly references Container Registry, the same deployment concepts apply, but the IAM roles differ.
Every container deployment on Google Cloud follows the same shape. You build the image, push it to a registry, grant read access to whoever is doing the deploy, and then deploy the image to a compute target.
Step one is the build. Docker takes your application code and dependencies and packages them into a container image. This happens locally, in Cloud Build, or in any CI system that can run a Docker build.
Step two is the push. The image goes from your build environment into Artifact Registry. The image now lives in a private repository inside your Google Cloud project, identified by a URI like us-central1-docker.pkg.dev/myproject/myrepo/myservice:tag.
Step three is permissions. The identity that pulls the image needs read access to the registry. This is usually the runtime service account attached to the Cloud Run service or to the GKE node pool. Without read access, the deploy fails when the runtime tries to pull the image.
Step four is the deploy. The compute target, typically Cloud Run or GKE, pulls the image and runs it. The image URI you reference at deploy time tells the platform exactly which version to pull.
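The four steps can be sketched end to end. This is a dry-run sketch, not a definitive pipeline: every name (region, project, repo, service) is a placeholder, and the `run` wrapper only prints each command rather than executing it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of build -> push -> grant -> deploy. All names are placeholders.
set -euo pipefail

REGION="us-central1"
PROJECT="myproject"
REPO="myrepo"
SERVICE="myservice"
TAG="d4f6a2e"                      # ideally the Git commit hash
IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${SERVICE}:${TAG}"

run() { echo "+ $*"; }             # prints the command; swap for "$@" to execute

# Step 1: build the image from the local Dockerfile.
run docker build -t "${IMAGE}" .

# Step 2: push it to the Artifact Registry repository.
run docker push "${IMAGE}"

# Step 3: let the runtime service account pull from the repository.
run gcloud artifacts repositories add-iam-policy-binding "${REPO}" \
  --location="${REGION}" \
  --member="serviceAccount:runtime-sa@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"

# Step 4: deploy, pinning the exact image version by URI.
run gcloud run deploy "${SERVICE}" --image="${IMAGE}" --region="${REGION}"
```

Note that the image URI built in the sketch does double duty: the same string names the push target in step two and the exact version to run in step four.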
The exam tests the permissions step directly. Container Registry stores images in Cloud Storage buckets under the hood, so the role for pulling from Container Registry is roles/storage.objectViewer, the Storage Object Viewer role, granted on the underlying bucket.
Artifact Registry does not use Cloud Storage buckets directly, so the Storage Object Viewer role is irrelevant. Instead, you grant roles/artifactregistry.reader, the Artifact Registry Reader role, which allows pulling images and reading repository metadata. Reader is what the deploying identity needs.
If you see a question where a Cloud Run service or a GKE pod cannot pull an image from Artifact Registry, the answer is almost always that the runtime service account is missing the Artifact Registry Reader role on the repository. Granting Storage Object Viewer is the wrong answer for Artifact Registry. That trap appears on the exam.
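In a real troubleshooting session, the first check is what the repository's IAM policy actually grants, followed by the usual fix. A hedged sketch with placeholder names; the `run` wrapper prints the commands instead of executing them:

```shell
#!/usr/bin/env bash
# Placeholder project, repo, and service account names throughout.
set -euo pipefail
run() { echo "+ $*"; }             # prints instead of executing

# Inspect who currently holds which role on the repository.
run gcloud artifacts repositories get-iam-policy myrepo --location=us-central1

# The usual fix: grant the runtime service account the Reader role.
run gcloud artifacts repositories add-iam-policy-binding myrepo \
  --location=us-central1 \
  --member="serviceAccount:runtime-sa@myproject.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"
```

Granting at the repository level, as shown, is tighter than a project-level grant and is enough for the runtime to pull.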
Both Container Registry and Artifact Registry can scan images automatically against a database of known vulnerabilities, including CVEs. The scan compares the contents of the image, including its OS packages and language libraries, against the vulnerability database, and produces a findings report. The report lists each detected vulnerability, its severity, and the recommended remediation.
This is a security control you turn on as part of your container workflow. It catches vulnerable base images and outdated dependencies before they reach production. On the Professional Cloud Architect exam, if a scenario asks how to detect known CVEs in container images stored in Artifact Registry, vulnerability scanning is the answer. You do not need a third-party tool for this baseline check.
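Turning the control on and reading the findings is a pair of commands. Another dry-run sketch: the image URI is a placeholder, the `run` wrapper only prints, and the exact flags are worth confirming against current gcloud documentation.

```shell
#!/usr/bin/env bash
set -euo pipefail
run() { echo "+ $*"; }             # prints instead of executing

# Enable automatic on-push scanning via the Container Scanning API.
run gcloud services enable containerscanning.googleapis.com

# Read the vulnerability findings for a pushed image.
run gcloud artifacts docker images describe \
  us-central1-docker.pkg.dev/myproject/myrepo/myservice:d4f6a2e \
  --show-package-vulnerability
```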
Tags are human-readable identifiers attached to an image when it is built or pushed. In the URI us-central1-docker.pkg.dev/myproject/myrepo/myservice:d4f6a2e, the tag is d4f6a2e.
The recommended practice is to use the Git commit hash as the tag. Each commit in your Git repository has a unique hash. When you build an image from a specific commit and tag the image with that commit hash, every image is traceable back to the exact code state that produced it. If a deployed service has an issue, you can look at the image URI, identify the commit, and inspect that exact code. If you need to roll back, you redeploy the image tagged with the previous commit hash.
The alternative, tagging everything with latest, breaks traceability. Two deploys with the same latest tag can point at completely different code, which makes debugging and rollback difficult. The Professional Cloud Architect exam will sometimes describe a team that cannot figure out which code version is running in production. The answer to that scenario involves tagging images with commit hashes rather than mutable tags like latest.
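The commit-hash convention is a one-liner in the build script. A sketch with a placeholder registry path; the git fallback value is only there so the snippet runs outside a repository, and the `run` wrapper prints instead of executing.

```shell
#!/usr/bin/env bash
# Tag the image with the short commit hash so every image maps to one commit.
set -euo pipefail

# Short hash of HEAD; falls back to a dummy value outside a Git repository.
COMMIT="$(git rev-parse --short=7 HEAD 2>/dev/null || echo "d4f6a2e")"
IMAGE="us-central1-docker.pkg.dev/myproject/myrepo/myservice:${COMMIT}"

run() { echo "+ $*"; }             # prints instead of executing
run docker build -t "${IMAGE}" .
run docker push "${IMAGE}"

# Rollback is just redeploying the image tagged with the previous commit hash.
echo "${IMAGE}"
```

Because the tag is derived from HEAD at build time, the script never needs a human to invent a version string, and no two commits can share an image tag.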
For the exam, hold these four facts in your head. Artifact Registry is the current generation of artifact storage and Container Registry is being retired. The deployment flow is build, push, grant read access, deploy. Pulling from Artifact Registry requires the Artifact Registry Reader role, not Storage Object Viewer. And tagging images with Git commit hashes is the practice that gives you traceability and clean rollbacks.
My Professional Cloud Architect course covers Artifact Registry deployment flow alongside the rest of the containers and serverless material.