
If you take the Professional Cloud Architect exam, you will hit at least one question where a company is moving fast with CI/CD and asking how to keep security defects out of the codebase without slowing down releases. The answer the exam wants is to shift security left into the pipeline itself, specifically through static code analysis and vulnerability scanning. This article walks through what those two controls actually are, where they sit in a GCP delivery flow, and the exam framing you should expect.
The setup is almost always the same. A company is focused on rapid software development. They use CI/CD to ship frequently. Leadership is worried about security vulnerabilities in the codebase. Someone on the team will argue that adding security gates slows everything down. The exam wants you to recognize that the right move is not to choose between speed and security but to bake security into the pipeline so it runs automatically on every commit and every build. Two controls show up by name for this scenario: static code analysis and vulnerability scanning.
Static code analysis means inspecting source code without executing it. A tool reads through your code and flags patterns that tend to produce bugs or security flaws. SQL string concatenation, hardcoded credentials, unsafe deserialization, missing input validation, and calls into deprecated dependency functions are the kinds of things it catches. Because it runs on the source, it does not need a deployed environment to do its job.
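As a toy illustration of the pattern matching involved (this is a sketch in the spirit of a static analyzer, not any real tool's rule engine; the rule names and regexes are made up for the example):

```python
import re

# Two toy rules: each maps a finding name to a regex that matches a
# suspicious source pattern, roughly like a real analyzer's rule set.
RULES = {
    "sql-string-concat": re.compile(r'(execute|query)\(\s*["\'].*\+'),
    "hardcoded-credential": re.compile(
        r'(password|api_key|secret)\s*=\s*["\'][^"\']+["\']', re.I
    ),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every line matching a rule."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = '''
password = "hunter2"
cursor.execute("SELECT * FROM users WHERE id = " + user_id)
'''
for lineno, rule in scan_source(sample):
    print(f"line {lineno}: {rule}")
# prints:
# line 2: hardcoded-credential
# line 3: sql-string-concat
```

Real analyzers work on parsed syntax trees and data flow rather than regexes, but the contract is the same: source in, findings with locations and severities out, no running environment required.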
In a GCP CI/CD pipeline, the typical pattern is to wire this into Cloud Build as a step that runs after checkout and before the artifact build. Tools like SonarQube, Semgrep, or Snyk Code each have container images that drop into a Cloud Build step. The build step exits non-zero if findings exceed your configured severity threshold, which fails the build and blocks the merge. The developer sees the result inside their pull request and fixes it before the change ever leaves the branch.
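A minimal sketch of that wiring, using Semgrep as the example analyzer (the image name, config source, and flags here are illustrative assumptions, not a verified drop-in config):

```yaml
steps:
  # 1. Static analysis runs against the checked-out source, before any
  #    build. With --error, Semgrep exits non-zero on findings, which
  #    fails the build and blocks the merge.
  - id: static-analysis
    name: "semgrep/semgrep"
    entrypoint: "semgrep"
    args: ["scan", "--config", "auto", "--error", "."]

  # 2. The artifact build only runs if the analysis step passed.
  - id: build
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t",
           "us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA", "."]
```

The key design point is the ordering: the analysis step sits between checkout and build, so a failing finding stops the pipeline before any artifact exists.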
Vulnerability scanning is different. Instead of looking at your source code, it looks at the artifacts you ship, mostly container images and their dependencies. It compares package versions against known CVE databases and tells you which images contain libraries with published vulnerabilities. A clean codebase can still ship a critical CVE if your base image bundles an outdated OpenSSL.
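Conceptually, the core of a scanner is a version comparison against an advisory database. A toy sketch (the advisory list here is hand-written for the example; CVE-2022-3602 is a real OpenSSL advisory fixed in 3.0.7):

```python
# Toy advisory database: package -> (fixed-in version, advisory id).
# A real scanner pulls this from CVE/OSV feeds, not a hardcoded dict.
ADVISORIES = {
    "openssl": ((3, 0, 7), "CVE-2022-3602"),
}

def vulnerable(packages: dict[str, str]) -> list[tuple[str, str]]:
    """Flag packages whose installed version is below the fixed-in version."""
    findings = []
    for name, version in packages.items():
        if name in ADVISORIES:
            fixed_in, cve = ADVISORIES[name]
            if tuple(int(p) for p in version.split(".")) < fixed_in:
                findings.append((name, cve))
    return findings

print(vulnerable({"openssl": "3.0.5", "zlib": "1.3.1"}))
# prints: [('openssl', 'CVE-2022-3602')]
```

Notice that the scanner never reads your application source: an image built from a flawless codebase still gets flagged if its base layers carry the old OpenSSL.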
On GCP this is built into Artifact Registry. When you enable the Container Scanning API, every image pushed to Artifact Registry gets analyzed automatically. The findings are exposed through the Container Analysis API and surface in the Artifact Registry UI with severity ratings. You can also run on-demand scans before pushing, which fits cleanly into a Cloud Build step that scans the image, parses results, and fails the build when a critical vulnerability appears. Binary Authorization can then refuse to deploy any image that did not pass scanning, which closes the loop at the GKE or Cloud Run admission point.
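The "parses results, fails the build" part can be a small gate script. The sketch below assumes the scan findings have been exported as JSON occurrences carrying a `vulnerability.effectiveSeverity` field, roughly the shape `gcloud artifacts docker images list-vulnerabilities --format=json` produces; verify the exact schema against the current Container Analysis API before relying on it:

```python
import json

# Severities that fail the build. CRITICAL-only is a common starting
# threshold; tighten it to include HIGH once the backlog is under control.
BLOCKING = {"CRITICAL"}

def gate(occurrences: list[dict]) -> list[str]:
    """Return identifiers of findings at or above the blocking threshold."""
    return [
        occ.get("noteName", "unknown")
        for occ in occurrences
        if occ.get("vulnerability", {}).get("effectiveSeverity") in BLOCKING
    ]

# In a Cloud Build step this list would be json.load()ed from the gcloud
# output; a hand-written sample stands in for real scan results here.
sample = json.loads("""[
  {"noteName": "CVE-2024-0001", "vulnerability": {"effectiveSeverity": "CRITICAL"}},
  {"noteName": "CVE-2024-0002", "vulnerability": {"effectiveSeverity": "MEDIUM"}}
]""")

blocked = gate(sample)
print("blocking findings:", blocked)
# The real pipeline step would then: sys.exit(1 if blocked else 0)
```

The non-zero exit is what turns a scan report into an actual gate: Cloud Build fails the step, and the image never reaches the deploy stage.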
Static code analysis runs early, against the source. Vulnerability scanning runs later, against the built artifact. In a typical Cloud Build configuration the order looks like checkout, dependency install, static analysis, unit tests, container build, push to Artifact Registry, vulnerability scan, then deploy. Both gates run on every commit. Both fail the build on threshold breach. Both report back to the developer through the Cloud Build UI and through whatever Pub/Sub or Cloud Logging integration you have wired up for notifications.
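That ordering can be sketched end to end in a single Cloud Build config. Everything here is illustrative (image names, regions, repo paths, and test commands are assumptions), and note that the on-demand scan command by itself only produces findings; an actual gate needs a follow-up step that parses them and exits non-zero:

```yaml
steps:
  - id: static-analysis          # against the source, pre-build
    name: "semgrep/semgrep"
    entrypoint: "semgrep"
    args: ["scan", "--config", "auto", "--error", "."]
  - id: unit-tests
    name: "python:3.12"
    entrypoint: "bash"
    args: ["-c", "pip install -r requirements.txt && pytest"]
  - id: build
    name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t",
           "us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA", "."]
  - id: push
    name: "gcr.io/cloud-builders/docker"
    args: ["push",
           "us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA"]
  - id: vulnerability-scan       # against the built artifact, post-push
    name: "gcr.io/cloud-builders/gcloud"
    args: ["artifacts", "docker", "images", "scan",
           "us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA",
           "--remote"]
    # A real gate adds a step here that reads the scan results and
    # fails on critical findings before deploy can run.
  - id: deploy                   # only reached if every gate passed
    name: "gcr.io/cloud-builders/gcloud"
    args: ["run", "deploy", "app", "--region", "us-central1",
           "--image",
           "us-docker.pkg.dev/$PROJECT_ID/my-repo/app:$SHORT_SHA"]
```

Cloud Build runs steps sequentially by default, so a non-zero exit anywhere in this list stops everything after it, which is exactly the behavior both gates rely on.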
When you see a question that frames security as a brake on development velocity, the right answers are the ones that integrate security into the pipeline rather than the ones that add manual review steps or post-deployment scans. If the question asks how to prevent vulnerabilities in the codebase, you pick static code analysis in the CI/CD pipeline. If it asks how to prevent shipping known CVEs in dependencies or container images, you pick vulnerability scanning in the CI/CD pipeline. If both options appear and the question allows multiple selections, you pick both. Anything that defers security to a manual gate or to production monitoring is wrong on this kind of question.
The same exam objective often pairs with environment isolation. The standard answer for keeping development, staging, and production separate on GCP is separate projects per environment. Project-level isolation gives you clean IAM boundaries, separate quotas, separate billing, and no risk of a developer service account reaching into production by accident. Static analysis and vulnerability scanning sit inside each environment's pipeline. Project isolation sits around them. The exam treats both as part of the same secure CI/CD story.
My Professional Cloud Architect course covers static code analysis and vulnerability scanning in CI/CD alongside the rest of the architecture and compliance material.