
API keys, database passwords, OAuth tokens, signing keys. Every non-trivial application has a handful of these, and where they live is one of the most consequential architectural decisions a team makes. Hardcode them in source, and they leak the moment a repo goes public or a junior dev pushes to the wrong remote. Drop them into a config file on a VM, and you have to solve distribution, rotation, and access control yourself. Cloud Secret Manager is Google Cloud's answer to this problem, and the Professional Cloud Architect exam expects you to know when to reach for it and how it integrates with the rest of the platform.
I'm Ben Makansi, founder of GCP Study Hub, and in this article I want to walk through what Cloud Secret Manager actually does, how it plugs into a real workload, and the architectural reasoning the Professional Cloud Architect exam wants you to apply when secrets show up in a scenario question.
Cloud Secret Manager is a managed service for storing and retrieving sensitive credentials. Instead of embedding secrets in code or configuration files, applications fetch them from a dedicated, encrypted store at runtime. The service handles four things that you would otherwise have to build or operate yourself.
First, encryption at rest. Every secret is encrypted automatically without any configuration on your part. You do not provision keys, you do not pick algorithms, you do not manage envelope encryption. It just works.
Second, fine-grained access control through IAM. Each secret is a resource you can grant access to independently. A particular service account might be allowed to read the production Stripe key but have no visibility into the staging Twilio token. Permissions are scoped at the secret level, which is what makes least-privilege possible in practice rather than just on paper.
Third, versioning. Every time a secret is updated, the prior value is retained as a previous version. Workloads can pin to a specific version or always pull the latest. This matters for rollback. If a key rotation breaks a downstream service, you can repoint to the previous version while the issue is debugged, without scrambling to recover the old value from a backup somewhere.
Fourth, auditing and rotation. Secret accesses are logged through Cloud Audit Logs, which means you can trace exactly which workload retrieved which secret at which time. One caveat worth knowing: secret reads land in Data Access audit logs, which must be enabled for Secret Manager before those entries appear. Rotation can be put on a schedule, with Secret Manager publishing rotation notifications on a timetable you set, so new versions can be added by automation rather than by an engineer manually pasting values into a config map.
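In code, versioned retrieval reduces to building a resource name and calling access_secret_version. Here is a minimal sketch assuming the google-cloud-secret-manager Python client's interface; the client is passed in so the logic stays testable, and the project and secret names are placeholders:

```python
def access_secret(client, project: str, secret_id: str, version: str = "latest") -> str:
    """Fetch one secret payload from Secret Manager.

    `client` is assumed to follow the google-cloud-secret-manager
    interface: access_secret_version(request={"name": ...}) returns a
    response whose payload.data holds the secret bytes. Pass a numbered
    version (e.g. "3") to pin, or "latest" to track the newest value.
    """
    name = f"projects/{project}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("utf-8")
```

In production the client would be secretmanager.SecretManagerServiceClient(); rolling back after a bad rotation is then just repointing version from "latest" to the previous number.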
The pattern that comes up most often on the Professional Cloud Architect exam is a containerized service running on Cloud Run that needs to call an external API. Here is how the pieces fit together.
You create a secret in Cloud Secret Manager. The secret has a name, say stocks_api_secret, and a value, which is the actual API key issued by the third party. The value is encrypted at rest the moment it is stored.
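Programmatically, creating the secret and storing its value are two calls: the secret itself is just a named container with a replication policy, and the sensitive bytes live in versions added beneath it. A hedged sketch against the google-cloud-secret-manager request shapes, with the client injected so the logic is testable and the project and secret IDs hypothetical:

```python
def store_secret(client, project: str, secret_id: str, value: bytes):
    """Create a secret container, then add the value as its first version.

    Assumes the google-cloud-secret-manager request shapes:
    create_secret makes the named container; add_secret_version
    stores the actual sensitive bytes beneath it.
    """
    parent = f"projects/{project}"
    client.create_secret(
        request={
            "parent": parent,
            "secret_id": secret_id,
            "secret": {"replication": {"automatic": {}}},
        }
    )
    # Each update later is another add_secret_version call; old
    # versions are retained for rollback.
    return client.add_secret_version(
        request={
            "parent": f"{parent}/secrets/{secret_id}",
            "payload": {"data": value},
        }
    )
```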
You deploy the Cloud Run service and configure an environment variable, say STOCKS_API_KEY, but instead of assigning a plaintext value to that variable, you reference the secret. Cloud Run does not store the key itself. It holds a reference, and at runtime it resolves that reference by calling Secret Manager and injecting the value into the container's environment.
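In the Cloud Run service manifest, that reference looks like an ordinary env entry whose value comes from a secretKeyRef. A sketch of the relevant fragment, with the service name and image hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: stocks-service          # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: us-docker.pkg.dev/my-project/app/stocks:latest  # hypothetical image
          env:
            - name: STOCKS_API_KEY
              valueFrom:
                secretKeyRef:
                  name: stocks_api_secret   # Secret Manager secret ID
                  key: latest               # version: "latest" or a number
```

The same wiring is available from the command line through gcloud run deploy's --set-secrets flag, e.g. --set-secrets=STOCKS_API_KEY=stocks_api_secret:latest.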
The application then reads the variable like any other environment variable:
```python
import os

import requests

# Cloud Run resolves this variable from Secret Manager at runtime;
# the code reads it like any other environment variable.
STOCKS_API_KEY = os.getenv("STOCKS_API_KEY")

response = requests.get(
    "https://api.stocktrading.com/v1/portfolio",
    headers={"Authorization": f"Bearer {STOCKS_API_KEY}"},
)
```

The application code is unchanged from what it would look like if the key were hardcoded as an environment variable in a Dockerfile. The difference is upstream: Cloud Run is configured to pull the value from Secret Manager, and access is gated by the runtime service account holding the Secret Manager Secret Accessor role (roles/secretmanager.secretAccessor) on that specific secret.
It is worth being explicit about what this architecture buys you, because the Professional Cloud Architect exam frequently asks you to compare it against weaker patterns.
Secrets are never in source control. Whatever happens to the repo, the secret stays in Secret Manager. There is no risk of a credential leaking through a git log, a forked branch, or a mirrored repo on someone's laptop.
Secrets are never in plaintext as a static environment variable. The Cloud Run configuration references the secret, but the actual value is resolved at runtime from an encrypted source. The container's environment holds the value in memory while the process is running, but the deployment manifest does not.
Access follows least privilege. The service account attached to the Cloud Run service has read access to exactly the secrets it needs. A different service running in the same project, with a different service account, has no path to those secrets unless explicitly granted.
Every retrieval is auditable. If you suspect a credential has been compromised, you can pull the audit log and see precisely which principals accessed it and when. That is the difference between a security incident with a clear blast radius and one where you have no idea who saw what.
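As a concrete starting point, a Logs Explorer filter along these lines narrows the audit trail to reads of one secret. The method name is the documented AccessSecretVersion audit entry; the resourceName pattern and secret ID are illustrative, and Data Access audit logs must be enabled for Secret Manager for these entries to exist:

```
protoPayload.serviceName="secretmanager.googleapis.com"
protoPayload.methodName="google.cloud.secretmanager.v1.SecretManagerService.AccessSecretVersion"
protoPayload.resourceName:"secrets/stocks_api_secret"
```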
Scenario questions on the Professional Cloud Architect exam tend to drop secrets into a broader architecture problem. A question might describe an application that currently stores credentials in a Kubernetes ConfigMap, or in a build artifact, or in a deployment script, and ask what should change. The answer involves moving the value into Secret Manager, attaching the appropriate IAM role to the workload's identity, and configuring the runtime to fetch the secret rather than carry it inline.
Other scenarios test whether you understand the audit and rotation story. If a question mentions compliance requirements, traceability of credential access, or the ability to roll keys without a code deploy, those are the cues that Secret Manager is the right answer.
The wrong answers in these questions tend to involve clever-but-bespoke alternatives. Storing secrets in a Cloud Storage bucket with object-level ACLs. Encrypting them with KMS and putting the ciphertext in a config file. Using a custom HashiCorp Vault deployment on GKE. These can all be made to work, but they are not the managed, native answer the exam is looking for, and they shift operational burden onto the team for a problem Google has already solved.
A Professional Cloud Architect exam scenario describes a Cloud Run service that authenticates with a third-party API. The current deployment hardcodes the API key in the container image. The team needs to rotate the key quarterly, audit all access, and ensure the credential never appears in source control. What should the architect recommend?
The right approach is to store the key in Cloud Secret Manager, grant the Cloud Run service's runtime service account the Secret Manager Secret Accessor role on that specific secret, and configure the Cloud Run deployment to expose the secret as an environment variable by reference rather than by value. Rotation becomes a matter of adding a new version to the secret, and audit logs capture every retrieval automatically.
Secret Manager is one of those services that is straightforward once you have used it, but easy to overlook if you have only read about it. The pattern is consistent across compute targets. Cloud Run, Cloud Functions, GKE workloads, and Compute Engine VMs all integrate with Secret Manager through their respective service-account identities and the same Secret Accessor role.
If you want a structured walkthrough of Secret Manager alongside the rest of the security material on the Professional Cloud Architect exam, the GCP Study Hub Professional Cloud Architect course covers the full security domain with hands-on examples and exam-style questions.