
Service accounts authenticate workloads to Google Cloud, but the question that trips up Professional Cloud Architect candidates is how those service accounts actually prove their identity. The answer is keys, and there are two distinct types with very different operational profiles. Picking the wrong one for a given workload is a security mistake, and on the exam it is a wrong answer.
I will walk through both key types, then show how they combine in a realistic migration pattern that the exam loves to test.
Google-managed keys are exactly what the name suggests. Google Cloud creates them, stores them, and rotates them on a regular schedule. You never see the key file, you never download anything, and you never have to write code that loads a credential from disk.
These keys never leave Google; instead, the service account itself is attached directly to a GCP resource. When a Compute Engine VM runs with an attached service account, the metadata server hands short-lived credentials to any process running on that VM. The same pattern works for Cloud Run, Cloud Functions, GKE workloads with Workload Identity, and most other Google Cloud compute surfaces.
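You can see this pattern directly from inside a VM. A sketch, assuming you are on a Compute Engine instance with an attached service account (this endpoint is not reachable from outside Google Cloud):

```shell
# Ask the metadata server for a short-lived access token.
# The Metadata-Flavor header is required; requests without it are rejected.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
# The response is JSON containing a short-lived access_token and an
# expires_in value in seconds. Client libraries fetch and refresh this
# for you automatically, so application code rarely calls it directly.
```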
Because Google handles the lifecycle, the attack surface is small. There is no JSON key file sitting on a developer laptop, no expired key checked into a Git repository, and no rotation script that someone forgot to run. If the workload lives inside Google Cloud, Google-managed keys are almost always the right call.
User-managed keys, sometimes called manually-generated keys, are the other option. You create them in the Cloud Console or with gcloud, Google hands you a JSON file, and that file becomes the credential. Anything that holds the file can authenticate as the service account.
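The creation step is a single gcloud command. A sketch, with a placeholder service account email:

```shell
# Generate a user-managed key; Google returns the private key as a
# JSON file and does not keep a copy of the private material.
gcloud iam service-accounts keys create key.json \
  --iam-account=etl-sa@my-project.iam.gserviceaccount.com
# key.json is now a bearer credential: anything holding this file can
# authenticate as the service account, so treat it like a password.
```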
The reason these exist is that not every workload runs inside Google Cloud. Consider an on-premises ETL job, a Jenkins server in a colo, or a third-party SaaS tool that needs to read from a Cloud Storage bucket: none of these can use the metadata server, so they need a credential they can carry with them. That is what the JSON key file provides.
The trade-off is operational. The file has to live somewhere, it has to be protected, and it has to be rotated by you. Leaked service account keys are one of the most common ways Google Cloud projects get compromised, and they show up in security audits constantly. If you can avoid using a user-managed key, you should.
The Professional Cloud Architect exam likes scenarios where you have to pick the correct key type for each piece of an architecture. The scenarios are usually migrations, because migrations involve workloads on both sides of the boundary at the same time.
Here is the canonical pattern. An on-premises system is migrating to Google Cloud. The data is moving to Firestore, and the application workloads are moving to Compute Engine. During the cutover, the on-prem system still needs to read and write to Firestore so the data stays in sync. After the cutover, the new VM running on Compute Engine needs to access Firestore as the application's primary database.
Two service accounts are involved, one for each system. The question is what kind of key each one uses.
For the on-premises system, you use a user-managed key. The on-prem server cannot reach the Google Cloud metadata service, and it cannot run as a GCP resource. It is an external system, so it needs a JSON key file it can use to authenticate against the Firestore API. You generate the key, hand it to the on-prem system through whatever secure channel you trust, and the system uses it for the duration of the migration.
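The on-prem side can be sketched as follows; the project ID, service account name, role, and file paths are placeholders for illustration. Firestore access is granted through the Datastore role family:

```shell
# Create a dedicated service account for the on-prem sync workload.
gcloud iam service-accounts create onprem-sync \
  --display-name="On-prem Firestore sync (migration)"

# Grant it read/write access to Firestore.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:onprem-sync@my-project.iam.gserviceaccount.com" \
  --role="roles/datastore.user"

# Generate the user-managed key and deliver it over a secure channel.
gcloud iam service-accounts keys create onprem-sync-key.json \
  --iam-account=onprem-sync@my-project.iam.gserviceaccount.com

# On the on-prem server, point Application Default Credentials at the
# file; the Firestore client libraries pick it up automatically.
export GOOGLE_APPLICATION_CREDENTIALS=/secure/path/onprem-sync-key.json
```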
For the Compute Engine VM, you use a Google-managed key. The VM is a native Google Cloud resource, so you attach a service account to it at creation time. Google handles the rest. The VM authenticates to Firestore through the metadata server, no key file needed.
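The Compute Engine side never touches a key file. A sketch, again with placeholder names:

```shell
# Create a dedicated service account for the application VM.
gcloud iam service-accounts create app-vm

# Grant it Firestore access.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:app-vm@my-project.iam.gserviceaccount.com" \
  --role="roles/datastore.user"

# Attach the service account at VM creation time. No key file exists
# anywhere: processes on the VM get short-lived tokens from the
# metadata server via Application Default Credentials.
gcloud compute instances create app-server \
  --service-account=app-vm@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
```

Granting both service accounts the same role keeps the two systems interchangeable during the sync window; only the credential mechanism differs.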
This is a clean architecture because each workload uses the right tool for its environment. The external system gets the only kind of credential it can use, and the GCP-native workload uses the safer Google-managed pattern. Once the migration is finished and the on-prem system is decommissioned, you delete the user-managed key and the entire system runs on Google-managed credentials only.
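The cleanup step is worth showing too, since forgotten user-managed keys are exactly what security audits flag. A sketch, using the placeholder names from the migration:

```shell
# List the user-managed keys on the on-prem service account.
gcloud iam service-accounts keys list \
  --iam-account=onprem-sync@my-project.iam.gserviceaccount.com \
  --managed-by=user

# Delete the key by the key ID shown in the list output.
gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=onprem-sync@my-project.iam.gserviceaccount.com
```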
Two rules cover almost every Professional Cloud Architect question on this topic. If the workload runs inside Google Cloud, attach a service account and let Google manage the keys. If the workload runs outside Google Cloud, you need a user-managed key, and you accept the operational burden that comes with it.
The migration scenario is just both rules applied at the same time to a single architecture. Spot the boundary, apply each rule on its side of the boundary, and the answer falls out.
My Professional Cloud Architect course covers service account keys and migration authentication patterns alongside the rest of the IAM and governance material.