Cloud IAM Essentials for the Professional Cloud Architect Exam

Ben Makansi
March 28, 2026

Identity and Access Management, or IAM, is rarely the first service that comes to mind when people picture "exciting" or "cutting edge" cloud technologies in GCP. But it is definitely one of the most important.

Cloud IAM is how you manage who has what access to which resources in GCP. I think this one sentence is a great way to capture the entire purpose of Cloud IAM, and every concept in this guide maps back to one of those three components: the "who" (principals), the "what access" (roles and permissions), and the "which resources" (organizations, folders, projects, and services).

IAM has a good chance of showing up on the Professional Cloud Architect (PCA) exam.

You could see questions about service accounts, role types, least privilege, federated authentication, and more. This post covers the IAM concepts you need to know and how they tend to appear on the exam.

Principals: The "Who"

A principal is any identity that can be granted access to GCP resources. IAM policies are applied to principals, and there are two types you need to understand.

User accounts are tied to human beings. They provide the identity and credentials a person needs to authenticate and access cloud services. Users authenticate with a username and password.

Service accounts exist for applications, processes, and virtual machines rather than for people. Service accounts authenticate via keys and tokens instead of usernames and passwords, and they are the backbone of any automated or scheduled task in the cloud.

Service Accounts

Think about a cloud application with multiple components: storage, databases, message queues, APIs. These components need to communicate securely with each other. Service accounts make that possible by ensuring only the right components have access to one another.

Consider an App Engine application that needs to pull data from BigQuery, write processed data back, and occasionally trigger a Cloud Function. The App Engine service account is what enables all of those interactions. Without it, the application has no way to authenticate against those downstream services.

The same applies to scheduled workloads. A cron job, a batch process, a CI/CD pipeline. None of these should rely on a human manually providing credentials at runtime. You assign a service account, and the process runs autonomously.

Google-Managed vs. User-Managed Service Accounts

GCP has two categories of service accounts.

Google-managed service accounts are created automatically by GCP services. They come with default permissions, follow a predetermined naming format, and handle system tasks within GCP services.

For example, the default Compute Engine service account follows a format like PROJECT_NUMBER-compute@developer.gserviceaccount.com. The default App Engine service account looks like PROJECT_ID@appspot.gserviceaccount.com. If you have a GCP project, you already have these.

User-managed service accounts are ones you create yourself. You choose the name, assign custom permissions from the start, and tailor them to specific applications or processes. A user-managed service account might look like myappserviceaccount@PROJECT_ID.iam.gserviceaccount.com. These give you full control.
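The two naming conventions are distinctive enough that you can tell account types apart from the email alone. A quick classifier sketch, where the regexes are illustrative assumptions rather than an official validation rule:

```python
import re

# Default (Google-managed) service account formats -- illustrative patterns,
# not an exhaustive or official validation rule.
COMPUTE_DEFAULT = re.compile(r"^\d+-compute@developer\.gserviceaccount\.com$")
APPENGINE_DEFAULT = re.compile(r"^[a-z][a-z0-9-]*@appspot\.gserviceaccount\.com$")
# User-managed service accounts live under the project's own IAM domain.
USER_MANAGED = re.compile(r"^[a-z][a-z0-9-]*@[a-z][a-z0-9-]*\.iam\.gserviceaccount\.com$")

def classify(email: str) -> str:
    """Best-effort classification of a service account email."""
    if COMPUTE_DEFAULT.match(email):
        return "google-managed (Compute Engine default)"
    if APPENGINE_DEFAULT.match(email):
        return "google-managed (App Engine default)"
    if USER_MANAGED.match(email):
        return "user-managed"
    return "unknown"

print(classify("123456789012-compute@developer.gserviceaccount.com"))
print(classify("myappserviceaccount@my-project.iam.gserviceaccount.com"))
```

Recognizing these formats on sight is worth the practice: exam answer choices sometimes differ only in which account email they reference.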

The distinction matters for the exam and for real life GCP use. Google-managed accounts handle baseline service operations. User-managed accounts are what you reach for when you need precise, scoped access for a specific workload. If a Professional Cloud Architect question describes a custom application that needs tightly scoped access to a few services, the answer almost always involves a user-managed service account.

Something to note for the exam: to create and manage user-managed service accounts, you need the Service Account Admin role (roles/iam.serviceAccountAdmin). This role lets you create, modify, delete, and disable service accounts across your project. If your team runs automated systems backed by service accounts, having someone explicitly responsible for managing them through this role is a smart move.

A Quick Rule: Never Use Your User Account for Scripts

If you have a script that connects to a GCP service (a Cloud SQL database, a storage bucket, anything), authenticate it with a service account. User accounts are tied to individuals and are not designed for programmatic access. That is the entire point of service accounts. Assign one with the minimum permissions the script needs (for example, the Cloud SQL Viewer role for a read-only database script) and move on.

Service Account Key Management

Service accounts need credentials to authenticate, and those credentials come in two flavors.

Google-managed keys are handled entirely by GCP. Google creates them, stores them securely, and rotates them automatically. These keys are attached directly to GCP resources like Compute Engine instances or Cloud Functions. You never download a key file, and you never worry about rotation. For GCP-native resources, this is the default and preferred approach.

Manually generated keys are created and downloaded by you, typically as a JSON file. These exist for a specific reason: when something outside of GCP needs to authenticate with GCP services. An on-premises server, a third-party application, a local development environment. These scenarios require a downloadable key because the external system has no other way to present credentials. Because you download this file, keeping it secure is your responsibility.
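A manually generated key downloads as a JSON file with a well-known shape. Here is a sketch of the important fields; the values below are placeholders (a real file carries a full PEM-encoded private key, plus a few more fields):

```python
import json

# Placeholder values -- a real key file contains a complete RSA private key
# and must be protected like a password.
key_file = {
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "abc123",
    "private_key": "-----BEGIN PRIVATE KEY-----\nPLACEHOLDER\n-----END PRIVATE KEY-----\n",
    "client_email": "migration-sa@my-project.iam.gserviceaccount.com",
    "token_uri": "https://oauth2.googleapis.com/token",
}

# External systems typically point GOOGLE_APPLICATION_CREDENTIALS at this
# file so GCP client libraries pick it up automatically.
print(json.dumps(key_file, indent=2)[:60])
```

Because this file is a complete credential, anyone who obtains it can authenticate as the service account, which is why storage and rotation of manual keys is your responsibility.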

Imagine you're migrating an on-prem system to GCP. The data goes to Firestore, and the application workloads move to a Compute Engine VM. During migration, the on-prem system needs to access Firestore to transfer data, so you use a manually generated key for that authentication. The Compute Engine VM also needs Firestore access, but since it's a GCP-native resource, it uses a Google-managed key. This combination (manual keys for external systems, Google-managed keys for GCP resources) is the standard pattern.

This is the kind of scenario the Professional Cloud Architect exam likes to test.

Permissions and Roles: The "What Access"

Principals need to do things. Permissions define exactly what those things are at the most granular level. Reading files in Cloud Storage, writing to BigQuery datasets, creating and canceling Dataflow jobs. Each of these is a discrete permission. There are thousands of permissions in GCP, and even a single user will typically need many of them.

Roles are simply collections of permissions grouped together. GCP organizes roles into three categories, each progressively more specific.

Basic Roles

These are broad, project-level roles. There are three of them. The Owner role provides full access to all resources, including the ability to manage roles and permissions for others. The Editor role grants read and write access to most resources but cannot manage roles or permissions. The Viewer role is read-only. Owners are for project admins and team leads. Editors are for developers and operators. Viewers are for auditors and stakeholders who need to review without modifying.

Predefined Roles

These are created and maintained by Google, tailored for common job functions within specific services. BigQuery Job User lets you run queries and load data. Storage Object Viewer gives read-only access to Cloud Storage. Cloud Run Developer is for deploying and managing Cloud Run services. Compute Engine Admin manages VMs and disks. Each predefined role bundles the exact permissions needed for a particular function within a particular service.

Take the Composer User role as an example. It bundles multiple permissions together, including executing DAGs, getting environments, managing secrets, and several others. All of these permissions work together to let someone operate within Cloud Composer without you having to assign each one individually.

Custom Roles

When predefined roles don't fit, you build your own. Custom roles let you select the exact combination of permissions your use case requires. Consider a financial services company where data analysts need to query BigQuery, run Dataflow jobs, read Cloud Storage objects, and access specific Compute Engine logs, but nothing else. A custom role bundles precisely those permissions and nothing more.

Role Name Formats

You'll encounter two formats, and the exam uses both. The descriptive name, like "Data Catalog Entry Viewer," is the human-readable version used in documentation and conversation. The technical name, like datacatalog.entryViewer, follows a service.roleName format and is what you use in the console and CLI. Know both, because the exam may present either format in its answer choices.
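A small mapping makes the two formats concrete. The pairs below come from roles mentioned in this guide; note that when a predefined role appears in an actual IAM policy, the technical name is prefixed with "roles/":

```python
# Descriptive-name to technical-name pairs for a few predefined roles
# discussed in this guide.
ROLE_NAMES = {
    "Data Catalog Entry Viewer": "datacatalog.entryViewer",
    "Storage Object Viewer": "storage.objectViewer",
    "BigQuery Job User": "bigquery.jobUser",
}

def full_role_name(technical: str) -> str:
    """Predefined roles appear as roles/SERVICE.ROLE in IAM policies."""
    return f"roles/{technical}"

print(full_role_name(ROLE_NAMES["Data Catalog Entry Viewer"]))
# -> roles/datacatalog.entryViewer
```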

The Principle of Least Privilege

This is one of the most important concepts in cloud security and a near-certainty on the PCA exam. Expect questions that describe an overprivileged scenario and ask you to identify the correct fix.

The principle is straightforward: always allocate the minimum necessary permissions for a person or service account to do what it needs. Nothing more.

Consider a developer with access to dev, staging, and production environments. If they accidentally push code to production instead of dev, the code skips testing and breaks the live environment. This happens because the developer had more access than their day-to-day work required.

Or consider a user with read and write access to BigQuery, Cloud SQL, Dataflow, Cloud Run, Cloud Storage, and App Engine, but who only actually uses three of those services. If their account is compromised, the attacker now has access to services the user never even touched. The blast radius of the breach is larger than it needed to be.

Even worse, if a compromised account has access to Cloud IAM itself, the attacker can escalate their own privileges and grant themselves access to anything in the environment. This is called privilege escalation, and it's one of the most dangerous consequences of overprivileged access.

Following least privilege mitigates all of these risks. It reduces the blast radius of compromised accounts, limits the potential for insider threats, lowers the chance of accidental damage, and simplifies compliance and auditing. Implementing it typically means reaching for predefined or custom roles rather than basic roles.

Enforcing Least Privilege Through Environment Separation

One of the most effective ways to enforce least privilege is to structure your GCP resource hierarchy around environment separation. Create separate projects (or folders) for each stage of your development lifecycle: development, testing, staging, and production.

With this structure in place, access boundaries become clear. Developers get access to the development and testing projects. The QA team gets testing and staging. The ops team gets staging and production. Each team has access only to the environments it needs, and the principle of least privilege is enforced by the hierarchy itself.

This kind of setup can use separate projects or separate folders containing multiple projects. The right approach depends on the scale and complexity of your workloads. Either way, the result is clear boundaries, reduced risk, and a workflow that supports both security and operational efficiency. This type of scenario shows up on the PCA exam, so be ready to map team roles to environment-level access.
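The team-to-environment mapping above can be sketched as a toy access matrix. Team and environment names here are illustrative, not from any real GCP configuration:

```python
# A toy access matrix for the environment-separation pattern: each team is
# granted access only to the projects (environments) it actually needs.
ACCESS = {
    "developers": {"dev", "test"},
    "qa":         {"test", "staging"},
    "ops":        {"staging", "prod"},
}

def can_access(team: str, environment: str) -> bool:
    """Least privilege falls out of the hierarchy: a team only sees its envs."""
    return environment in ACCESS.get(team, set())

print(can_access("developers", "prod"))  # False: devs can't touch production
print(can_access("ops", "prod"))         # True
```

In real GCP, the same matrix is expressed by granting each team's group a role on the corresponding project or folder, so the hierarchy enforces the boundary rather than ad hoc per-user grants.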

Groups: Scaling Access Management

When you have multiple team members who need identical access, assigning permissions individually gets unmanageable fast. The solution is Google Groups.

The process is simple. Create a Google Group using your Google Workspace or Cloud Identity domain. Add the relevant team members. Then add the group to IAM and assign the appropriate roles. Permissions apply to the group as a whole, so you manage access once instead of per person.

Groups are typically named to reflect environment, project, and role. Something like dev-database-readers@yourcompany.com for team members who need read-only database access in the dev environment, or data-scientists-prod@yourcompany.com for users who need production analytics access. This approach scales cleanly across teams and is considered a best practice. The PCA exam often presents scenarios where multiple users need the same access. If the answer choices include assigning roles individually versus using a group, the group is almost always correct.
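In an IAM policy, granting to a group looks just like granting to a user, except for the member prefix. The group address and role below are illustrative; the "group:" prefix (alongside "user:" and "serviceAccount:") is the real IAM member convention:

```python
# Sketch of a policy binding that grants one role to a whole group instead of
# listing users individually. The group address is a made-up example.
binding = {
    "role": "roles/cloudsql.viewer",
    "members": ["group:dev-database-readers@yourcompany.com"],
}

# Every member string starts with a type prefix that tells IAM what kind of
# principal it is.
for member in binding["members"]:
    assert member.startswith(("user:", "group:", "serviceAccount:"))

print(binding["role"])
```

Adding a new teammate then means adding them to the Google Group, with no change to the IAM policy at all, which is exactly why groups scale better than per-user grants.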

Federated Authentication

Federated authentication lets users access GCP resources using their existing credentials from a trusted identity provider (IdP) like Okta or Microsoft Entra ID (formerly Azure Active Directory). No new accounts, no new passwords, no credentials stored in GCP.

Here's how the flow works. A user requests access to a GCP resource. Instead of authenticating directly with GCP, the request is redirected to the organization's identity provider. The IdP verifies the user's credentials and issues a secure token (a SAML assertion or OIDC token) back to GCP. GCP receives the token and grants access based on whatever IAM policies apply to that user.
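The trust decision at the end of that flow can be sketched as a toy claim check. Claim names (iss, aud, sub) follow OIDC conventions; the issuer URL and audience value are made-up examples, and real token verification also checks the cryptographic signature and expiry:

```python
# Toy sketch of the GCP-side trust decision on an IdP-issued token.
# Real verification also validates the token's signature and expiry.
EXPECTED_ISSUER = "https://idp.example.com"      # made-up IdP URL
EXPECTED_AUDIENCE = "my-gcp-workforce-pool"      # made-up audience value

def token_is_trusted(claims: dict) -> bool:
    """Accept only tokens from the configured IdP, for the configured audience."""
    return (claims.get("iss") == EXPECTED_ISSUER
            and claims.get("aud") == EXPECTED_AUDIENCE
            and "sub" in claims)

good = {"iss": EXPECTED_ISSUER, "aud": EXPECTED_AUDIENCE, "sub": "alice"}
print(token_is_trusted(good))
```

The point of the sketch is that GCP only ever handles the token's claims, never the user's password, which is exactly the property the exam scenarios test for.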

GCP supports any IdP that implements SAML 2.0 or OpenID Connect (OIDC), which covers the vast majority of enterprise identity systems.

This setup has a few practical advantages. Passwords never touch GCP, which satisfies data protection regulations. Users keep logging in with the system they already know, making cloud transitions practically invisible. And identity management stays centralized, making it easier to enforce security policies and scale access as the organization grows.

Here is the kind of question you can expect on the PCA exam. An organization migrating a CRM to GCP needs to avoid storing user passwords in the cloud while keeping the login experience seamless. The answer is federated authentication using SAML 2.0 with the existing IdP. GCP never sees the password. It only receives the secure token generated by the IdP.

Essential gcloud Commands for IAM

A few commands worth knowing for managing roles and policies from the command line.

Copying roles between projects: gcloud iam roles copy replicates an IAM role from one project to another. Useful when you need consistent custom roles across multiple projects.

Retrieving IAM policies: The get-iam-policy command shows which members and roles have access to a given resource. It works across resource types:

  • gcloud projects get-iam-policy PROJECT_ID: retrieves roles and members for a project
  • gcloud storage buckets get-iam-policy gs://BUCKET_NAME: retrieves principals with access to a storage bucket
  • gcloud compute instances get-iam-policy INSTANCE_NAME --zone=ZONE: retrieves principals with access to a Compute Engine instance

These commands are how you audit access in practice and confirm that your IAM policies match your intentions.
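Those commands print a policy as a list of bindings, and auditing often means filtering that output in code. A sketch, assuming you exported the policy as JSON (gcloud supports a --format=json flag); the policy below is a made-up example in the real bindings/members shape:

```python
# Audit sketch: given a policy in the bindings/members shape that
# `gcloud projects get-iam-policy --format=json` prints, list every role
# bound to a single member. The policy contents are made-up examples.
policy = {
    "bindings": [
        {"role": "roles/viewer",
         "members": ["user:alice@example.com", "group:auditors@example.com"]},
        {"role": "roles/bigquery.jobUser",
         "members": ["user:alice@example.com"]},
    ]
}

def roles_for(policy: dict, member: str) -> list:
    """Collect every role in the policy that includes this member."""
    return [b["role"] for b in policy["bindings"] if member in b["members"]]

print(roles_for(policy, "user:alice@example.com"))
# -> ['roles/viewer', 'roles/bigquery.jobUser']
```

Running a check like this per member is a quick way to spot overprivileged accounts and confirm least privilege in practice.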

Cloud IAM is one of the most important services to know about even if it's not the most exciting. And it's definitely something you should be familiar with for the Professional Cloud Architect exam. Understanding principals, permissions, roles, least privilege, environment separation, groups, and federated authentication will prepare you for the IAM questions you encounter on exam day and for the real-world usage of GCP.
