
One of the first decisions you make when picking a GCP service is how much of the infrastructure you want to run yourself. Google Cloud groups services along a spectrum from unmanaged to managed to serverless, and the Professional Cloud Architect exam expects you to know where each service sits and what that means for your team's operational responsibility.
The question that organizes the whole spectrum is: do you manage the infrastructure, or does Google? Different projects answer that question differently. Some need full control over the OS, networking, and scaling logic for customization or compliance reasons. Others want to ship code and let Google handle everything underneath.
GCP slots its services into three buckets to match those needs. Unmanaged on one end, where you handle setup, maintenance, and configuration. Managed in the middle, where Google takes over operational tasks like server setup and OS maintenance while you control how your application is deployed. Serverless on the other end, where Google abstracts the infrastructure entirely and you focus on code, data, and application logic.
With an unmanaged service, GCP provides the infrastructure but you manage it entirely. Compute Engine is the canonical example. It is GCP's infrastructure-as-a-service offering, and when you spin up a VM you are responsible for configuring scaling, networking, and the operating system yourself.
Scaling means deciding how the application grows. Do you add more instances during peak traffic? Do you scale down when things are quiet? With Compute Engine, those decisions and the configuration that implements them fall on you. Networking means setting up subnets, IP addresses, and firewall rules so your instances communicate securely. The operating system means picking a Linux distribution or Windows image and patching and maintaining it yourself.
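To make that concrete, here is a hedged sketch of the provisioning work an unmanaged service implies, using the gcloud CLI (the instance name, rule name, zone, and sizes are hypothetical). You pick the OS image, the machine type, and the firewall rules yourself:

```shell
# You choose the OS image and machine type (hypothetical names):
gcloud compute instances create web-1 \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=http-server

# You open the firewall yourself; nothing is reachable over HTTP until you do:
gcloud compute firewall-rules create allow-http \
    --allow=tcp:80 \
    --target-tags=http-server
```

None of this is unusual; it is simply the operational surface you sign up for at the unmanaged end of the spectrum, and it continues after day one with OS patching and scaling configuration.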
The trade-off is the headline fact. Unmanaged gives you maximum customizability, because you can tailor every layer to your project's exact needs. It also gives you the most management overhead, because every layer is yours to operate.
With a managed service, GCP takes responsibility for the underlying infrastructure. That means server setup, software installation, and operating system maintenance are handled by Google. You still control how your application is configured and deployed, so the shape of your workload is up to you, but the foundation underneath is Google's job.
App Engine is a managed service for deploying applications without worrying about server maintenance. Google Kubernetes Engine is managed in the sense that the control plane and node infrastructure are handled for you, while you control deployment configurations, workloads, and scaling behavior. Cloud Bigtable is a managed NoSQL database, where Google runs the storage layer and you design schemas and queries.
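The split in responsibilities is visible in an App Engine config. A minimal app.yaml sketch (the runtime choice and scaling limit are illustrative): you declare how the application runs, and Google owns the servers it runs on.

```yaml
# app.yaml -- you control the runtime and the scaling shape;
# Google handles the machines, OS patching, and load balancing.
runtime: python312
automatic_scaling:
  max_instances: 10
```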
Most services in Google Cloud are at least managed. That is worth internalizing for the Professional Cloud Architect exam, because it reframes the question from "is this managed?" to "how managed is it, and is it also serverless?"
Serverless services, also called no-ops, are the highest level of abstraction. GCP automatically manages the infrastructure and the servers, including scaling. You focus on the code, data, or application logic and Google handles the rest.
The relationship between managed and serverless is one of containment. All no-ops services are also managed, but not all managed services are no-ops. App Engine, GKE, and Bigtable are managed but not no-ops, because you still make decisions about deployment configuration, cluster sizing, or node counts. Serverless services strip even those decisions away.
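The containment is easy to state as sets. A toy sketch in Python (the service lists are a small sample drawn from the examples in this article, not an exhaustive catalog):

```python
# A small sample of services at each level -- illustrative, not exhaustive.
MANAGED = {"App Engine", "GKE", "Bigtable",
           "Pub/Sub", "Cloud Functions", "Cloud Run", "Dataflow"}
NO_OPS = {"Pub/Sub", "Cloud Functions", "Cloud Run", "Dataflow"}

# Every no-ops service is also managed...
assert NO_OPS <= MANAGED
# ...but not every managed service is no-ops.
assert MANAGED - NO_OPS == {"App Engine", "GKE", "Bigtable"}
```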
Pub/Sub is serverless event-driven messaging. You publish and subscribe without provisioning anything underneath. Cloud Functions runs individual functions in response to events. Cloud Run runs stateless containers in a fully managed environment. Dataflow handles streaming and batch data processing without manual server management.
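What "focus on the code" means in practice: a Cloud Functions-style HTTP handler is the whole deliverable. This sketch assumes the standard HTTP-function shape, where the entry point receives a request object with an `args` mapping (in production that object is a `flask.Request`); everything else, including scaling to zero, is Google's problem:

```python
def hello_http(request):
    """HTTP entry point. You write and deploy this function;
    GCP provisions, scales, and patches whatever runs it."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"
```

Compare this with the Compute Engine example earlier: there is no instance, no firewall rule, and no OS image anywhere in sight.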
When a Professional Cloud Architect scenario asks you to pick a service, the management level is usually doing real work in the question. If the requirement says "minimize operational overhead" or "the team has no infrastructure expertise," you are being pushed toward serverless options like Cloud Run, Cloud Functions, or Pub/Sub. If the requirement says "full control over the OS" or "custom kernel modules," you are being pushed toward Compute Engine. If the requirement is somewhere in between, like "containerized workloads with control over deployment but not the underlying nodes," you are being pushed toward GKE or App Engine.
The shorthand that helps is to read the requirement, decide where on the unmanaged-to-serverless spectrum it lands, and then pick the service in that band. The exam rewards knowing which services live at which level of abstraction more than it rewards knowing every flag of every service.
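That shorthand can be written down as a toy lookup. The keyword lists and service picks below are illustrative samples taken from the scenarios above, not an official mapping:

```python
# Signal phrases -> band on the spectrum -> candidate services.
# Both the phrases and the picks are illustrative, not exhaustive.
BANDS = {
    "serverless": (["minimize operational overhead", "no infrastructure expertise"],
                   ["Cloud Run", "Cloud Functions", "Pub/Sub"]),
    "managed":    (["containerized workloads", "control over deployment"],
                   ["GKE", "App Engine"]),
    "unmanaged":  (["full control over the os", "custom kernel modules"],
                   ["Compute Engine"]),
}

def pick_band(requirement: str) -> str:
    """Place an exam requirement on the unmanaged-to-serverless spectrum."""
    text = requirement.lower()
    for band, (signals, _services) in BANDS.items():
        if any(s in text for s in signals):
            return band
    return "managed"  # reasonable default: most GCP services are at least managed
```

Real exam questions are messier than a keyword match, but the decision structure is the same: signal words place the requirement in a band, and the band narrows the service choice.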
My Professional Cloud Architect course covers the unmanaged, managed, and serverless spectrum alongside the rest of the foundational architecture material.