
Direct VPC Egress and Serverless VPC Access show up on the Professional Cloud Architect exam as two different answers to the same underlying problem. A serverless service like Cloud Run needs to talk to a private resource like AlloyDB, and by default it cannot. The exam wants you to know what each option actually does, when each one applies, and which one Google now treats as the preferred path. I want to walk through how I think about this for the exam.
The starting point is understanding why this is a problem at all. Serverless services run in Google's managed network. Cloud Run, certain Dataflow configurations, and a handful of other services live there because the whole point of serverless is that Google handles the infrastructure. You deploy code, and the underlying network placement is not something you configure.
Private resources are the opposite. AlloyDB, Compute Engine VMs configured with private IPs, Cloud SQL set up for private access, and Memorystore all live inside your VPC. They are configured to accept connections only from private IP addresses inside that VPC, which is a security best practice because it keeps databases off the public internet and reduces the attack surface.
The result is two networks that have no path between them. The serverless side has no VPC IP. The private side only accepts traffic from inside the VPC. That is the gap both Direct VPC Egress and Serverless VPC Access exist to close.
The clearest version of this on the Professional Cloud Architect exam is the Cloud Run to AlloyDB scenario. Cloud Run is sitting in Google's managed network with no VPC IP. AlloyDB is sitting inside your VPC at a private address like 10.0.1.5. You want the Cloud Run service to query the AlloyDB instance, and the connection fails because Cloud Run has no way to send traffic into your VPC's private address space.
This is the setup the exam tends to use because it is concrete and the failure mode is intuitive. Once you recognize the shape of this scenario in a question, you know the answer is going to be one of the two private-network options. The remaining work is figuring out which one.
Direct VPC Egress places the serverless service into your VPC for outbound traffic. After it is enabled, the Cloud Run service is given its own private IP inside your VPC, something like 10.0.2.10. Both Cloud Run and AlloyDB now sit inside the same VPC at private addresses, and traffic between them moves the same way it would between any two private resources in that network.
There is no extra component in the path. There is no connector to provision, no separate piece of infrastructure to size, and no additional hop the traffic has to take. From a routing perspective, Cloud Run is just another resource in the VPC for the purpose of outbound connections.
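As a rough sketch, enabling Direct VPC Egress is just a matter of pointing the Cloud Run deployment at a network and subnet. The service, project, network, and subnet names below are placeholders, not values from the exam scenario:

```shell
# Deploy a Cloud Run service with Direct VPC Egress enabled.
# The service gets a private IP from the given subnet, so it can reach
# private resources such as an AlloyDB instance without a connector.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my-project/my-repo/my-image \
  --region=us-central1 \
  --network=my-vpc \
  --subnet=my-subnet \
  --vpc-egress=private-ranges-only
```

The `--vpc-egress` setting controls scope: `private-ranges-only` sends only private-IP traffic through the VPC, while `all-traffic` routes everything, including internet-bound requests, through it.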
That simplicity is the main reason Google has been pushing this option as the preferred approach.
Serverless VPC Access is the older approach to the same problem, and it works by introducing a connector. The Cloud Run service stays where it always was, in Google's managed network with no VPC IP. The connector sits between the serverless side and your VPC and acts as a bridge. Traffic from Cloud Run goes through the connector, and the connector forwards it into the VPC where AlloyDB can accept it.
It works reliably. It has been the standard answer for years, and there are still scenarios where it is the only available option. The cost is operational. The connector is a piece of infrastructure you have to provision, size, and manage. You have to think about throughput, instance counts, and the connector's own lifecycle, on top of whatever you are already managing for the serverless service itself.
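The connector-based setup, again sketched with placeholder names, has two steps: provision the connector with its own unused /28 range inside the VPC, then point the Cloud Run service at it:

```shell
# Step 1: provision the Serverless VPC Access connector.
# It needs a /28 range in the VPC that nothing else uses, and you size
# it yourself via min/max instances -- the operational overhead the
# connector approach carries.
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 \
  --network=my-vpc \
  --range=10.8.0.0/28 \
  --min-instances=2 \
  --max-instances=3

# Step 2: route the service's outbound traffic through the connector.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my-project/my-repo/my-image \
  --region=us-central1 \
  --vpc-connector=my-connector
```

Comparing the two sketches makes the exam distinction tangible: Direct VPC Egress is one deployment flag set, while Serverless VPC Access is a separate resource with its own sizing and lifecycle.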
The two approaches serve the same purpose, which is letting a serverless service reach a private resource inside a VPC. The difference is how they get there.
Direct VPC Egress puts the service into your VPC directly. There is no connector to provision or pay for, the setup is more streamlined, and because traffic skips the extra hop, latency and throughput are generally better. Serverless VPC Access keeps the service outside the VPC and uses a connector as the bridge. The connector adds a piece of infrastructure to the picture that you have to manage and size.
The direction Google is moving is clear. Direct VPC Egress is becoming the more common choice on Google Cloud, and on the Professional Cloud Architect exam it is the more likely correct answer in scenarios where both options are available. The catch is that Direct VPC Egress is not yet available for every service and every region. When a question constrains you to a service or region where Direct VPC Egress is not supported, Serverless VPC Access is still the right answer.
When a Professional Cloud Architect exam question describes a serverless service that needs to reach a private resource, I check two things. The first is whether the scenario is set up cleanly enough that Direct VPC Egress is on the table. If the service is Cloud Run or another service where Direct VPC Egress is supported, and the region is one of the regions where it is offered, Direct VPC Egress is the answer the exam is moving toward.
The second is whether the question is forcing a constraint that rules Direct VPC Egress out. If the scenario specifies a service that does not support it, or a region that does not yet have it, or describes an existing setup that has already invested in connectors, Serverless VPC Access is the right answer. The exam does not always pick the newest option just because it exists. It picks the option that fits the constraints the question gives you.
One quick distractor to watch for is anything that suggests exposing the private resource to the public internet, or putting the database behind a public IP, or using the internet as the bridge between the serverless service and the VPC. That is never the right answer in a question that introduces a private resource on purpose. The whole reason the resource is private is to keep it off the public internet, and the correct architectures preserve that property.
Direct VPC Egress puts the serverless service inside your VPC with its own private IP. Serverless VPC Access keeps the service outside the VPC and routes through a connector. Both close the gap between Google's managed network and your private resources. Direct VPC Egress is the cleaner option and the one the exam increasingly favors, but Serverless VPC Access remains the answer when the service or region does not yet support the newer approach.
If you want to go deeper on this, my Professional Cloud Architect course covers Direct VPC Egress and Serverless VPC Access alongside the rest of the advanced architecture material.