
Cloud Interconnect questions on the Professional Cloud Architect exam usually come down to a single decision: how much redundancy does this workload actually need? The answer drives whether you build out multiple Dedicated Interconnect connections across separate metro areas or pair a single Interconnect with Cloud VPN as a backup. Both patterns show up on the exam, and they map to different cost and uptime requirements.
I want to walk through the colocation model first, then the two HA topologies that the Professional Cloud Architect exam tends to test.
Dedicated Interconnect requires a physical connection point. Google does not run fiber to your data center directly. Instead, you connect to a colocation facility, which is a third-party building that has cross-connects into Google's network backbone. From the colo, traffic enters Google's infrastructure and reaches your VPC.
Colocation facilities are grouped into metro areas. A metro area is a geographic zone, usually named after a city, that contains one or more colocation facilities. The exam uses the term "metro area" directly, so it is worth knowing.
One detail that trips people up: colocation facilities are not always near a GCP region. There are colos in Phoenix and Salt Lake City, for example, even though neither has a GCP region. The colo is an entry point into Google's network, not a data center where your workloads run. Once your traffic is on Google's backbone, it can reach any region.
The first pattern is what you reach for when you have a production workload and uptime is the priority. The setup looks like this:

- Two Dedicated Interconnect connections in one metro area, terminating in separate colocation facilities.
- Two more Dedicated Interconnect connections in a second metro area, also in separate facilities.
- VLAN attachments and Cloud Routers in at least two GCP regions, with global dynamic routing enabled on the VPC.
The redundancy works at two levels. Inside a single metro, if one colocation facility has an issue, the other connection in the same metro still carries traffic. Across metros, if an entire metro area has a regional outage, the second metro keeps the hybrid link alive. You are protected against both facility-level and regional failures.
This is the topology to pick on the exam when the question says something like "production workload" or "highest availability" and does not flag cost as a constraint. The four-connection layout is what Google's documentation describes as the 99.99% availability SLA configuration, and it is the right answer when uptime is non-negotiable.
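As a rough sketch of what provisioning one of the four connections involves, the gcloud flow looks roughly like the commands below. Every name, the location, and the ASN are placeholders of my own, not values from any real setup; check the current docs before relying on exact flags.

```shell
# Hedged sketch: ordering one Dedicated Interconnect connection and wiring it
# to a VPC. All names, the location, and the ASN are illustrative.

# List valid colocation facility locations first:
gcloud compute interconnects locations list

# Order the physical connection (repeat per facility and per metro to build
# out the four-connection layout):
gcloud compute interconnects create ic-metro1-facility1 \
    --customer-name="Example Corp" \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --requested-link-count=1 \
    --location=iad-zone1-1   # placeholder location name

# A Cloud Router in the region that will terminate the VLAN attachment.
# The 99.99% configuration uses routers in at least two regions.
gcloud compute routers create router-us-east \
    --network=prod-vpc --region=us-east4 --asn=65010

# The VLAN attachment ties the Interconnect to the router and VPC:
gcloud compute interconnects attachments dedicated create attach-metro1-1 \
    --interconnect=ic-metro1-facility1 \
    --router=router-us-east --region=us-east4
```

After the order is placed, Google issues an LOA-CFA for the cross-connect at the facility; the attachment only passes traffic once the physical link is provisioned and the BGP session on the Cloud Router comes up.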
The second pattern is for disaster recovery scenarios where you need continuity but do not need the absolute uptime of the four-connection setup. The architecture is:

- One Dedicated Interconnect connection as the primary hybrid link.
- A Cloud VPN tunnel over the public internet as the failover path, with routing configured so traffic only shifts to the tunnel when the Interconnect is unavailable.
The Dedicated Interconnect handles normal traffic, including bulk data replication and backup transfers. Cloud VPN sits there as a fallback. If the Interconnect goes down, traffic flows over the VPN's encrypted tunnel across the public internet.
Cloud VPN is not as performant as Dedicated Interconnect. You lose the dedicated bandwidth, the low latency, and the private path. What you gain is continuity at a fraction of the cost of a second Interconnect. For a disaster recovery plan where the goal is "keep replication running during an outage" rather than "never miss a packet," this is the right trade.
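A hedged sketch of wiring the VPN fallback follows, assuming an existing VPC, a Cloud Router already serving the Interconnect attachment, and an on-prem VPN device; all names, IP addresses, ASNs, and the secret are placeholders. The key move is the BGP advertised route priority (MED): give the VPN peer a higher, i.e. worse, value than the Interconnect peer so the tunnel carries traffic only when the Interconnect routes are withdrawn.

```shell
# Illustrative only: names, addresses, ASNs, and the shared secret are
# placeholders. Create the HA VPN gateway and a tunnel to the on-prem peer:
gcloud compute vpn-gateways create vpn-gw \
    --network=prod-vpc --region=us-east4

gcloud compute external-vpn-gateways create onprem-gw \
    --interfaces=0=203.0.113.10   # on-prem public IP (placeholder)

gcloud compute vpn-tunnels create tunnel-backup \
    --vpn-gateway=vpn-gw --region=us-east4 \
    --peer-external-gateway=onprem-gw \
    --peer-external-gateway-interface=0 \
    --interface=0 --ike-version=2 \
    --shared-secret=REDACTED \
    --router=router-us-east

# BGP session over the tunnel. The advertised route priority is the MED:
# lower wins, so 200 here loses to e.g. 100 on the Interconnect peer, and
# the VPN path is used only when the Interconnect routes disappear.
gcloud compute routers add-interface router-us-east \
    --interface-name=if-tunnel-backup --vpn-tunnel=tunnel-backup \
    --ip-address=169.254.1.1 --mask-length=30 --region=us-east4

gcloud compute routers add-bgp-peer router-us-east \
    --peer-name=bgp-onprem-backup --peer-asn=64512 \
    --interface=if-tunnel-backup --peer-ip-address=169.254.1.2 \
    --advertised-route-priority=200 --region=us-east4
```

The same priority logic has to exist on the on-prem side: the customer router should prefer the Interconnect path and fail over to the tunnel when its BGP session drops.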
One detail worth knowing for the exam: Cloud VPN is a better backup option than Direct Peering. Direct Peering connects you to Google's public services, carries no SLA, and does not terminate in your VPC, so it cannot take over private-IP traffic the way a VPN tunnel managed by a Cloud Router can. If a question pits VPN against peering as a backup for Interconnect, VPN is the answer.
The deciding factor on the Professional Cloud Architect exam is the workload profile in the question stem.
If the question describes a production workload, mission-critical traffic, or asks for the highest level of availability, pick the four-connection layout across two metros. The phrase "redundant, geographically distributed connections" is a strong signal for this topology.
If the question describes a disaster recovery plan, data replication for backup, or asks for a cost-effective resilient setup, pick Dedicated Interconnect with Cloud VPN as the failover path. The phrase "absolute uptime is less critical" or any framing around cost-effectiveness points here.
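The heuristic in the two paragraphs above can be sketched as a toy function. This is purely illustrative: the signal phrases, the function name, and the topology labels are my own shorthand, not Google terminology.

```python
# Toy encoding of the exam heuristic: map signal phrases in a question stem
# to one of the two HA Interconnect patterns. Phrase lists and labels are
# illustrative, not official terminology.
def pick_topology(question: str) -> str:
    """Return the HA pattern suggested by the question's signal phrases."""
    q = question.lower()
    dr_signals = ["disaster recovery", "cost-effective", "replication",
                  "less critical"]
    ha_signals = ["production", "mission-critical", "highest availability",
                  "geographically distributed"]
    # DR signals are checked first: cost framing usually overrides a bare
    # mention of production traffic in these question stems.
    if any(s in q for s in dr_signals):
        return "dedicated-interconnect-plus-cloud-vpn"
    if any(s in q for s in ha_signals):
        return "four-connections-two-metros"
    return "unclear"
```

Real exam questions mix signals, so treat this as a mnemonic for the decision boundary, not a classifier.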
Both patterns are valid HA architectures. The exam is testing whether you can match the topology to the requirements rather than whether you know one canonical answer.
My Professional Cloud Architect course covers HA Interconnect topologies alongside the rest of the networking material.