
When I teach load balancing for the Professional Cloud Architect exam, two related concepts come up constantly: the unified front-end and path-based routing. They show up together in scenario questions because they solve the same class of problem: you have multiple backends and you want clients to see one address.
The idea is straightforward. A load balancer sits in front of your backends and presents a single entry point to the world. That means one DNS name, one IP address, and one SSL certificate. Clients hit example.com on 203.0.113.10 and the load balancer takes care of distributing traffic to whatever sits behind it.
What sits behind it can be anything. Three versions of an API, a fleet of microservices, instance groups in different regions. The clients do not need to know. They just see one address.
This matters for a few practical reasons. First, you only manage one SSL certificate instead of one per backend. Second, when you deploy a new backend version or shift traffic between regions, clients do not need to reconfigure anything. Third, your security posture is simpler because there is one ingress point to monitor and protect.
The unified front-end gives you one entry point. Path-based routing is what makes that entry point useful when you have more than one backend service. The load balancer examines the URL path on each incoming request and forwards it to the appropriate backend based on routing rules you define.
So a request to example.com/api/v1 goes to the v1 backend. A request to example.com/api/v2 goes to the v2 backend. A request to example.com/api/users routes to the user service. Same hostname, same IP, same certificate, different backends.
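The routing logic itself is simple enough to sketch. This is an illustrative model, not GCP's actual implementation: a routing table maps URL path prefixes to backend names, the longest matching prefix wins, and unmatched paths fall through to a default backend. All backend names here are hypothetical.

```python
# Toy sketch of Layer 7 path-based routing: longest-prefix match
# against a table of path rules, with a default backend as fallback.
ROUTES = {
    "/api/v1": "api-v1-backend",
    "/api/v2": "api-v2-backend",
    "/api/users": "user-service",
}
DEFAULT_BACKEND = "web-backend"

def route(path: str) -> str:
    """Pick the backend whose prefix is the longest match for the path."""
    matches = [p for p in ROUTES if path == p or path.startswith(p + "/")]
    if not matches:
        return DEFAULT_BACKEND
    return ROUTES[max(matches, key=len)]

print(route("/api/v1/orders"))    # api-v1-backend
print(route("/api/users/42"))     # user-service
print(route("/static/logo.png"))  # web-backend
```

Longest-prefix matching matters once rules overlap: with both /api and /api/users defined, a request to /api/users/42 should hit the more specific rule, not the general one.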
One detail that matters for the Professional Cloud Architect exam: path-based routing operates at Layer 7 of the OSI model. It needs to read the HTTP request to see the path, which means only HTTP(S) load balancers can do this. Network load balancers and other Layer 4 options cannot inspect URL paths because they do not parse HTTP. If a question asks you to route based on path and offers a TCP or network load balancer as an answer, that answer is wrong.
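To see why this is a Layer 7 feature, note where the path actually lives: inside the HTTP request line, which a Layer 4 balancer forwards as opaque TCP bytes without ever parsing. A minimal illustrative parse (not a full HTTP implementation) looks like this:

```python
# The URL path only exists after parsing the HTTP request line,
# e.g. b"GET /api/v2/users HTTP/1.1". A Layer 4 load balancer never
# performs this parse, so it cannot route on the path.
def path_from_request(raw: bytes) -> str:
    """Extract the URL path from the first line of an HTTP/1.1 request."""
    request_line = raw.split(b"\r\n", 1)[0]
    method, target, version = request_line.split(b" ")
    return target.decode()

print(path_from_request(b"GET /api/v2/users HTTP/1.1\r\nHost: example.com\r\n\r\n"))
# → /api/v2/users
```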
There are three patterns that come up frequently in scenario questions, and each one maps to the same solution.
The first is managing multiple API versions. A company wants to roll out a new API while keeping the old version accessible for existing clients. The solution is one load balancer with path-based routing. /api/v1 goes to the old backend, /api/v2 goes to the new one. No DNS changes, no separate certificates, no client coordination.
The second is multi-region microservices. A global gaming company wants players worldwide to hit a single IP address, with backend microservices deployed in different regions. The answer is a global load balancer for the unified front-end, with path rules routing to the right microservice. The global load balancer also handles geographic proximity so users get routed to the closest backend.
The third is a mobile app update. A company is updating its mobile app to use a new API but still needs the legacy API available during the transition. One load balancer handles both APIs based on path. No DNS records change, no SSL certificate changes, and the rollout is invisible to users on either version of the app.
The pattern across all three is the same. Multiple backends, single front-end, path-based rules to route between them. When you see a scenario where someone needs to support more than one backend behind a single address, this is what you reach for.
My Professional Cloud Architect course covers path-based routing and unified front-ends alongside the rest of the networking material.