
Cloud Run has a security gap that the Professional Cloud Architect exam likes to set up as a trick scenario. You stand up a service, put a load balancer in front of it, attach Cloud Armor to the load balancer, and assume your traffic is now protected. It is not. Every Cloud Run service has two entry points by default, and the second one bypasses everything you just configured. The fix is a single ingress setting, and the exam expects you to know which setting and why.
When you deploy a Cloud Run service, Google automatically assigns it a default URL on the run.app domain, something like https://myservice-abc123-uc.a.run.app. That URL is public and routable from anywhere on the internet, and it points directly at your service.
The standard pattern for protecting a Cloud Run service is to put an external HTTPS load balancer in front of it. You configure a custom domain like api.mydomain.com that routes to the load balancer, and you attach a Cloud Armor security policy to the load balancer to handle things like IP allowlisting, rate limiting, and DDoS protection. The architecture diagram looks clean. Traffic flows from users to the custom domain to the load balancer with Cloud Armor and then to Cloud Run.
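As a rough sketch, that wiring looks like this with the gcloud CLI. All names here (myservice, my-armor-policy, the region) are placeholders, and the surrounding URL map and forwarding rule are omitted for brevity:

```shell
# Create a serverless NEG that points the load balancer at the Cloud Run service
gcloud compute network-endpoint-groups create myservice-neg \
  --region=us-central1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=myservice

# Create a global backend service and add the NEG as its backend
gcloud compute backend-services create myservice-backend --global
gcloud compute backend-services add-backend myservice-backend \
  --global \
  --network-endpoint-group=myservice-neg \
  --network-endpoint-group-region=us-central1

# Attach the Cloud Armor security policy to the backend service
gcloud compute backend-services update myservice-backend \
  --global \
  --security-policy=my-armor-policy
```

Note that the Cloud Armor policy attaches to the load balancer's backend service, not to Cloud Run itself, which is exactly why it only sees traffic that arrives via the load balancer.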
The problem is that the default run.app URL is still live. A user who knows or guesses that URL can hit it directly and reach your Cloud Run service without going through the load balancer at all. Cloud Armor only evaluates traffic that arrives at the load balancer, so a request that lands on the run.app URL is never inspected. Your IP allowlist does not apply. Your rate limits do not apply. The DDoS posture you configured is not in the path.
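You can see the bypass with two requests; the hostnames below are hypothetical stand-ins for a custom domain and a default Cloud Run URL:

```shell
# Path 1: through the load balancer. Cloud Armor evaluates this request.
curl https://api.mydomain.com/

# Path 2: direct to the default URL. With ingress left at "all",
# this also succeeds, and Cloud Armor never sees it.
curl https://myservice-abc123-uc.a.run.app/
```

With the default ingress setting, both requests return the same response from the same service, but only the first one passed through your security policy.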
This is the configuration the Professional Cloud Architect exam tests. The scenario describes a team that has set up a load balancer with Cloud Armor, and the question asks why traffic is reaching the service without being filtered, or how to ensure all traffic is evaluated by the security policy.
Cloud Run exposes an ingress setting on each service that controls which sources are allowed to reach it. The setting has three values, and the one that solves this problem is "Internal and Load Balancer Only."
When you set ingress to "Internal and Load Balancer Only," Cloud Run stops accepting traffic on the default run.app URL from the public internet. The only traffic that gets through is traffic that arrived through a load balancer in your project, plus traffic from internal sources like other services in your VPC. Direct hits on the run.app URL from outside the load balancer path are rejected.
That single change closes the gap. Every external request now has to go through the load balancer, and every request that goes through the load balancer is evaluated by Cloud Armor. The default URL is still technically there, but it is no longer a way around your security policy.
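In the gcloud CLI, the setting is the `--ingress` flag on the service, where the console's "Internal and Load Balancer Only" option corresponds to the value `internal-and-cloud-load-balancing`. Service name and region below are placeholders:

```shell
# Restrict the service so only internal sources and load balancer
# traffic are accepted; direct public hits on run.app are rejected
gcloud run services update myservice \
  --region=us-central1 \
  --ingress=internal-and-cloud-load-balancing
```

After this change, a direct request to the run.app URL from the public internet returns an error instead of reaching your service.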
The Professional Cloud Architect exam does not always use the exact phrase "Internal and Load Balancer Only," but the scenario shape is recognizable. You will see something like this. A team has deployed a Cloud Run service that processes sensitive data. They have configured an external load balancer with a Cloud Armor policy that blocks specific IP ranges and enforces rate limits. They notice that some requests are reaching the service without being filtered by the policy. What is the most likely cause, or what configuration change should they make?
The cause is that the default run.app URL is still accessible. The fix is to set Cloud Run ingress to allow only internal traffic and load balancer traffic. Any answer that involves changing Cloud Armor rules, modifying the load balancer configuration, or adjusting the custom domain is missing the actual problem, which is that traffic is not flowing through the load balancer at all.
The other two ingress values are worth knowing for context. "All" is the default, and it lets any source reach the service, including the public run.app URL. This is the setting that creates the bypass problem, and it is the setting you would change away from when you put a load balancer in front of the service.
"Internal" allows only traffic from inside your VPC and from other Google Cloud services in your project. It does not allow load balancer traffic from the external HTTPS load balancer, so it is too restrictive for a service that needs to handle public requests routed through Cloud Armor. "Internal" is the right choice for backend services that should never be reachable from the internet, but it is wrong for the public-facing scenario the exam usually describes.
"Internal and Load Balancer Only" sits in the middle. It blocks public access to the run.app URL while still permitting load balancer traffic, which is the combination you want when Cloud Armor is the security boundary.
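The three console options map to three values of the same `--ingress` flag, shown here as a sketch with placeholder names:

```shell
# "All" (the default): any source, including the public run.app URL
gcloud run services update myservice --region=us-central1 --ingress=all

# "Internal": VPC and internal Google Cloud sources only; external
# load balancer traffic is blocked, so Cloud Armor can't front it
gcloud run services update myservice --region=us-central1 --ingress=internal

# "Internal and Load Balancer Only": the middle ground the exam wants
gcloud run services update myservice --region=us-central1 \
  --ingress=internal-and-cloud-load-balancing

# Inspect a service's current ingress setting via its annotations
gcloud run services describe myservice --region=us-central1 \
  --format="yaml(metadata.annotations)"
```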
When a Professional Cloud Architect exam question describes a Cloud Run service behind a load balancer with Cloud Armor, I run through a quick check. Is the scenario complaining that security policies are being bypassed, or that some requests are not being filtered? If either is the complaint, the answer is almost always about ingress control on Cloud Run, not about the load balancer or the Cloud Armor policy itself. The control point that prevents the bypass lives on the Cloud Run service, and the value that closes it is "Internal and Load Balancer Only."
If the scenario instead describes a service that should be reachable only from inside the VPC, the answer shifts to "Internal." If the scenario does not mention a load balancer at all and the service needs to be public, "All" is fine and the exam is probably testing something else.
If you want to walk through Cloud Run ingress control alongside the rest of the advanced architecture material, my Professional Cloud Architect course covers the full set of patterns the exam expects you to recognize.