
Cloud Firewall sits at the heart of how Google Cloud controls traffic into and out of your resources, and it shows up everywhere on the Professional Cloud Architect exam. I want to walk through how firewall rules actually work, why network tags exist, and the patterns the exam expects you to recognize.
Cloud Firewall lets you define and enforce rules that control network traffic to and from your resources. The primary targets are VMs, both standalone instances and the VMs underneath managed services such as Dataproc. It also applies to load balancers, GKE clusters, and Cloud SQL instances. Rules govern both ingress (incoming) and egress (outgoing) traffic.
If you treat Cloud Firewall as a VM-only feature, you will miss questions where the resource in play is a managed service that happens to run on VMs underneath. The exam wants you to know the surface area is broader than just Compute Engine.
Rules can match on protocols, IP ranges, ports, network tags, or identity (service accounts). Each rule has a priority attribute that determines evaluation order. Priorities range from 0 to 65535, and lower numbers mean higher priority. A rule with priority 100 is evaluated before a rule with priority 200.
Consider two rules: a priority-100 rule that allows incoming TCP traffic from IP range X, and a priority-1000 rule that denies SSH traffic from anywhere.
Incoming TCP traffic from range X is allowed by the first rule and never reaches the second. SSH traffic that does not match range X falls through to the deny rule and gets blocked. The ordering matters because the higher-priority allow lets some SSH traffic through if it happens to come from range X over TCP, even though a lower-priority rule would otherwise deny all SSH.
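A sketch of what that rule pair might look like with gcloud. The rule names and the 203.0.113.0/24 range are placeholders, not anything from the scenario:

```shell
# Higher priority (lower number): allow TCP from range X (placeholder range).
gcloud compute firewall-rules create allow-tcp-from-range-x \
    --direction=INGRESS --action=ALLOW --rules=tcp \
    --source-ranges=203.0.113.0/24 --priority=100

# Lower priority: deny SSH from anywhere.
gcloud compute firewall-rules create deny-all-ssh \
    --direction=INGRESS --action=DENY --rules=tcp:22 \
    --source-ranges=0.0.0.0/0 --priority=1000
```

With both rules in place, SSH from inside range X matches the priority-100 allow first; SSH from anywhere else falls through to the priority-1000 deny.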
This layering is the whole point. You start with broad rules at lower priority and write more specific rules at higher priority to handle exceptions and tune access control. On the exam, when a question asks why traffic is reaching a VM that "should be blocked," the answer is almost always a higher-priority allow rule overriding a broader deny.
When instances and services have service accounts associated with them, you can write firewall rules that match on those service accounts instead of IPs. This is identity-based access control at the network layer.
Imagine a Cloud Run service running as Service Account A, VM Instance 1 running as Service Account B, and VM Instance 2 running as Service Account C. A firewall rule that says "deny if source = Service Account B and target = Service Account C" blocks VM Instance 1 from reaching VM Instance 2, regardless of what IP either VM has at the moment. The Cloud Run service running as Service Account A is unaffected because the rule does not mention it.
This matters because IPs change. Service accounts do not. If you want a security policy that survives autoscaling, instance replacement, and IP churn, identity-based rules are the durable choice. The Professional Cloud Architect exam likes this pattern when the scenario involves zero-trust thinking or least-privilege between services.
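A hedged sketch of the deny rule from the example above, expressed with gcloud. The service-account emails and project name are assumptions:

```shell
# Placeholder service-account emails; substitute your project's accounts.
# Blocks anything running as Service Account B from reaching
# anything running as Service Account C, regardless of IP.
gcloud compute firewall-rules create deny-b-to-c \
    --direction=INGRESS --action=DENY --rules=all \
    --source-service-accounts=sa-b@my-project.iam.gserviceaccount.com \
    --target-service-accounts=sa-c@my-project.iam.gserviceaccount.com \
    --priority=100
```

One design constraint worth knowing: a single firewall rule cannot mix service-account matching with network-tag matching, so you pick one identity scheme per rule.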
This is a small detail that comes up on the exam. Firewall rule logging is disabled by default. To enable it, open the configuration page for the rule, find the "Logs" section, and set Logs to "On." From the command line:
gcloud compute firewall-rules update <firewall-rule-name> --enable-logging
Once enabled, the rule logs connections that match it, which is what you need for troubleshooting, auditing, and forensic analysis. If a question describes someone being unable to figure out which traffic is hitting which rule, the gap is almost always that logging was never turned on.
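Once logging is on, the entries land in Cloud Logging rather than in the firewall rule itself. A sketch of pulling them from the command line, assuming the standard firewall log name:

```shell
# Firewall rule logs are written against the gce_subnetwork resource;
# the filter below is illustrative.
gcloud logging read \
    'resource.type="gce_subnetwork" AND logName:"compute.googleapis.com%2Ffirewall"' \
    --limit=10
```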
Network tags are labels you assign to VM instances. On their own they do nothing, but firewall rules can reference them, which lets you write rules that target groups of VMs instead of specific IPs.
The reason this exists is dynamic environments. If you scale a service up or down, instances come and go and their IPs change. Writing firewall rules against IPs in that world is a maintenance nightmare. Tag the VMs as "web-server" or "database-tier" and write rules against the tag, and any new instance that gets the tag automatically inherits the rules. Any instance you remove the tag from drops out of the scope. No firewall configuration changes required.
Take three VMs. VM1 and VM2 carry the tag "web-server." VM3 carries the tag "test-server." A firewall rule allows HTTP and HTTPS traffic to anything tagged "web-server."
An incoming HTTPS request to VM1 or VM2 matches the rule and is allowed. The same request to VM3 does not match because VM3's tag is "test-server," and so the request hits the default deny on inbound traffic and is blocked. If you later decide VM3 should accept HTTPS, you add the "web-server" tag to it and the rule applies automatically. You did not touch the firewall configuration at all.
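The scenario above maps to two commands. The rule name, zone, and instance name are placeholders:

```shell
# Allow HTTP and HTTPS to any instance tagged web-server.
gcloud compute firewall-rules create allow-web \
    --direction=INGRESS --action=ALLOW --rules=tcp:80,tcp:443 \
    --target-tags=web-server

# Later, bring VM3 into scope by tagging it; no firewall change needed.
gcloud compute instances add-tags vm3 --zone=us-central1-a --tags=web-server
```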
This is the flexibility tags buy you. The same pattern works for tiers, environments, regions, or any other grouping you care about.
Picture a VPC running a microservices architecture. Microservice-a, microservice-b, and microservice-c each run as multiple VMs, and the count goes up and down with traffic. The architectural question is how to control inter-service communication when IPs are not stable.
The answer is to tag every VM with its service name and write firewall rules between tags. A rule that says "allow microservice-a to reach microservice-b on port 8080" applies automatically to every VM tagged microservice-a as the source and every VM tagged microservice-b as the destination, no matter how many of them exist at any moment. The firewall configuration stays static while the underlying fleet scales.
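The microservice-a-to-microservice-b rule described above could be sketched like this, with the rule name as an assumption:

```shell
# Every VM tagged microservice-a may reach every VM tagged
# microservice-b on port 8080, however many instances exist.
gcloud compute firewall-rules create allow-a-to-b \
    --direction=INGRESS --action=ALLOW --rules=tcp:8080 \
    --source-tags=microservice-a --target-tags=microservice-b
```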
This is the kind of scenario the Professional Cloud Architect exam likes. When a question describes scaling microservices and asks how to manage traffic without static IPs, network tags with firewall rules is the expected answer.
The other classic scenario is a three-tier application. UI tier on top, business logic tier in the middle, data storage tier at the bottom. You tag the VMs in each tier as "ui-tier," "business-tier," and "data-tier" respectively.
The firewall rules then enforce the tier boundaries: the UI tier can reach only the business tier, the business tier can reach the data tier, and no rule allows the UI tier to reach the data tier directly.
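A hedged sketch of those tier rules in gcloud terms; the rule names and port numbers are assumptions for illustration:

```shell
# UI tier may reach the business tier on an assumed app port.
gcloud compute firewall-rules create allow-ui-to-business \
    --direction=INGRESS --action=ALLOW --rules=tcp:8080 \
    --source-tags=ui-tier --target-tags=business-tier

# Business tier may reach the data tier on an assumed database port.
gcloud compute firewall-rules create allow-business-to-data \
    --direction=INGRESS --action=ALLOW --rules=tcp:5432 \
    --source-tags=business-tier --target-tags=data-tier

# No rule allows ui-tier to reach data-tier, so the implied
# deny on ingress blocks that path.
```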
This forces all data access to go through the business logic layer, which is what proper tier separation is supposed to enforce. New VMs added to any tier inherit the tag and the rules apply without any manual updates. Removed VMs drop out cleanly.
You should recognize this pattern on sight. If a Professional Cloud Architect exam question describes a tiered architecture and asks how to enforce that the UI cannot reach the database directly, network tags with appropriate firewall rules is the answer.
The mental model worth carrying into the exam: firewall rules match on protocol, IP, port, tag, or service account. Priority is numeric, lower wins, and higher-priority allows override lower-priority denies. Logging is off by default. Network tags decouple firewall rules from IPs so the rules survive autoscaling and instance churn. Service-account-based rules give you identity-level control that does not break when IPs change.
If you internalize those facts, the firewall questions on the exam stop being a memorization exercise and start being a recognition exercise.
My Professional Cloud Architect course covers Cloud Firewall rules and network tags alongside the rest of the networking material.