
BigQuery pricing surprises people because it has two completely independent meters running at the same time. You pay for queries based on how much data they read, and you pay for storage based on how much data you keep around. The two bills do not interact in the way most database pricing does, and the Professional Cloud Architect exam expects you to know exactly where each cost is incurred.
I want to walk through the pieces of BigQuery pricing that come up on the Professional Cloud Architect exam, starting with how queries are billed and ending with the storage tier transition that catches most people off guard.
When you run a query, BigQuery charges you for the bytes read during query execution. Nothing else in the query lifecycle costs money. You are not charged for submitting the query, for the engine planning and optimizing it, for the processing step that crunches the data, or for returning the result to you. The bill is tied to one specific moment in the pipeline: the bytes read from storage.
This matters because it shapes how you write queries. If your query scans 100 GB to answer a question that only needs 1 GB, you paid for 100 GB. The cost is decoupled from the size of the result. A query that returns a single row can still process terabytes if it is written carelessly.
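To make that concrete, here is a minimal sketch of the on-demand billing math. The $6.25 per TiB rate is an assumption (the US multi-region list price at the time of writing); real rates vary by region, so treat the number as illustrative.

```python
# Sketch: estimate on-demand query cost from bytes scanned.
# The rate below is an assumption; check current regional pricing.

PRICE_PER_TIB = 6.25  # USD per TiB scanned (assumed list price)

def query_cost_usd(bytes_scanned: int, price_per_tib: float = PRICE_PER_TIB) -> float:
    """Cost is driven by bytes read, not by the size of the result."""
    tib = bytes_scanned / 2**40
    return tib * price_per_tib

# A query that scans 100 GiB costs the same whether it returns
# one row or a million rows.
print(round(query_cost_usd(100 * 2**30), 4))  # → 0.6104
```

The point the function makes is the one from the paragraph above: the result size never appears in the formula, only the bytes scanned.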
And critically, this query bill is separate from your storage bill. Storing data in BigQuery costs money on its own, regardless of whether anyone ever queries it.
Because you only find out the cost after the bytes have been read, BigQuery gives you two ways to preview the size of a query before you run it.
If you are using the bq command-line tool, you can perform a dry run by passing the --dry_run flag to bq query. The dry run returns an estimate of how many bytes the query would process, without actually executing the query or charging you anything. You get the size estimate, and that is it.
If you are working in the BigQuery web UI, the estimate appears automatically. As you type the query, BigQuery shows you the projected number of bytes the query will process, displayed near the Run button. You can see the cost coming before you commit to running it.
Both options are useful for cost control, especially when you are exploring a dataset for the first time and have no intuition for how much data your query will touch.
A slot is BigQuery's unit of compute. Each slot represents a chunk of CPU and memory that BigQuery uses to execute part of a query. When you submit a query, BigQuery evaluates the compute it needs, pulls the right number of slots from the available pool, hands the data to those slots for processing, and returns the result.
Slot allocation is dynamic. While a query is running, BigQuery can recruit more slots if the workload spikes, or release slots if fewer are needed. You do not configure this. The engine handles it for you, which is part of why BigQuery is described as serverless even though there is real compute under the hood.
More slots generally mean faster queries, because more stages of the query run in parallel. Fewer slots mean slower queries. Note that slot count only affects your bill under capacity-based pricing, where you pay for the slots you commit to; under on-demand pricing the cost depends on bytes scanned, not on how many slots the query used.
BigQuery offers two pricing models for slots, and the Professional Cloud Architect exam expects you to know the difference.
On-demand is the default. You pay based on bytes processed by your queries, and BigQuery automatically allocates slots as needed. There is a default cap of 2,000 concurrent slots per project under on-demand pricing, which is generally plenty but can become a constraint for very large analytical workloads. On-demand makes sense when your query volume is unpredictable or relatively low.
Capacity-based pricing flips the model. You purchase a dedicated number of slots upfront, typically through reservations, and your queries draw from that pool regardless of how much data they process. If you have high or predictable usage, capacity-based pricing is more cost-effective because you are no longer paying per byte scanned.
The choice between them is a workload question. Bursty, exploratory analytics tends to favor on-demand. Steady, heavy usage favors capacity-based.
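The workload question can be sketched as a break-even calculation. Both rates below are assumptions for illustration ($6.25 per TiB on-demand, $0.04 per slot-hour for a reservation); real prices vary by region, edition, and commitment term.

```python
# Sketch: compare monthly cost of on-demand vs. a slot reservation.
# Both rates are assumed for illustration, not quoted prices.

ON_DEMAND_PER_TIB = 6.25  # USD per TiB scanned (assumed)
SLOT_HOUR_RATE = 0.04     # USD per slot-hour (assumed)
HOURS_PER_MONTH = 730

def on_demand_monthly(tib_scanned: float) -> float:
    return tib_scanned * ON_DEMAND_PER_TIB

def reservation_monthly(slots: int) -> float:
    # A reservation bills for the slots whether queries run or not.
    return slots * SLOT_HOUR_RATE * HOURS_PER_MONTH

def break_even_tib(slots: int) -> float:
    """TiB scanned per month above which the reservation is cheaper."""
    return reservation_monthly(slots) / ON_DEMAND_PER_TIB

# With a 100-slot reservation at these rates, scanning more than
# roughly 467 TiB per month makes capacity-based pricing cheaper.
print(round(break_even_tib(100), 1))
```

Below the break-even point, the reservation is paying for idle slots; above it, on-demand is paying a per-byte premium. That is the bursty-versus-steady distinction in numbers.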
This one trips people up, and it shows up on the exam regularly. When you query data that lives in one project from another project, the query cost is billed to the project where the query is executed, not the project where the data lives.
So if Project A holds the source dataset and a user in Project B runs a query against those tables, the compute cost goes on Project B's bill. Project A pays for storing the data. Project B pays for the bytes the query reads.
This decouples data ownership from analysis activity, which is useful for organizations that want a central data project shared across many analytical teams. Each team's project gets billed for its own queries, and the data project's bill stays focused on storage. Just make sure your IAM is set up to allow the cross-project access in the first place, and that your billing forecasts account for the split.
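The attribution rule can be sketched in a few lines: compute cost follows the project that ran the query, never the project that owns the data. The project names, job records, and rate below are all hypothetical.

```python
# Sketch: attribute on-demand query costs to the executing project.
# Job records and the rate are hypothetical; in practice this
# information would come from BigQuery's job metadata.

from collections import defaultdict

ON_DEMAND_PER_TIB = 6.25  # USD per TiB (assumed)

jobs = [
    # (project that ran the query, project that owns the data, bytes scanned)
    ("team-b-analytics", "central-data", 3 * 2**40),
    ("team-c-analytics", "central-data", 1 * 2**40),
    ("team-b-analytics", "central-data", 2 * 2**40),
]

bills = defaultdict(float)
for executing_project, data_project, bytes_scanned in jobs:
    # The data project is never charged for compute, only storage.
    bills[executing_project] += bytes_scanned / 2**40 * ON_DEMAND_PER_TIB

print(dict(bills))
# team-b-analytics is billed for 5 TiB of scans, team-c-analytics
# for 1 TiB, and central-data's bill shows no query compute at all.
```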
Storage in BigQuery has two tiers, and the transition between them is automatic.
When a table is first loaded into BigQuery, it lands in Active Storage. As long as the table keeps being modified, it stays there. If the table goes 90 consecutive days without being modified, BigQuery automatically transitions it to Long-term Storage. Querying the table does not reset this clock; only changes to the data do. The transition happens without any action on your part.
Long-term Storage is cheaper than Active Storage. The data stays just as accessible, with the same query performance and no change in latency. The only thing that changes is the per-GB storage rate. The lower price reflects the fact that the data is sitting cold rather than being actively used.
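Here is what the two-tier bill looks like in a short sketch. The per-GiB rates are assumptions (the commonly cited US logical-storage list prices, with Long-term at half the Active rate); check current regional pricing before relying on them.

```python
# Sketch: monthly storage bill across the two tiers.
# Both rates are assumed list prices, not quotes.

ACTIVE_PER_GIB = 0.02     # USD per GiB-month (assumed)
LONG_TERM_PER_GIB = 0.01  # USD per GiB-month (assumed; half the active rate)

def storage_monthly(active_gib: float, long_term_gib: float) -> float:
    return active_gib * ACTIVE_PER_GIB + long_term_gib * LONG_TERM_PER_GIB

# 500 GiB hot plus 2,000 GiB cold: the cold data costs the same
# as 1,000 GiB of hot data would.
print(storage_monthly(500, 2000))  # → 30.0
```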
If a table in Long-term Storage is modified again, it moves back to Active Storage and the 90-day clock resets. The cycle starts over.
The same logic applies to partitioned tables, but at the partition level. One partition might be hot and stay in Active Storage while another partition in the same table sits cold and transitions to Long-term Storage. Each partition is tracked individually. This is a useful detail to remember because it means partitioning gives you automatic storage tier optimization for free, with no manual archival policy to manage.
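The per-partition tracking can be sketched as a simple classifier over each partition's last-modified date. The partition names and dates are hypothetical; the 90-day threshold is the real rule.

```python
# Sketch: classify partitions into Active vs Long-term Storage
# using the 90-day rule. Partition names and dates are hypothetical.

from datetime import date, timedelta

THRESHOLD = timedelta(days=90)

def storage_tier(last_modified: date, today: date) -> str:
    # 90 consecutive days without modification moves a partition
    # to Long-term Storage automatically.
    return "long_term" if today - last_modified > THRESHOLD else "active"

today = date(2024, 6, 1)
partitions = {
    "events_20240530": date(2024, 5, 30),  # recently written: stays active
    "events_20240101": date(2024, 1, 1),   # untouched for months: long-term
}

for name, modified in partitions.items():
    print(name, storage_tier(modified, today))
```

One hot partition and one cold partition can coexist in the same table, each billed at its own tier, which is exactly the free archival behavior described above.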
The mental model the Professional Cloud Architect exam wants you to hold is that BigQuery has two independent bills. Compute is metered by bytes read at query time, billed to the project running the query, and shaped by whether you chose on-demand or capacity-based slots. Storage is metered by bytes held, with an automatic Active to Long-term transition after 90 days without modification that applies at the partition level.
If you can answer where the cost lands in any given scenario, you have the pricing material covered.
My Professional Cloud Architect course covers BigQuery pricing alongside the rest of the storage and analytics material.