
BigQuery compute pricing is one of the few topics on the Professional Data Engineer exam where you can lose points by knowing the product well at the surface level but not well enough at the billing level. The question stems sound like cost-optimization scenarios, but they're really testing whether you can match a workload profile to the right pricing model. I want to walk through the framing I use when I prep candidates for this part of the Professional Data Engineer blueprint.
The first thing the exam expects you to internalize is that BigQuery decouples storage from compute. Storage is billed on active and long-term tiers regardless of whether you run a single query against the data. Compute is billed only when a query runs, and only against the bytes read during execution. That separation is why a dormant table is cheap and why an unconstrained SELECT * against a wide partitioned table is the most common cost-blowup pattern.
The query lifecycle the exam wants you to recognize goes from query initiation, to compute evaluation, to bytes read, to processing, to result. Costs are incurred at the bytes-read stage, not at submission. That distinction matters because it justifies two estimation tools you should know by name: a bq command-line dry run, and the byte-preview indicator next to the Run button in the BigQuery UI. Both estimate the bytes a query would scan without charging you. Expect a question where the right answer is dry-running the query before letting an analyst execute it on a billion-row table.
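To make the estimation step concrete, here's a minimal sketch of turning a dry-run byte count into a rough on-demand cost. The dry run itself can come from `bq query --dry_run` on the command line; the per-TiB rate below is an illustrative placeholder, not a quoted price, so check the current pricing page for your region.

```python
# Minimal sketch: convert a dry-run byte estimate into a rough
# on-demand cost. The default rate is an illustrative placeholder.

TIB = 1024 ** 4  # bytes per tebibyte

def estimate_on_demand_cost(bytes_scanned: int, usd_per_tib: float = 6.25) -> float:
    """Rough on-demand charge for a query, given its dry-run byte count."""
    return (bytes_scanned / TIB) * usd_per_tib

# A dry run reporting ~2.5 TiB scanned:
print(f"${estimate_on_demand_cost(int(2.5 * TIB)):.2f}")
```

This is the habit the exam rewards: estimate first, then decide whether the query should run at all.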
A slot is a chunk of CPU and memory that BigQuery uses to execute a piece of a query plan. Slots are allocated dynamically. BigQuery evaluates the query, pulls slots from a pool, processes the data, and can recruit more slots or release them mid-execution as the plan demands. You don't manage slots per query manually, but you do choose how slots are paid for, and that choice is the whole point of the pricing-model question on the Professional Data Engineer exam.
Two facts about slots that show up in stems: first, slot allocation is dynamic, so BigQuery can recruit extra slots or release them mid-execution without any action from you; second, you never assign slots to an individual query, you only choose the pricing model that pays for them.
On-demand is the default. You pay per query based on the bytes processed. There is no upfront commitment, no reservation, and no minimum. This is the right answer when the stem describes ad-hoc analytics, exploratory work, unpredictable workloads, or a small team that runs queries irregularly. If the question mentions a data science team running occasional jobs against a few datasets and the company doesn't want to commit to capacity, on-demand is what they want.
Where on-demand starts to break down is when the bytes-scanned bill becomes both large and predictable. Once you can forecast your monthly compute, paying per byte is almost always more expensive than reserving slots.
Capacity-based pricing flips the unit. Instead of paying for bytes read, you pay for slot-hours. You're buying compute time directly. Capacity pricing comes in editions, and the exam expects you to distinguish them.
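To see why predictability flips the math, here's a back-of-the-envelope comparison. Both rates are illustrative assumptions rather than list prices, and the comparison only holds if the reserved capacity actually covers the workload:

```python
# Sketch: forecast monthly on-demand spend vs. a flat slot reservation.
# Rates are illustrative placeholders, not current list prices.

def monthly_on_demand(tib_scanned_per_month: float, usd_per_tib: float = 6.25) -> float:
    """Per-byte billing: pay for what each query scans."""
    return tib_scanned_per_month * usd_per_tib

def monthly_reserved(slots: int, usd_per_slot_hour: float = 0.06,
                     hours_per_month: float = 730.0) -> float:
    """Capacity billing: pay for slot-hours, regardless of bytes scanned."""
    return slots * usd_per_slot_hour * hours_per_month

# A team scanning ~2,000 TiB/month vs. a 100-slot reservation:
print(monthly_on_demand(2000.0))  # per-byte bill in USD
print(monthly_reserved(100))      # slot-hour bill in USD
```

With numbers like these, the reserved bill comes out far lower, which is exactly the tipping point the exam wants you to spot once the stem says the workload is forecastable.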
Standard Edition is the lightweight option. It's effectively on-demand pricing rebilled as slot-hours, with autoscaling up to 1,600 slots. There are no baseline slots, which means everything you use is autoscaled and you can't reserve fixed capacity in advance. There are no commitment plans either, so you pay as you go for the slot-hours you consume. Standard Edition carries a 99.9 percent SLO and fits medium-predictability workloads where you want capacity-style billing without locking anything in.
Enterprise Edition is where reservations become real. You can configure a baseline of dedicated slots that are always available, and stack autoscaling slots on top of that baseline to handle spikes. Capacity is higher than Standard and is governed by location-based quota that you can request increases against. Enterprise also offers one-year and three-year commitment plans with discounted pricing, and the SLO is 99.99 percent. This is the right answer when the stem describes large, predictable workloads, mission-critical pipelines that need an availability guarantee, or an org that wants centralized slot management across multiple teams.
Enterprise Plus exists above Enterprise for the most demanding workloads with the strictest compliance and reliability requirements. On the Professional Data Engineer exam, the Standard versus Enterprise distinction is the one that drives most question stems, but knowing Enterprise Plus sits at the top is enough.
When you read a pricing question, look for three signals in the stem: how large and predictable the workload is, whether the work is ad-hoc and exploratory or continuous and mission-critical, and whether the scenario mentions commitments, discounts, or long-term value.
A useful mental check: if the workload is large, predictable, and continuous, and the question mentions discounts or long-term value, Enterprise Edition with baseline plus autoscaling on a commitment plan is almost always the answer. If the workload is small, sporadic, or exploratory, on-demand wins. Standard Edition lives in the middle and shows up when the stem rules out commitments but still wants the capacity model.
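That mental check can be written down as a tiny decision function. The signal names here are my own labels for the cues in the stems, not official terminology:

```python
# Sketch: the exam's pricing-model decision logic as code.
# Signal names are informal labels, not official Google terminology.

def pick_pricing_model(predictable: bool, large: bool,
                       wants_commitment: bool) -> str:
    if large and predictable and wants_commitment:
        return "Enterprise Edition (baseline + autoscale, commitment plan)"
    if predictable and not wants_commitment:
        return "Standard Edition (capacity billing, no commitment)"
    return "On-demand (pay per byte scanned)"

# An ad-hoc exploratory team with no appetite for commitments:
print(pick_pricing_model(predictable=False, large=False, wants_commitment=False))
```

It's deliberately crude, but if you can reproduce this branching from a stem, you can answer the pricing-model questions quickly.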
Alongside the pricing models, keep the dry-run and byte-preview tools in mind. They show up in optimization scenarios where the question isn't asking you to pick a model but to prevent a runaway bill on the existing one.
My Professional Data Engineer course covers BigQuery compute pricing in the same way the exam frames it, walking through slot mechanics, the on-demand cap, the edition tradeoffs, and the scenario language to listen for so you can match the model to the workload without second-guessing.