
One of the small but recurring questions on the Google Cloud Professional Data Engineer exam is how BigQuery actually bills you for storing data over time. It is not a single flat rate. BigQuery splits storage into two tiers, Active and Long-term, and it moves your tables between them on its own. If you do not understand the rules, you can miss easy points on the exam and overpay on real workloads.
Here is how I explain it to candidates preparing for the Professional Data Engineer cert, and how I would expect a question on it to look.
When you load a table into BigQuery, regardless of how you load it, the data lands in Active storage. That is the default state for anything fresh.
If that table then goes 90 consecutive days without being modified, meaning no load jobs, no DML statements, no streaming inserts, and no query results written into it, BigQuery automatically transitions it to Long-term storage. There is no button to press, no lifecycle policy to configure, and no separate bucket to set up. The transition happens behind the scenes. Note the trigger carefully: only writes count. Merely querying the table does not keep it in Active storage.
The important detail for the exam is what changes and what does not when this happens: the storage price drops by roughly half (on the order of $0.02 down to $0.01 per GB per month for logical storage), and nothing else does. Query performance, durability, and availability are identical.
If the table is modified again, by a load job, a DML update or delete, a streaming insert, or query results written into it, BigQuery moves it back into Active storage. The 90-day clock resets to day one, and you pay the active rate until another 90 days without modification pass. Reading the table, on the other hand, never resets the clock; treating a query as a reset is a classic exam trap.
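The rule is simple enough to sketch in a few lines of Python. The 90-day threshold is the documented cutoff; the function and variable names here are mine, purely for illustration:

```python
LONG_TERM_THRESHOLD_DAYS = 90  # BigQuery's documented cutoff

def storage_tier(days_since_last_modification: int) -> str:
    """Return the tier BigQuery bills a table (or partition) at.

    Only modifications (load jobs, DML, streaming inserts, query
    results written into the table) reset this counter; reads do not.
    """
    if days_since_last_modification >= LONG_TERM_THRESHOLD_DAYS:
        return "LONG_TERM"
    return "ACTIVE"

print(storage_tier(89))   # ACTIVE
print(storage_tier(90))   # LONG_TERM, transition is automatic
print(storage_tier(400))  # LONG_TERM, stays there until the next write
```

Any write resets the counter to zero, which in this model simply means you start calling the function with small numbers again.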
This is the trick I see show up in practice questions most often. The 90-day rule does not just apply at the table level. It applies at the partition level for partitioned tables.
So if you have a date-partitioned table where the last 30 days are queried constantly and the partitions from a year ago are never touched, you will end up with a single table whose recent partitions are in Active storage and whose older partitions are in Long-term storage. The pricing reflects that automatically. You do not have to split the table or move data anywhere.
If a job writes into a year-old partition, say a backfill DML statement, only that partition's clock resets. The other cold partitions stay in Long-term, and merely querying them does not pull them back into Active. This is why partitioning is so useful for cost control on append-heavy tables like event logs or transactional history.
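You can picture the per-partition billing with a toy model. The partition names and ages below are made up; the per-partition 90-day clock is the documented behavior:

```python
LONG_TERM_THRESHOLD_DAYS = 90

def partition_tiers(days_since_modified: dict) -> dict:
    """Each partition carries its own 90-day clock, so one table can be
    billed at a mix of Active and Long-term rates simultaneously."""
    return {
        partition: ("LONG_TERM" if days >= LONG_TERM_THRESHOLD_DAYS else "ACTIVE")
        for partition, days in days_since_modified.items()
    }

# An append-only events table: recent partitions written daily,
# old partitions never touched after their day passes.
tiers = partition_tiers({
    "events_20240101": 400,  # cold: billed at the Long-term rate
    "events_20241115": 60,   # still inside the 90-day window
    "events_20250110": 1,    # written yesterday
})
print(tiers)
```

Note that backfilling `events_20240101` would flip only that one entry back to `ACTIVE`; the rest of the table is unaffected.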
The other half of the exam framing is the cost comparison to Cloud Storage. You should know these rough equivalences:
BigQuery Active storage is priced roughly in line with Cloud Storage Standard.
BigQuery Long-term storage is priced roughly in line with Cloud Storage Nearline.
Notice what is not on that list. BigQuery does not have a tier that maps to Coldline or Archive. So if you have data you are confident you will not touch for years, and you do not need to query it directly, it is usually cheaper to park it in Cloud Storage under Coldline or Archive than to leave it sitting in BigQuery, even in Long-term.
The decision is not purely about cost though. If you want SQL access to the data without going through an export-import dance, BigQuery is the right home even if the data is cold. If you truly never need to query it and only need to retain it for compliance or backup purposes, Cloud Storage Archive is the cheapest path.
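To make the Coldline/Archive argument concrete, here is a back-of-the-envelope comparison. The per-GB monthly figures are illustrative numbers from memory, not a pricing reference; they vary by region and change over time, so check the current pricing pages before deciding anything real:

```python
# Illustrative USD per GB per month figures (assumptions, not quotes).
PRICE_PER_GB_MONTH = {
    "bigquery_active":    0.020,
    "bigquery_long_term": 0.010,
    "gcs_standard":       0.020,
    "gcs_nearline":       0.010,
    "gcs_coldline":       0.004,
    "gcs_archive":        0.0012,
}

def monthly_cost(tier: str, gb: float) -> float:
    """Monthly storage bill for `gb` gigabytes at the given tier."""
    return PRICE_PER_GB_MONTH[tier] * gb

# 10 TB of compliance data you will not touch for years:
for tier in ("bigquery_long_term", "gcs_coldline", "gcs_archive"):
    print(f"{tier}: ${monthly_cost(tier, 10_000):,.2f} per month")
```

Even at these rough numbers, Archive comes out nearly an order of magnitude cheaper than BigQuery Long-term for data you never query, which is exactly the trade-off the exam wants you to recognize.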
If a Professional Data Engineer question describes a table that has gone unmodified for several months and asks why the storage cost is lower than expected, the answer is the automatic transition to Long-term storage after 90 days without modification.
If the question describes a partitioned table where some old partitions are cheap and recent ones are not, the answer is that the 90-day rule applies per partition.
If the question asks where to store data that will be untouched for years, the answer is Cloud Storage Coldline or Archive, not BigQuery Long-term, because BigQuery does not go that cold.
And if a question tries to trick you into thinking Long-term storage means slower queries or a restore step, remember that the only thing that changes is the bill. Access patterns and latency are identical.
My Professional Data Engineer course covers BigQuery storage tiers, partitioning, and the full set of cost-optimization patterns you need for the exam.