
Cloud Storage on Google Cloud is more than a pile of buckets. The Professional Cloud Architect exam expects you to know the data management features that sit on top of object storage: lifecycle rules, object versioning, retention policies with bucket lock, and Autoclass. Each of these answers a different question. How do I cut storage cost over time? How do I recover from accidental overwrites? How do I prove to an auditor that data could not have been tampered with? How do I let Google handle storage class transitions for me?
I want to walk through each one in the order I think about them, and call out the interactions you need to remember for the exam.
Lifecycle rules let you automate what happens to objects in a bucket once they meet a condition. The two actions you care about are deletion and changing storage class. The most common condition is age, but you can also key off things like number of newer versions, creation date, or current storage class.
You can configure lifecycle rules in the Cloud Console or as a JSON file applied with gsutil or gcloud. A rule has two parts: a condition and an action. When the condition is true for an object, the action runs.
The action you should remember is SetStorageClass. It changes the storage class of an object that meets the condition. So a rule like "if the object is older than 30 days, set storage class to Nearline" automatically demotes data that has cooled off, without you running any scripts. The other action is Delete, which removes objects that hit the condition. Together these two actions cover most cost-optimization patterns: tier data down as it ages, and eventually delete it when it is no longer needed.
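To make that concrete, here is a minimal sketch of a lifecycle configuration that does both: demote to Nearline at 30 days, delete at 365. The file name lifecycle.json is my own choice; the condition and action fields are the ones the JSON format actually uses.

{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}

Apply it with:

gsutil lifecycle set lifecycle.json gs://[BUCKET_NAME]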
The reason this matters for the Professional Cloud Architect exam is that lifecycle rules are the cheapest answer when a question describes data that becomes less valuable over time and asks how to minimize storage cost. They are also the right answer when the access pattern is already known, because you can encode that pattern directly in a rule rather than letting the system infer it.
Object versioning is a different concern. It is not about cost, it is about not losing data when something gets overwritten or deleted.
You enable versioning at the bucket level. With gsutil:
gsutil versioning set on gs://[BUCKET_NAME]
Once versioning is on, two things change. First, if you upload an object with the same name as an existing one, the existing object is not overwritten. It becomes a noncurrent version, kept in the same bucket. Second, when you delete an object, the live version is not removed either; it becomes a noncurrent version. The object looks gone from a normal listing, but it is still recoverable.
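As a sketch of what recovery looks like (the object name and generation number here are made up): gsutil ls -a lists every version of an object along with its generation number, and copying a specific generation back over the live name restores it.

gsutil ls -a gs://[BUCKET_NAME]/report.csv
gsutil cp gs://[BUCKET_NAME]/report.csv#1640000000000000 gs://[BUCKET_NAME]/report.csv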
This is the feature you reach for when the requirement is recovery from accidental writes or deletes. It does not by itself control cost or how long old versions stick around, so you usually pair it with a lifecycle rule that deletes noncurrent versions older than some number of days; otherwise they accumulate, and are billed, forever.
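A minimal sketch of that cleanup rule, using the daysSinceNoncurrentTime condition so it only ever matches noncurrent versions:

{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"daysSinceNoncurrentTime": 30}
    }
  ]
}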
Retention policies look superficially similar to versioning because they also protect data, but the goal is the opposite. Versioning lets you keep a history. A retention policy enforces that objects cannot be deleted or replaced for a defined period. It is a compliance feature.
You set a retention policy on the bucket. The policy specifies a duration. During that duration, no object in the bucket can be deleted or overwritten, including by the bucket owner. The policy applies retroactively, so it covers existing objects as well as anything new written into the bucket. Each object effectively has a clock starting at its creation time, and it cannot be removed until that clock has run out.
Then there is bucket lock. A retention policy on its own can be removed or shortened by someone with the right permissions. Bucket lock makes the policy permanent. Once locked, the policy cannot be removed and the duration cannot be reduced. It can still be increased. A locked policy also prevents the bucket itself from being deleted until every object has met the retention period. This is the configuration auditors usually want to see for regulated data, because it means even an administrator with full IAM permissions cannot wipe the data ahead of schedule.
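As a sketch with gsutil (the one-year duration is just an example): set the policy, check it, and lock it only once you are sure, because the lock cannot be undone.

gsutil retention set 1y gs://[BUCKET_NAME]
gsutil retention get gs://[BUCKET_NAME]
gsutil retention lock gs://[BUCKET_NAME]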
The interaction to remember for the Professional Cloud Architect exam is that retention policies and object versioning cannot be used at the same time on a bucket. They serve different goals. If the requirement is "keep all versions of changes," that is versioning. If the requirement is "data must not be deletable for N years," that is a retention policy, and you lock it if the requirement says it cannot be undone.
Autoclass is the hands-off option for storage class transitions. When you enable Autoclass on a bucket, Google Cloud automatically moves objects between Standard, Nearline, Coldline, and Archive based on access patterns. There is no rule to write. You enable the feature and the system handles transitions.
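Enabling it is a single flag. As a sketch with the gcloud storage commands (available in recent gcloud versions):

gcloud storage buckets update gs://[BUCKET_NAME] --enable-autoclass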
The transitions follow a fixed schedule based on time since last access. Every object starts in Standard. After 30 days without access, the object moves to Nearline. After 90 days without access, it moves to Coldline. After 365 days without access, it moves to Archive. If an object is read at any point, it moves back to Standard and the clock starts over.
Autoclass is convenient, but it is not always the cheapest option, and that is the trap on the exam. Every object enters at Standard, which is the most expensive class. If you already know the data will be infrequently accessed from day one, lifecycle rules can place it directly in Nearline, Coldline, or Archive based on age and skip the time it would otherwise spend at Standard. Autoclass also charges a monthly management fee based on the number of objects in the bucket.
So the heuristic is: pick Autoclass when the access pattern is unpredictable and you want Google Cloud to figure it out. Pick lifecycle rules when you already know the access pattern and want to optimize cost beyond what Autoclass can do.
The features overlap in their general theme of "managing objects in a bucket," but each maps to a distinct question. Lifecycle rules answer how to cut cost on aging data when the access pattern is known. Object versioning answers how to recover from overwrites and deletes. Retention policies, especially with bucket lock, answer how to prove data immutability for regulators. Autoclass answers how to let Google Cloud manage storage class transitions when access patterns are unknown.
Read the scenario carefully and identify which question is being asked. The Professional Cloud Architect exam will often combine two of these in a single bucket design, like versioning plus a lifecycle rule that deletes noncurrent versions, so you also need to know which combinations are allowed and which are not. Versioning plus a retention policy is the one that does not work.
My Professional Cloud Architect course covers Cloud Storage data management alongside the rest of the storage and analytics material.