Compromised CMEKs on GCP for the PDE Exam

GCP Study Hub
May 4, 2026

A compromised customer-managed encryption key is one of those topics that looks niche on paper but shows up reliably on the Professional Data Engineer exam. The question stems are usually short and the answer choices are close enough that you have to know the exact recovery flow, not a general sense of "rotate the key." In this post I want to lay out what to do when a CMEK is compromised, how the recovery differs between Cloud Storage and BigQuery, and the distinction between disabling and destroying a key version that the exam loves to test.

What "compromised" actually means here

A CMEK lives in Cloud KMS and wraps the data encryption keys that Google uses to encrypt the underlying objects. If the key material itself is leaked, or if you suspect someone with access to the key may have copied it out, the key is compromised. That status is not something Google detects for you. You decide a key is compromised based on your own signals, usually from Cloud Audit Logs, IAM access reviews, or an incident response process.

The important consequence is that any data currently encrypted with that key has to be re-encrypted with a new key. Disabling the compromised key is not enough on its own, because anyone who already exfiltrated the key material can still use it offline against ciphertext they hold. Re-encryption is what actually reduces the blast radius.

The re-encryption pattern

The recovery pattern is consistent across services. Create a new CMEK, configure the destination resource to use the new key, and copy the data from the old resource to the new one. The two services the Professional Data Engineer exam will almost certainly ask about are Cloud Storage and BigQuery, and they have small but testable differences.

Cloud Storage

For a bucket protected by a compromised CMEK, the flow is:

  • Create a new CMEK in Cloud KMS, in the same location as the bucket.
  • Create a new bucket and assign the new CMEK as its default encryption key.
  • Copy all objects from the old bucket to the new bucket. The copy operation re-encrypts the objects on write using the new bucket's default CMEK.
  • Update applications to point at the new bucket, then delete the old bucket.
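The flow above can be sketched with the gcloud CLI. All names here (the keyring, key, bucket, and project identifiers) are placeholders, and the service-agent step assumes the default Cloud Storage service agent address:

```shell
# 1. Create a replacement CMEK in the same location as the bucket.
gcloud kms keys create storage-recovery-key \
    --keyring=my-keyring --location=us-central1 --purpose=encryption

# 2. Grant the Cloud Storage service agent use of the new key
#    (PROJECT_NUMBER is your project's numeric ID).
gcloud kms keys add-iam-policy-binding storage-recovery-key \
    --keyring=my-keyring --location=us-central1 \
    --member="serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com" \
    --role=roles/cloudkms.cryptoKeyEncrypterDecrypter

# 3. Create the new bucket with the new key as its default encryption key.
gcloud storage buckets create gs://my-recovered-bucket \
    --location=us-central1 \
    --default-encryption-key=projects/my-project/locations/us-central1/keyRings/my-keyring/cryptoKeys/storage-recovery-key

# 4. Copy everything across; each object is re-encrypted on write
#    under the new bucket's default CMEK.
gcloud storage cp --recursive "gs://my-old-bucket/*" gs://my-recovered-bucket/
```

Note that no key is passed to the copy command itself; the bucket-level default does the work, which is exactly the detail the exam probes.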

The key idea is that the copy itself is the re-encryption event. You do not run a separate "re-encrypt" command on the old bucket. A common wrong answer on the exam is "rotate the existing key in place" or "call gsutil rewrite with a new key on the existing bucket." Rewrite can work in some scenarios, but the canonical compromised-CMEK answer is a fresh bucket with the new CMEK assigned at the bucket level during the transfer.

BigQuery

BigQuery is similar in shape but the configuration sits at the dataset level:

  • Create a new CMEK, again in the same location as the dataset.
  • Create a new dataset and configure it to use the new CMEK as its default encryption key.
  • Copy data into the new dataset without specifying a key on the copy, because the dataset-level configuration takes care of it.
  • Update downstream queries and pipelines to reference the new dataset.
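The same steps in BigQuery look like this, again with placeholder names. The BigQuery service agent also needs Encrypter/Decrypter on the new key, just as the Cloud Storage service agent does:

```shell
# 1. New CMEK in the dataset's location (a US multi-region dataset
#    needs a key in the "us" multi-region).
gcloud kms keys create bq-recovery-key \
    --keyring=my-keyring --location=us --purpose=encryption

# 2. New dataset with the key set as its default; tables created in it
#    inherit this CMEK automatically.
bq mk --dataset \
    --default_kms_key=projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/bq-recovery-key \
    my-project:recovered_dataset

# 3. Copy each table WITHOUT specifying a key; the dataset default
#    re-encrypts the data on write.
bq cp my-project:old_dataset.events my-project:recovered_dataset.events
```

The absence of any key flag in step 3 is the point: with a dataset-level default, copies pick up the new CMEK without per-table configuration.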

The subtle point the exam likes to test is whether you should specify the key during the copy. If your CMEK is set at the dataset level, you do not. If you had configured CMEKs at the table level (which is allowed for different keys per table), you would need to specify the key during each table creation. Setting it at the dataset level is the simpler, more common pattern, and the answer choice that mentions copying "without specifying a key" is usually the correct one.

Disable versus destroy

Once your data is safely re-encrypted, you still have to deal with the compromised key version. Cloud KMS gives you two operations, and they are not interchangeable.

  • Disabling a key version makes it unusable for new encrypt or decrypt operations, but the key material is preserved. You can re-enable it later. This is the safe first move during an incident, when you are not yet sure whether you have re-encrypted everything.
  • Destroying a key version schedules the key material for deletion. The default scheduled destruction period is 24 hours. During that window the key version sits in a destroy-scheduled state and you can still restore it. After the 24 hours elapse, the key material is destroyed and any data still encrypted with it is permanently unrecoverable.
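The three operations map onto three gcloud commands (key, keyring, and location names are placeholders; `1` is the compromised version number):

```shell
# Reversible: blocks new encrypt/decrypt calls with this version.
gcloud kms keys versions disable 1 \
    --key=my-key --keyring=my-keyring --location=us-central1

# Schedules destruction; the version enters the destroy-scheduled state.
gcloud kms keys versions destroy 1 \
    --key=my-key --keyring=my-keyring --location=us-central1

# Only possible while the version is still destroy-scheduled.
gcloud kms keys versions restore 1 \
    --key=my-key --keyring=my-keyring --location=us-central1
```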

The exam framing here is usually a scenario where someone destroyed a key version and then realized data still depended on it. If you are inside the 24-hour window, you can restore. Outside the window, the data is gone. That is why the recovery flow above re-encrypts the data first and only then moves to destroy the compromised version.

Audit log review

The last piece of the response is understanding what the compromised key was used for. Cloud Audit Logs can record every encrypt and decrypt call against a CMEK in the Data Access logs, with the caveat that Data Access logging for Cloud KMS is disabled by default, so it must already be enabled for that history to exist. Reviewing those logs tells you which resources were actually accessed using the compromised version and which principals made the calls. That review is what shapes your communication to stakeholders and any required disclosures, and it is also how you confirm you have not missed a dataset or bucket during the re-encryption sweep.
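A quick way to pull those entries is a filtered `gcloud logging read`, sketched here with a placeholder key name and a 30-day lookback:

```shell
# Fetch recent Data Access audit entries that touched the compromised key.
gcloud logging read '
  protoPayload.serviceName="cloudkms.googleapis.com"
  AND protoPayload.resourceName:"cryptoKeys/compromised-key"
  AND logName:"data_access"' \
  --freshness=30d --limit=100 --format=json
```

Each entry's `authenticationInfo` field identifies the calling principal, which is the raw material for the access review described above.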

My Professional Data Engineer course covers the full CMEK lifecycle, the Cloud Storage and BigQuery recovery flows, and the disable-versus-destroy distinction in the depth the exam expects.
