
When you sit for the Google Cloud Professional Data Engineer exam, you will see Bigtable questions that go beyond schema design and node sizing. Some of them test whether you know how to investigate what is actually happening inside a Bigtable instance once it is running in production. The Bigtable console gives you a lot of useful monitoring data out of the box, but the exam expects you to know what to do when that surface-level view is not enough. That is where Cloud Logging and the Logs Explorer come in.
This is one of those topics that looks small on paper but shows up in a predictable pattern on the Professional Data Engineer exam. If you can recognize the scenario and remember the exact filter, you can lock in the point quickly and move on. Let me walk through what you need to know.
The Bigtable console shows you CPU utilization per node, read and write throughput, error rates, latency percentiles, storage utilization, and a few other operational metrics. For most day-to-day monitoring, that is fine. You can see whether a cluster is hot, whether you need to add nodes, and whether replication is keeping up.
The exam scenarios that move past the console usually sound like this. An application team reports intermittent failures against a Bigtable instance. A security review asks who modified a table schema last Tuesday. A performance investigation needs to correlate specific request patterns with a latency spike. In all of these cases, the aggregate charts in the Bigtable console will not give you the answer. You need request-level and admin-level events, and those live in Cloud Logging.
The single most testable fact in this area is the resource type filter for Bigtable in Logs Explorer. Open Logs Explorer in Cloud Logging and apply this filter:
resource.type="bigtable_instance"That is it. That filter narrows the entire log stream for the project down to events tied to your Bigtable instances. If a Professional Data Engineer exam question describes a scenario where someone needs detailed Bigtable activity and asks you to pick the correct Logs Explorer filter, that string is the right answer. Wrong answers will usually swap in bigtable_table, bigtable_cluster, or invent a fake resource type. The correct resource type is bigtable_instance.
Once you apply the filter, Cloud Logging splits Bigtable activity across a few audit log streams. Admin Activity logs capture administrative operations such as creating or deleting instances, modifying clusters, changing IAM policies, and altering table schemas. These are on by default and you do not pay extra for them. If the exam scenario involves figuring out who created or dropped something, Admin Activity is where you look.
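For the "who changed the schema last Tuesday" style of scenario, the Admin Activity entries carry both the caller identity and the admin API method that was invoked. A sketch of such a query, again with my-project as a placeholder and a made-up date; ModifyColumnFamilies is the table-admin method behind schema changes, and you can swap in DeleteTable, CreateTable, and so on as needed:

resource.type="bigtable_instance"
logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"
protoPayload.methodName:"ModifyColumnFamilies"
timestamp>="2024-06-04T00:00:00Z"

Each matching entry's protoPayload.authenticationInfo.principalEmail field tells you exactly who made the call.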
Data Access logs are the more granular stream. They capture read and write requests against your tables, including the identity that issued the request and the operation performed. These are the logs you would dig into when an application is misbehaving and you need to see the actual data plane traffic. The catch, and this is the second testable concept in this area, is that Data Access logs for Bigtable are not enabled by default. You have to explicitly turn them on in the IAM and Admin section under Audit Logs, scoped to the Bigtable API. You can enable Admin Read, Data Read, and Data Write independently.
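You can also script the enablement instead of clicking through the console. A rough sketch with gcloud, assuming a project named my-project; the Bigtable data plane is audited under the bigtable.googleapis.com service (the admin plane is bigtableadmin.googleapis.com):

# Pull down the current IAM policy, which carries the audit configuration.
gcloud projects get-iam-policy my-project --format=yaml > policy.yaml

# Add (or extend) an auditConfigs block in policy.yaml:
#
#   auditConfigs:
#   - service: bigtable.googleapis.com
#     auditLogConfigs:
#     - logType: DATA_READ
#     - logType: DATA_WRITE

# Push the updated policy back to the project.
gcloud projects set-iam-policy my-project policy.yaml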
The reason Data Access logs are off by default is volume. A busy Bigtable instance can generate an enormous number of read and write operations per second, and every one of those produces a log entry when Data Access logging is enabled. That hits Cloud Logging ingestion costs directly, since you are charged for log volume beyond the free allotment.
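A back-of-envelope calculation makes the point. Assuming roughly 1 KB per audit entry (actual sizes vary with the request), a modest 10,000 operations per second works out to:

10,000 entries/sec × ~1 KB ≈ 10 MB/sec
10 MB/sec × 86,400 sec/day ≈ 860 GB/day of log ingestion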
The exam often frames this as a trade-off question. A team wants visibility into all data plane operations against Bigtable, but they are worried about logging costs. What do you recommend? The right answer is usually a combination of enabling only the Data Access categories you actually need, scoping them with exemptions for high-volume service accounts that do not need to be audited, and routing the logs through a sink to BigQuery or Cloud Storage with a retention policy that matches your compliance requirement. You almost never want to enable Data Read for every principal on a high-throughput Bigtable instance without thinking about cost.
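Both halves of that recommendation map onto the tools shown earlier. A sketch, where batch-loader@my-project.iam.gserviceaccount.com is a hypothetical high-volume service account and bigtable_audit a hypothetical BigQuery dataset:

# In the auditConfigs block, exempt the noisy service account from
# DATA_READ auditing while keeping everything else:
#
#   auditConfigs:
#   - service: bigtable.googleapis.com
#     auditLogConfigs:
#     - logType: DATA_READ
#       exemptedMembers:
#       - serviceAccount:batch-loader@my-project.iam.gserviceaccount.com
#     - logType: DATA_WRITE

# Route the remaining Bigtable Data Access logs to BigQuery for cheap,
# queryable retention. (Grant the sink's writer identity access to the
# dataset afterwards.)
gcloud logging sinks create bigtable-audit-sink \
  bigquery.googleapis.com/projects/my-project/datasets/bigtable_audit \
  --log-filter='resource.type="bigtable_instance" AND logName:"data_access"'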
The pattern of using Logs Explorer with a resource type filter is not unique to Bigtable. The Professional Data Engineer exam tests the same workflow for BigQuery, where the filter is resource.type="bigquery_resource" or bigquery_dataset depending on what you are inspecting, and for Dataflow, Pub/Sub, and Cloud Composer. If you internalize the general approach of going to Logs Explorer, picking the right resource type, and choosing between Admin Activity and Data Access logs, you can answer this whole family of questions even if you forget the exact string for one service. For Bigtable specifically, remember bigtable_instance.
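For reference, the analogous entry-point filters for the other services mentioned look like this, one query per line; these are the standard monitored resource types, though it is worth verifying them against the resource picker in Logs Explorer:

resource.type="bigquery_resource"
resource.type="dataflow_step"
resource.type="pubsub_topic"
resource.type="cloud_composer_environment"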
There are really only four things to hold in your head for this topic. First, the Bigtable console is for aggregate operational metrics and is not where you find request-level detail. Second, Logs Explorer with resource.type="bigtable_instance" is the entry point for detailed Bigtable logs. Third, Admin Activity logs are on by default and free, while Data Access logs must be explicitly enabled and cost money based on volume. Fourth, when an exam scenario gives you a cost concern alongside a visibility concern, the right answer almost always involves selective enablement and a sink to cheaper storage rather than blanket logging.
My Professional Data Engineer course covers Bigtable monitoring, Cloud Logging audit log categories, and the rest of the operational topics the exam loves to test in this exact pattern, so when you see a scenario question you already know which filter, which log type, and which trade-off to pick.