Log Sinks in Google Cloud: Exporting Logs for Compliance and the ACE Exam

Ben Makansi
April 10, 2026

Cloud Logging retains logs for 30 days by default and then deletes them. For many organizations that is not enough. Compliance requirements often mandate retention of certain logs for years. Security teams want to run SQL queries against log data that Cloud Logging's interface does not support efficiently. Operations teams want to trigger automated responses when specific log patterns appear. Log sinks address all of these needs by routing log data to external destinations where it can be retained, analyzed, and acted upon.

What a Log Sink Is

A log sink is a configuration that continuously routes log entries matching a filter to a specified destination outside of Cloud Logging. You define the filter (which logs to route), the destination (where they go), and the sink runs automatically as long as it is active. New log entries matching the filter are exported in near real-time.

Log sinks are created at the project, folder, or organization level. A project-level sink routes logs from that project. A folder-level or organization-level sink can aggregate logs from all projects within the folder or organization into a single destination, which is the standard pattern for centralized log management in large organizations.
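As a sketch, an aggregated organization-level sink might be created with gcloud like this; the organization ID, bucket name, and sink name are placeholders:

```shell
# Hypothetical example: route warning-and-above logs from every project
# under an organization into one Cloud Storage bucket.
# ORG_ID and central-log-archive are placeholders.
gcloud logging sinks create org-central-sink \
  storage.googleapis.com/central-log-archive \
  --organization=ORG_ID \
  --include-children \
  --log-filter='severity>=WARNING'
```

The --include-children flag is what makes the sink aggregated: without it, an organization- or folder-level sink only routes logs generated by the organization or folder resource itself, not by the projects beneath it.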

Destination Options

Log sinks support three external destination types: Cloud Storage, BigQuery, and Pub/Sub.

Cloud Storage is the destination when the primary requirement is long-term retention. Logs written to a Cloud Storage bucket can be retained indefinitely according to the bucket's lifecycle rules and retention policies. For compliance use cases where you need to prove that logs were retained for a specific period, a Cloud Storage sink with a locked retention policy is the right choice. The storage class you choose depends on how often you expect to access the logs. Logs you never expect to retrieve actively are candidates for Coldline or Archive storage to minimize cost.
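As an illustrative sketch of the compliance setup (the bucket name, location, and seven-year period are placeholder assumptions):

```shell
# Hypothetical example: a bucket for seven-year compliance log retention.
# Create the bucket in the Archive storage class.
gsutil mb -c archive -l us-central1 gs://compliance-log-archive

# Set a 7-year retention policy, then lock it.
# Locking is irreversible: the policy can no longer be shortened or removed.
gsutil retention set 7y gs://compliance-log-archive
gsutil retention lock gs://compliance-log-archive
```

Locking the retention policy is what gives you the audit-proof guarantee: once locked, not even a project owner can delete objects before the retention period expires.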

BigQuery is the destination when you want to analyze log data with SQL. Logs exported to BigQuery are organized into tables that you can query using standard SQL. This is valuable for operations teams that want to analyze error rates, identify patterns in user behavior, correlate logs across multiple services, or build reports that Cloud Logging's native interface cannot support. A Cloud Composer environment's DAG execution logs, for example, might be routed to BigQuery so the operations team can query run durations and failure rates over time.
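For illustration, once logs land in BigQuery you can query them with standard SQL through the bq CLI. The dataset and table names below are placeholders; Cloud Logging derives exported table names from the log name, and tables may be date-sharded or partitioned depending on the sink configuration:

```shell
# Hypothetical example: count error entries per day in an exported log table.
# PROJECT_ID, my_log_dataset, and syslog are placeholders.
bq query --use_legacy_sql=false '
SELECT DATE(timestamp) AS day, COUNT(*) AS error_count
FROM `PROJECT_ID.my_log_dataset.syslog`
WHERE severity = "ERROR"
GROUP BY day
ORDER BY day DESC'
```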

Pub/Sub is the destination when you need to trigger downstream processing in real time. A log entry arrives in Pub/Sub and a Cloud Function or Dataflow job picks it up and takes action. This is the pattern for integrating GCP logs with a SIEM (Security Information and Event Management system) or for building automated responses to specific log events, like automatically revoking access when a specific audit log entry is detected.
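A sketch of the SIEM-forwarding pattern (topic, sink, and project names are placeholders):

```shell
# Hypothetical example: forward Admin Activity audit logs to a topic
# that a SIEM connector or Cloud Function subscribes to.
gcloud pubsub topics create siem-export

gcloud logging sinks create siem-sink \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/siem-export \
  --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'
```

Note the %2F encoding in the log name filter: audit log names embed a slash, which must be URL-encoded in the filter expression.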

Creating a Log Sink

Log sinks are created from the Cloud Logging console under Log Router, or with gcloud:

gcloud logging sinks create my-sink \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/my_log_dataset \
  --log-filter='resource.type="gce_instance" AND severity>=ERROR'

This creates a sink that routes all Compute Engine instance logs at severity ERROR and above to a BigQuery dataset. Cloud Logging creates a service account for the sink automatically, and you must grant that service account write access to the destination before logs will flow.

What the ACE Exam Tests About Log Sinks

The Associate Cloud Engineer exam tests log sinks in compliance and operational scenarios. Key question patterns to recognize: a company needs to retain audit logs for seven years (Cloud Storage with a locked retention policy), a security team needs to query log data with SQL (BigQuery), a team needs logs forwarded to their SIEM in real time (Pub/Sub).

The exam also tests the storage class selection for compliance log archival. When long-term retention is required and the logs are rarely accessed, Coldline or Archive storage is more cost-effective than Standard storage. The choice between Coldline (90-day minimum storage duration) and Archive (365-day minimum) depends on the retrieval expectations.

Service Account Permissions for Log Sinks

When you create a log sink, Cloud Logging automatically creates a service account for that sink. This service account is what writes the log data to the destination. For the sink to function, that service account needs write access to the destination resource, and granting that access is your responsibility, not something GCP does automatically.

For a BigQuery destination, the sink's service account needs the BigQuery Data Editor role on the target dataset. For a Cloud Storage destination, it needs Storage Object Creator on the target bucket. For a Pub/Sub destination, it needs Pub/Sub Publisher on the target topic. If you create a log sink and logs are not flowing to the destination, the most common cause is that the sink's service account has not been granted the necessary permissions.
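As a sketch of the fix for the BigQuery case (sink and project names are placeholders; a dataset-level grant via the console or bq is more tightly scoped than the project-level binding shown here):

```shell
# Find the sink's writer identity. The output is already prefixed with
# "serviceAccount:", so it can be pasted directly as an IAM member.
gcloud logging sinks describe my-sink --format='value(writerIdentity)'

# Grant it BigQuery Data Editor. WRITER_IDENTITY is a placeholder for the
# value printed by the describe command above.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='WRITER_IDENTITY' \
  --role='roles/bigquery.dataEditor'
```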

This permission requirement is one of the most commonly tested log sink details on the Associate Cloud Engineer exam. A question describes a log sink that was correctly configured but logs are not appearing in the BigQuery dataset. The answer involves finding the sink's service account email (visible in the sink configuration) and granting it BigQuery Data Editor on the destination dataset.

Excluding Logs from Sinks

Log sinks route all logs matching their filter, but you can also add exclusion filters to prevent specific log entries from being stored. Exclusions are useful for high-volume, low-value logs that would inflate your log storage costs without providing meaningful observability. A Compute Engine instance that generates thousands of debug-level log entries per minute might warrant an exclusion for debug-severity logs on the _Default sink, while a separate sink continues to route error and above to an external destination.

The difference between exclusions and sink filters: an exclusion prevents matching entries from being routed by the sink it is attached to, so an exclusion on the _Default sink keeps those entries out of Cloud Logging's default storage (other sinks can still route them, and audit logs handled by the _Required sink cannot be excluded). A sink's inclusion filter selects the subset of logs to route to that sink's destination while leaving all logs visible in the main Cloud Logging interface. For cost control, exclusions are the right tool; for routing, inclusion filters are the right tool.
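As a sketch, an exclusion can be attached to the _Default sink with gcloud (the exclusion name is a placeholder):

```shell
# Hypothetical example: stop storing debug-severity Compute Engine logs
# by adding an exclusion to the _Default sink.
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-gce-debug,filter='resource.type="gce_instance" AND severity=DEBUG'
```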

My Associate Cloud Engineer course covers log sinks alongside audit logs and Cloud Monitoring, including the specific compliance scenarios that regularly appear on the exam.
