
Cloud SQL is a managed relational database service, but managed does not mean unbounded. Every Cloud SQL instance has a hard ceiling on how much data it can hold, and a feature that quietly grows storage as you fill it up. Both show up on the Professional Cloud Architect exam, often together, because the question writers want to know whether you understand the limits of Cloud SQL before you reach for it as the default answer.
Here is what I want you to walk away with: 64 terabytes is the maximum, Automatic Storage Increase keeps you from hitting outages on the way there, and once you cross the ceiling you are no longer talking about Cloud SQL.
A single Cloud SQL instance can hold up to 64 terabytes of storage. That is the cap. There is no premium tier or special configuration that pushes past it. If your data set is smaller than that, Cloud SQL is a strong default for relational workloads. If you know up front that you will exceed 64 TB, or if your growth rate suggests you will get there, Cloud SQL is the wrong primary store.
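That growth-rate check is just arithmetic. Here is a minimal back-of-the-envelope sketch in Python, with made-up numbers, that projects how much headroom an instance has before the 64 TB ceiling:

```python
# Back-of-the-envelope headroom projection. The current size and
# growth rate below are hypothetical; substitute your own figures.
CLOUD_SQL_MAX_TB = 64

current_tb = 18.0        # storage in use today (assumed)
monthly_growth_tb = 1.5  # observed growth per month (assumed)

months_until_ceiling = (CLOUD_SQL_MAX_TB - current_tb) / monthly_growth_tb
print(f"Roughly {months_until_ceiling:.0f} months of headroom "
      f"before the {CLOUD_SQL_MAX_TB} TB cap")
```

If that projection lands inside your planning horizon, treat the migration question as already asked.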
The Professional Cloud Architect exam tests this through scenarios where a workload outgrows Cloud SQL. The right move depends on what you are willing to give up. There are three alternatives worth knowing.
Spanner is the option when you still need strong consistency and full ACID compliance and you are willing to pay for it. Spanner is built for large-scale, globally distributed relational workloads with consistency guarantees that Cloud SQL cannot match at that scale. The trade-off is cost. Spanner is significantly more expensive than Cloud SQL, so you do not pick it lightly.
Bigtable is the option when NoSQL is acceptable and consistency and atomicity guarantees at the single-row level are enough. Bigtable scales to massive throughput and is well suited to analytics and high-volume operational workloads, but you lose relational features like joins and full SQL semantics. If the application can be modeled as wide-column NoSQL, Bigtable is a better fit than trying to force the data into a relational shape.
BigQuery is the option when consistency and ACID compliance are not requirements and the workload is analytical. BigQuery is built for OLAP: data warehousing, reporting, and complex aggregations across large data sets. If the question describes a team running analytical queries against historical data and storage is exceeding 64 TB, BigQuery is the answer, not Cloud SQL.
The pattern to lock in: Cloud SQL stops at 64 TB, and the replacement depends on whether you need relational, whether you need strong consistency, and whether the workload is transactional or analytical.
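To make that pattern concrete, here is a small illustrative sketch that encodes the same decision logic. The function and its parameters are my own shorthand for the exam clues, not anything from GCP:

```python
def pick_store(relational: bool, strong_consistency: bool, analytical: bool) -> str:
    """Illustrative decision logic for workloads that outgrow Cloud SQL's 64 TB cap."""
    if analytical and not strong_consistency:
        return "BigQuery"   # OLAP, relaxed consistency
    if relational and strong_consistency:
        return "Spanner"    # relational + strong consistency, at a price
    if not relational and strong_consistency:
        return "Bigtable"   # NoSQL, guarantees at the row level
    return "Cloud SQL"      # under 64 TB, the default still holds
```

For example, pick_store(relational=False, strong_consistency=True, analytical=False) returns "Bigtable", which is exactly the mapping the exam scenarios are probing for.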
Inside the 64 TB envelope, Cloud SQL has a feature called Automatic Storage Increase. When you enable it, Cloud SQL monitors the instance and expands its storage capacity automatically when it is nearly full. The point is to prevent service disruptions caused by running out of space. Without it, an instance that fills up will start rejecting writes, and your application will start failing in ways that are not obvious until somebody pages you.
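As a sketch of what enabling it looks like, here is one way to do it through the Cloud SQL Admin API using the google-api-python-client library. The project and instance names are placeholders, the request assumes Application Default Credentials with Cloud SQL Admin permissions, and the storageAutoResizeLimit value (in GB) is an arbitrary example; a limit of 0 lets storage grow up to the instance maximum:

```python
from googleapiclient import discovery

# Build a client for the Cloud SQL Admin API (uses Application
# Default Credentials from the environment).
sqladmin = discovery.build("sqladmin", "v1")

body = {
    "settings": {
        "storageAutoResize": True,         # enable Automatic Storage Increase
        "storageAutoResizeLimit": "4096",  # optional growth cap in GB; 0 = no limit
    }
}

# "my-project" and "my-instance" are placeholder names.
sqladmin.instances().patch(
    project="my-project", instance="my-instance", body=body
).execute()
```

The optional limit is worth knowing about: because expanded storage never shrinks, capping automatic growth is the only lever you have against a runaway table quietly inflating your bill.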
Two details matter for the Professional Cloud Architect exam.
First, Automatic Storage Increase is bounded by the 64 TB maximum. It is not infinite. Once the instance hits the ceiling, the feature stops helping you and you have to intervene manually, which in practice means migrating to one of the alternatives I described above.
Second, Automatic Storage Increase counts as vertical scaling. It increases the capacity of an existing instance rather than adding new instances. That is the definition of vertical scaling in the GCP context, and the exam will sometimes phrase questions around which scaling category a feature belongs to.
It is also worth knowing what Automatic Storage Increase is not. It is not autoscaling. Real autoscaling adjusts capacity in both directions, growing under load and shrinking when load drops. Automatic Storage Increase only grows. Once Cloud SQL has expanded the storage, that storage stays at the expanded size even if you delete data and free it up. So if a question asks whether Cloud SQL autoscales storage, the answer is no, and the reason is that the feature is one-way.
The Professional Cloud Architect exam tends to wrap these two ideas into one or two questions in the databases section. The shape is usually one of three patterns.
A scenario describes a Cloud SQL workload approaching capacity, and you have to pick the feature that prevents downtime. The answer is Automatic Storage Increase, and the wrong answers will include things like read replicas or larger machine types, which address read throughput and compute, not storage capacity.
A scenario describes a Cloud SQL workload that has already exceeded or will exceed 64 TB, and you have to pick the migration target. The answer depends on the consistency, ACID, and workload-type clues in the question. Strong consistency, a relational model, and a tolerance for cost point to Spanner. Acceptable NoSQL and high throughput point to Bigtable. Analytical work and relaxed consistency point to BigQuery.
A scenario asks whether a feature counts as autoscaling, vertical scaling, or horizontal scaling. Automatic Storage Increase is vertical scaling, not autoscaling, because it only grows.
If you keep the 64 TB ceiling, the three alternatives, and the one-way nature of Automatic Storage Increase straight in your head, this part of the Professional Cloud Architect exam is straightforward.
My Professional Cloud Architect course covers Cloud SQL storage limits and Automatic Storage Increase alongside the rest of the databases material.