
One of the recurring scenarios on the Professional Cloud Architect exam asks which of a company's traditional enterprise processes will change the most after a serious cloud adoption. The TerramEarth case study version of this question is a good example. The answer is not security, not compliance, not application development. It is the cluster of processes that touch how a company plans for, pays for, and accounts for infrastructure.
Specifically, three things change together: capacity planning, total cost of ownership assessments, and the balance between capital and operating expenditure. These are not three separate questions. They are three faces of the same shift, and the exam expects you to recognize all of them.
In a traditional data center, capacity planning is a static exercise. You forecast peak demand for the next two or three years, you order servers and storage to meet that peak with some headroom, and you wait six to twelve weeks for the hardware to arrive and get racked. If your forecast is too low, you run out of capacity and your application degrades. If your forecast is too high, you have paid for hardware that sits idle for years.
Cloud capacity planning is dynamic. Resources scale up and down based on actual demand, often automatically. Instead of forecasting a peak years in advance, you set autoscaling policies, you configure managed instance groups or serverless services, and you let the platform respond to load in real time. The planning question stops being "how much hardware do I buy" and becomes "what scaling policies do I configure, and what budget guardrails do I put around them."
That is a fundamentally different skill set. Capacity engineers who used to spend their time on multi-year hardware refresh cycles now spend their time tuning autoscalers, setting up budget alerts, and monitoring usage patterns. The exam wants you to recognize that this transformation is real and that it touches the people, the tooling, and the cadence of how an organization plans.
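To make the "scaling policies plus guardrails" idea concrete, here is a minimal sketch of the target-tracking rule that autoscalers like Compute Engine's managed instance group autoscaler approximate: scale the group so average utilization moves toward a target, clamped to configured min/max bounds. The function name and the specific numbers are illustrative, not any Google Cloud API.

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float, min_replicas: int,
                     max_replicas: int) -> int:
    """Simplified target-tracking autoscaling rule (illustrative only).

    Scales the group so that average utilization moves toward the target,
    clamped to the min/max bounds that act as the policy's guardrails.
    """
    raw = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(raw, max_replicas))

# A traffic spike pushes average CPU to 90% against a 60% target:
print(desired_replicas(4, 0.90, 0.60, min_replicas=2, max_replicas=10))  # → 6

# Overnight, utilization drops to 15% and the group scales back in:
print(desired_replicas(6, 0.15, 0.60, min_replicas=2, max_replicas=10))  # → 2
```

The planning work shifts into choosing the target utilization and the min/max bounds, not forecasting the replica count itself.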
Total cost of ownership in a data center includes the hardware purchase, the data center space, the power and cooling, the networking gear, the staff to maintain all of it, the software licenses, and the depreciation schedule. Most of those costs are predictable on a multi-year horizon because they are tied to physical assets you have already bought.
In the cloud, the cost structure inverts. There is almost no upfront purchase. Instead you pay per second of compute, per gigabyte of storage, per gigabyte of egress. The hardware, the data center, the power, and most of the maintenance staff become Google's problem and show up in your bill as a unit price. Your TCO assessment now has to account for usage variability, for sustained-use and committed-use discounts, for egress charges, for the cost of managed services versus self-managed alternatives, and for how all of that scales with your actual workload patterns.
The new TCO question is harder to answer with a spreadsheet built in 2015. It requires usage data, not asset lists. And it requires you to know which Google Cloud pricing levers exist, which is exactly the kind of knowledge the Professional Cloud Architect exam tests across questions about committed-use discounts, custom machine types, preemptible and spot VMs, and storage class transitions.
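A cloud TCO model built on usage data can be sketched in a few lines. The rates and the discount figure below are hypothetical placeholders, not real Google Cloud prices; the point is the shape of the calculation, where a committed-use discount covers a baseline and anything above it runs at the on-demand rate.

```python
# Hypothetical numbers for illustration only -- real prices vary by
# region, machine type, and discount program.
HOURS_PER_MONTH = 730

def on_demand_cost(vm_hourly_rate: float, avg_vms: float) -> float:
    """Monthly compute cost at the on-demand rate for a fluctuating fleet."""
    return vm_hourly_rate * avg_vms * HOURS_PER_MONTH

def committed_use_cost(vm_hourly_rate: float, committed_vms: int,
                       avg_vms: float, discount: float = 0.37) -> float:
    """Monthly cost with a commitment discount on a baseline number of VMs,
    paying the on-demand rate for average usage above the commitment."""
    committed = vm_hourly_rate * (1 - discount) * committed_vms * HOURS_PER_MONTH
    overflow = vm_hourly_rate * max(avg_vms - committed_vms, 0) * HOURS_PER_MONTH
    return committed + overflow

rate = 0.10  # assumed $/hour for one VM, for illustration
print(f"All on-demand, avg 10 VMs: ${on_demand_cost(rate, 10):,.2f}")
print(f"Commit to 8, avg 10 VMs:   ${committed_use_cost(rate, 8, 10):,.2f}")
```

Notice what the inputs are: average usage and a commitment level, not an asset list. That is the spreadsheet change the paragraph above describes.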
The third piece is the accounting change. Buying servers is a capital expenditure. The company writes a check, the asset goes on the balance sheet, and the cost is depreciated over several years. Paying for cloud services is an operating expenditure. Each month's bill is an expense in that month, and there is no asset on the balance sheet.
This is not just a labeling change. It affects how the company budgets, how it forecasts cash flow, how the CFO talks to the board, and how individual project teams justify spending. Under Capex, a project team fights for capital approval once and then uses the hardware for years. Under Opex, the same project team has a monthly run rate that finance can see every cycle, which makes cost transparency higher but also means cost overruns get noticed faster.
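The accounting difference reduces to simple arithmetic. This sketch uses straight-line depreciation and hypothetical figures chosen so the two annual expenses happen to match; the contrast is in when cash leaves and whether an asset sits on the balance sheet.

```python
def capex_annual_expense(purchase_price: float, useful_life_years: int) -> float:
    """Straight-line depreciation: the annual P&L expense for a one-time
    hardware purchase, even though all the cash left up front."""
    return purchase_price / useful_life_years

def opex_annual_expense(monthly_bill: float) -> float:
    """Cloud spend is expensed as billed: no asset, no depreciation."""
    return monthly_bill * 12

# Hypothetical: $120k of servers depreciated over 4 years,
# versus a $2,500/month cloud bill.
print(capex_annual_expense(120_000, 4))  # → 30000.0
print(opex_annual_expense(2_500))        # → 30000.0
```

Identical annual expense, very different cash flow and visibility: the Capex project spent $120k in month one, while the Opex project shows finance a run rate every single month.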
For the exam, the relevant point is that this shift forces a new conversation between engineering and finance. Tools like billing exports, budget alerts, and committed-use discount planning exist precisely because companies need to manage Opex with the same discipline they used to apply to Capex.
You will see this material in two forms. The first is direct case-study questions, like the TerramEarth one, that ask which traditional enterprise process is most affected by cloud adoption. The option that bundles capacity planning, TCO, and Capex versus Opex is correct precisely because all three move together.
The second form is more subtle. Questions about right-sizing instances, choosing committed-use discounts, designing autoscaling policies, or selecting storage classes are all downstream of the same shift. The exam is asking you to demonstrate that you can architect a system that takes advantage of usage-based pricing rather than fighting against it.
The thing to internalize is that on the Professional Cloud Architect exam, when you see a question about how a legacy enterprise process changes after moving to Google Cloud, the cluster of capacity planning, total cost of ownership, and Capex-to-Opex is almost always the highest-impact answer. It is not the only thing that changes, but it is the thing the exam keeps coming back to.
My Professional Cloud Architect course covers the Capex to Opex shift alongside the rest of the architecture and compliance material.