
Once you have the concept of an agent in place, the next thing the Generative AI Leader exam expects you to know is what actually drives those systems. In the GCP ecosystem, those engines are called foundation models, and Google offers a suite of them rather than one model for every job. This article is a family-overview pass. Subsequent articles dig into each model on its own.
Google provides a suite of foundation models that have been trained to power its own products and that are also available for you to build your own applications on. The reason for the divided suite is efficiency. Rather than forcing a single model to handle every possible task, GCP provides specialized tools for specific modalities, whether that is text, image, video, speech, code, or local deployment.
That framing matters for the Generative AI Leader exam because exam questions tend to present a specific business scenario, things like generating marketing videos or running a model on the company's own hardware, and ask you to select the correct model family. If you remember the suite as one undifferentiated pile of models, those questions get harder than they need to be. If you remember which model is built for which modality, they get easy.
The foundation models you should be able to recognize on the Generative AI Leader exam are:

- Gemini, Google's flagship multimodal model for text and general reasoning
- Imagen, for image generation
- Veo, for video generation
- Chirp, for speech
- Codey, for code generation
- Gemma, the open-weight family you can deploy on your own infrastructure
Together those six are the fundamental building blocks for any generative application you intend to build on Google Cloud. Five of them, Gemini, Imagen, Veo, Chirp, and Codey, are first-party managed offerings. Gemma is the open-weight one that you can take and deploy yourself.
The pattern the Generative AI Leader exam tends to follow is a short business description followed by a model-family choice. A useful way to read those questions is to start from the modality and the deployment constraint.
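That modality-first reading can be captured as a simple lookup table. This is purely a study mnemonic, not a Google API; the dictionary keys are my own shorthand for the scenario types the article describes:

```python
# Mnemonic only: map an exam scenario's modality or deployment
# constraint to the Google foundation model family it points at.
MODEL_FOR = {
    "text / multimodal reasoning": "Gemini",
    "image generation": "Imagen",
    "video generation": "Veo",
    "speech": "Chirp",
    "code generation": "Codey",
    "self-hosted / local deployment": "Gemma",
}

def pick_model(modality: str) -> str:
    """Return the model family for a modality, or a prompt to re-read."""
    return MODEL_FOR.get(modality, "re-check the scenario's modality")

print(pick_model("video generation"))
```

A "generate marketing videos" scenario resolves through the video row, and a "run on the company's own hardware" scenario resolves through the deployment-constraint row, which is exactly the two-step read the exam rewards.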
The deeper article on each model goes through the specific use case checks the exam tends to lean on. At the family-overview level, getting the modality-to-model mapping right is most of what you need.
Almost all of these models are accessible through Vertex AI, which is the platform layer that sits above the models layer in the AI landscape stack. Gemini also has its own consumer-facing surfaces, including the Gemini web app and the Gemini API, plus product integrations like Gemini for Workspace and Gemini for Google Cloud. Gemma is the exception to the managed-API pattern. Because it is open-weight, you take it and run it on your own hardware or on standard VMs.
The points to lock in from this topic are:

- Google offers a suite of specialized foundation models rather than one model for every job.
- Exam questions map a business scenario to a model family; start from the modality (text, image, video, speech, code) and the deployment constraint.
- Almost all of the suite is accessed through Vertex AI; Gemma is the open-weight exception that you deploy yourself.
My Generative AI Leader course walks through each of these models in more depth alongside the rest of the foundational material, including the specific use case checks the exam leans on for Imagen, Veo, Gemma, Chirp, and Codey.