Types of Agents for the Generative AI Leader Exam

GCP Study Hub
Ben Makansi
September 14, 2025

Agents are one of the bigger blocks on the Generative AI Leader exam, and the questions almost always come down to picking the right architecture for a described scenario. There are three architectural styles you can build with: deterministic, generative, and hybrid. Each one has a defining feature, a clear set of strengths, and a clear set of limitations. Once those are locked in, the four canonical agent examples on the exam (customer service, data, code, security) become a quick mapping exercise.

The three architectural styles

I think of these three styles as a spectrum. On one end you have full control and zero surprises. On the other end you have natural, flexible conversation. In the middle you get the best of both, paid for in extra design complexity.

Deterministic agents: hard-coded logic

The defining feature of a deterministic agent is hard-coded logic. Every decision branch is written in advance, so the agent can only do what it has been explicitly programmed to do.

Strengths:

  • Fully predictable behavior
  • Perfect for compliance paths
  • Auditable decisions
  • No hallucination risk, because nothing is generated; the agent only executes predefined logic

Limitations:

  • Rigid, cannot handle novel inputs that were not anticipated at design time
  • Expensive to maintain as the ruleset grows
  • Feels scripted, not conversational
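To make "hard-coded logic" concrete, here is a minimal sketch of a deterministic refund-eligibility check. The function name, intents, and thresholds are illustrative assumptions, not drawn from any Google product:

```python
# Minimal deterministic agent: every decision branch is written in advance.
# The thresholds and outcome labels below are illustrative examples only.

def deterministic_refund_agent(days_since_purchase: int, item_opened: bool) -> str:
    """Decide a refund request with fixed, auditable rules."""
    if days_since_purchase <= 30 and not item_opened:
        return "approve_full_refund"
    if days_since_purchase <= 30 and item_opened:
        return "approve_store_credit"
    if days_since_purchase <= 90:
        return "escalate_to_human"
    return "deny_refund"
```

Because every path is enumerated, the same inputs always produce the same output, which is exactly what makes the behavior predictable and auditable, and also why any input the designer did not anticipate falls through to a rigid default.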

Generative agents: LLM-driven reasoning

The defining feature of a generative agent is LLM-driven reasoning. Instead of following predefined rules, the agent uses a language model to reason and respond in natural language.

Strengths:

  • Natural, fluid conversation
  • Handles open-ended questions
  • Adapts to unexpected inputs
  • Easy to define in plain language rather than as logic trees

Limitations:

  • Less predictable output
  • Can hallucinate or go off-script
  • Not ideal for strict compliance scenarios
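The generative pattern inverts this: instead of enumerating branches in code, you hand the user's message to a language model and let it reason. A rough sketch of the control flow, with a hypothetical `call_llm` stub standing in for a real hosted-model API:

```python
# Generative agent sketch: the "logic" lives in the prompt, not in branches.
# call_llm is a hypothetical stand-in for a real model API call.

SYSTEM_PROMPT = (
    "You are a helpful support agent. Answer the user's question "
    "conversationally, and say so if you are unsure."
)

def call_llm(prompt: str) -> str:
    # Stub: a real agent would send the prompt to a hosted model here.
    return f"[model response to: {prompt[-40:]}]"

def generative_agent(user_message: str) -> str:
    # Behavior is defined in plain language (the system prompt),
    # not as an explicit logic tree.
    prompt = f"{SYSTEM_PROMPT}\n\nUser: {user_message}\nAgent:"
    return call_llm(prompt)
```

Note that nothing in the code constrains what the model says, which is the source of both the flexibility and the unpredictability listed above.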

Hybrid agents: intent-based routing

The defining feature of a hybrid agent is intent-based routing. A router detects the user's goal and switches between scripted rules and generative AI as needed, all while maintaining a single conversation flow. Compliance-critical flows like processing a refund or verifying an identity are handled deterministically. General conversation and open-ended questions are handled generatively.

Strengths:

  • Compliance and conversation in one agent
  • Strict control when needed
  • Natural feel for general topics

Limitations:

  • More complex to design
  • Requires routing logic on top of the two underlying styles
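Putting the two styles together, a hybrid agent adds an intent router in front. The keyword-based router below is a deliberately naive sketch (real systems usually classify intent with a model), and all intents and handler names are made up for illustration, but the routing shape is the same:

```python
# Hybrid agent sketch: an intent router decides, per message, whether
# the scripted (deterministic) or generative path handles the request.
# Intents, keywords, and handlers are illustrative only.

SCRIPTED_INTENTS = {"refund", "verify"}  # compliance-critical paths

def detect_intent(message: str) -> str:
    """Naive keyword router; a production system would use a classifier."""
    lowered = message.lower()
    for intent in SCRIPTED_INTENTS:
        if intent in lowered:
            return intent
    return "general"

def scripted_handler(intent: str) -> str:
    # Deterministic branch: fixed, auditable steps per intent.
    return f"running scripted {intent} flow"

def generative_handler(message: str) -> str:
    # Generative branch: stubbed model call for open-ended questions.
    return f"model answer for: {message}"

def hybrid_agent(message: str) -> str:
    intent = detect_intent(message)
    if intent in SCRIPTED_INTENTS:
        return scripted_handler(intent)
    return generative_handler(message)
```

The routing layer is the extra design complexity the exam alludes to: you now maintain the rules, the prompts, and the router that arbitrates between them.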

The exam decision rule

The decision point on the Generative AI Leader exam usually comes down to one sentence in the scenario. If the scenario emphasizes compliance, auditability, or zero tolerance for unexpected outputs, the answer is deterministic. If it emphasizes natural conversation and flexibility, the answer is generative. If it needs both, the answer is hybrid.

Four agent examples on the exam

The exam pairs the three architectural styles with four real-world agent types. None of the four is matched to its architecture by accident; the right choice always follows from what the agent needs to do.

Customer service agent

Handles inbound customer inquiries, answers FAQs, resolves issues, processes requests, and escalates complex cases to human agents. Compliance flows like verifying an account or processing a return need scripted paths, while general questions need to feel conversational. Neither architecture alone can do both, so this agent combines scripted compliance flows with conversational AI. That is hybrid.

Core capabilities:

  • Intent recognition and routing
  • FAQ and knowledge base retrieval
  • Order and account status lookups
  • Human escalation triggers
  • Multi-language support via LLMs

Data agent

Accesses, processes, and surfaces data from structured and unstructured sources, generates reports, creates visualizations, and answers data queries. The exam is explicit on one point: a data agent does NOT perform security analysis or threat detection. That belongs to a different agent type entirely.

Core capabilities:

  • Natural language to SQL queries
  • Report and dashboard generation
  • Data pipeline monitoring
  • Cross-source data joining
  • Scheduled data summaries

Code agent

Assists developers across the software development lifecycle: writing new functions, reviewing pull requests, debugging errors, explaining unfamiliar codebases, and generating unit tests. None of that can be reduced to predefined rules. It requires the model to reason, adapt, and generate, which is exactly what generative architecture does best.

Core capabilities:

  • Code generation in any language
  • Bug identification and fixes
  • Code review and quality analysis
  • Unit test generation
  • Documentation writing

Security agent

Ingests security data from multiple sources, correlates events to identify attack patterns, distinguishes genuine threats from false positives, and recommends or automates response actions. Pattern matching for known threat signatures benefits from strict rules. Reasoning about novel or ambiguous threats benefits from generative capability. The design works best when both styles are present.

Core capabilities:

  • Multi-source log ingestion
  • Event correlation and pattern detection
  • False positive filtering
  • Threat severity classification

How I map this on the exam

When a Generative AI Leader exam item describes an agent, I read it twice. The first read is for the task: what the agent actually needs to do. The second read is for the constraint words: compliance, auditability, predictability, conversation, open-ended, novel inputs. Those words are how the question tells you which architecture to pick. If both sets of words show up in the same scenario, the answer is almost always hybrid.

My Generative AI Leader course covers types of agents alongside the rest of the foundational material.
