Deploying Containers from Dockerfiles on GKE for the PCA Exam

GCP Study Hub
Ben Makansi
December 13, 2025

Deploying a containerized application on Google Kubernetes Engine starts well before kubectl. It starts with a Dockerfile, the text file that defines exactly what goes into your container. For the Professional Cloud Architect exam, you need to know the full sequence from Dockerfile to running pod, and you need to recognize what a well-optimized Dockerfile looks like.

Why Dockerfiles Matter on GKE

Docker is a tool for building and managing containers, and Docker images are a standardized way to package an application together with everything it needs. Once a Docker image is built, it can be deployed to pods running on a Kubernetes cluster such as GKE. The Dockerfile is the source of truth for what goes into that image: it defines the operating system layer, the application code, the libraries, the dependencies, and any other files the container needs at runtime.

The Docker image itself is a blueprint: the Dockerfile describes how to build it, and GKE runs the containers that the blueprint produces. Understanding that separation is what makes the deployment sequence make sense.

The Five-Step Deployment Sequence

The Professional Cloud Architect exam expects you to know this flow in order. Skipping a step or scrambling the order is a common trap on container questions.

  1. Create or acquire a Dockerfile. The Dockerfile contains all the instructions needed to build your image.
  2. Build the Docker image. Run the build against the Dockerfile to package the OS, code, libraries, and dependencies into an image.
  3. Push the image to a registry. Push to Artifact Registry, or Container Registry on older deployments (Container Registry has since been deprecated in favor of Artifact Registry). The registry is what your GKE cluster pulls from.
  4. Create a Kubernetes Deployment manifest. This YAML file references the location of the image in the registry and describes how the container should be deployed, including replica counts, labels, and ports.
  5. Apply the manifest with kubectl. Run kubectl apply -f against the Deployment manifest to roll the workload onto the cluster. (Command sketches for the build, push, and apply steps follow below.)
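
For steps 2 and 3, the commands look roughly like this. This is a sketch, not the only valid form: the project ID (my-project), region (us-central1), repository (my-repo), and image name (my-app) are all placeholder values you would substitute with your own.

# Step 2: build the image from the Dockerfile in the current directory,
# tagging it with its eventual Artifact Registry path.
docker build -t us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1 .

# One-time setup: let the local Docker client authenticate
# to Artifact Registry in that region.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Step 3: push the built image to the registry the cluster will pull from.
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1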

If you remember nothing else, remember that the registry sits between the build step and the cluster. The cluster never reads your Dockerfile directly. It pulls a built image from a registry that the manifest points to.
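
Steps 4 and 5 come down to a manifest and a single apply. Here is a minimal sketch, assuming the image pushed above; the Deployment name, labels, replica count, and container port are illustrative placeholders.

# Steps 4 and 5: a minimal Deployment manifest piped straight into kubectl.
# In practice you would keep this in a deployment.yaml file and run
# kubectl apply -f deployment.yaml instead.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        # The registry location the manifest points to -- must match the push.
        image: us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
        ports:
        - containerPort: 8080
EOF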

Optimizing Dockerfiles

The exam will sometimes hand you a Dockerfile and ask which change improves the build. Four principles cover almost every variant of that question.

Use lightweight base images. A smaller base image downloads faster, extracts faster, and produces a smaller final image. It also has a smaller attack surface. Replacing debian:latest with something like python:3.9-slim or python:3.9-alpine is almost always the right answer when an option offers it.
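
If you want to see the difference for yourself, pull both images and compare the SIZE column; a quick check, assuming Docker is installed locally:

# Pull a heavy general-purpose base and a slim language-specific one,
# then compare their on-disk sizes.
docker pull debian:latest
docker pull python:3.9-slim
docker images debian:latest
docker images python:3.9-slim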

Minimize the final image size. Clean up unnecessary files during the build and avoid pulling in tools you do not need at runtime. Smaller images are faster to deploy and cheaper to store.

Leverage layer caching. Docker builds images in layers, one per instruction in the Dockerfile. If a layer has not changed, Docker reuses the cached version on the next build. Reorder your Dockerfile so that instructions that change rarely (installing dependencies) come before instructions that change often (copying application code). That way a code change does not invalidate the dependency layer.

Avoid unnecessary steps. Every instruction creates a layer and adds time to the build. Combine related commands where it makes sense, and remove steps that do not contribute to the final image.
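
Combining commands matters because layers are immutable: files deleted in a later RUN still exist in the earlier layer, so cleanup only shrinks the image if it happens in the same instruction that created the files. A sketch of what that looks like in practice:

# One RUN installs packages and cleans up in the same layer, so the
# apt cache never lands in the final image. Splitting the rm into its
# own RUN instruction would not reduce the image size at all.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*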

A Concrete Example

Here is the kind of Dockerfile the exam might show you, written without optimization in mind.

FROM debian:latest
COPY . /app
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install -r /app/requirements.txt

Three things are wrong with this. First, debian:latest is heavy when a Python-specific slim image would do. Second, COPY . /app runs before the install steps, so any change to the application code invalidates the layer cache for everything that follows. Third, because requirements.txt only arrives as part of that full copy, the pip install layer rebuilds on every source change even when the dependencies themselves have not changed.

A better version swaps the base image, copies the requirements file first, installs dependencies, and only then copies the rest of the application code. The dependency layer now stays cached across most code changes, and the image is smaller from the start.

FROM python:3.9-slim
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt
COPY . /app

Same application, much faster rebuilds, smaller image.
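
You can watch the cache do its work. A quick sketch, assuming the optimized Dockerfile sits next to your source tree and main.py stands in for one of your application files:

# First build populates the layer cache.
docker build -t my-app:test .

# Simulate a code-only change, then rebuild.
touch main.py
docker build -t my-app:test .
# The base image, the requirements.txt copy, and the pip3 install steps
# are all reported as CACHED; only the final COPY . /app layer reruns.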

What to Take Into the Exam

For the Professional Cloud Architect exam, lock in two things. The deployment sequence: Dockerfile, build, push to Artifact Registry, write a Deployment manifest, apply with kubectl. And the four optimization principles: lightweight base, minimal image size, layer caching through smart ordering, and no unnecessary steps. Most container questions on the PCA come down to one of those points.

My Professional Cloud Architect course covers Dockerfile optimization and GKE deployments alongside the rest of the containers and serverless material.
