
How Docker Containers Actually Work: From Code to Orchestration

Equipe Blueprint · 9 min read

You know how to run docker run. Maybe you can write a basic Dockerfile. But if someone stopped you right now and asked — "what happens between the moment you type docker build and the moment your app is running in production with auto-scaling?" — could you explain every step?

Most developers can't. Not because they lack the ability. Because Docker is taught in disconnected pieces: a Dockerfile tutorial here, a Kubernetes video there, a networking article somewhere else. Nobody shows the entire path at once.

This article does.

From the Dockerfile to Kubernetes orchestration — five stages, each building on the previous one. By the end, you'll understand not just how each piece works, but why it exists.


1. The Dockerfile: the recipe

Everything starts with a text file. No extension, no magic — just instructions telling Docker how to assemble the environment where your code will run.

Think of a Dockerfile as a cooking recipe. FROM picks the base ("start with a kitchen that already has an oven and a sink"). COPY brings the ingredients ("put your code here"). RUN prepares ("install dependencies"). CMD serves ("when the container starts, execute this").

dockerfile
# Example: Node.js app
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

EXPOSE 3000
CMD ["node", "server.js"]

The order of instructions matters. Docker caches each layer. If you put COPY . . before RUN npm ci, any change in any project file invalidates the installation cache — and Docker reinstalls everything from scratch. By copying only package.json first, the dependency cache survives as long as dependencies haven't changed.
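You can watch the cache at work by building twice. A sketch, assuming the Dockerfile above sits in the current directory next to server.js:

```shell
# First build: every step executes.
docker build -t my-app:dev .

# Touch only application code, not package.json.
echo "// tweak" >> server.js

# Second build: the steps up to and including 'RUN npm ci' are
# reported as CACHED; only the later COPY and everything after it rerun.
docker build -t my-app:dev .
```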

Each filesystem-changing instruction (FROM, COPY, RUN, ADD) becomes a layer in the final image; the rest, like EXPOSE and CMD, only record metadata. Which brings us to the next stage.


2. Build and Layers: the immutable image

When you run docker build, Docker reads the Dockerfile and creates an image — a read-only template containing everything your app needs to run: base OS, dependencies, code, configuration.

The image is built in stacked layers; each instruction that touches the filesystem generates one. Layer 1 is the base OS — in the example above, Alpine Linux with Node.js 20. On top of it sit the dependencies installed by npm ci, and above that your app code copied by COPY . ..

Two properties make this system work:

Immutability. Once created, the image doesn't change. If you need to modify something, you create a new image. This guarantees that what runs in dev is identical to what runs in production — the classic "works on my machine" problem disappears.

Layer reuse. If two images share the same base (node:20-alpine), Docker stores that layer only once. Ten different apps with the same base don't take 10x the space — they take 1x the base plus each app's delta.

bash
# Build the image
docker build -t my-app:1.0 .

# Inspect the layers
docker history my-app:1.0

The image is the blueprint. To turn it into something that actually runs, you need a container.


3. Container Runtime: where code comes alive

When you run docker run, Docker takes the image (read-only) and creates a live instance of it — the container. It's like the difference between a class and an object in programming: the image is the class, the container is the instance.

The container gets a writable layer on top of the image's read-only layers. Everything the app writes at runtime — logs, temp files, session data — goes into this layer. When the container is destroyed, this layer is discarded. The data disappears.
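One way to see the writable layer vanish, as a sketch (requires a running Docker daemon; the container name and file path are arbitrary):

```shell
# Write a file into a container's writable layer...
docker run --name scratchpad alpine sh -c 'echo hello > /tmp/note.txt'

# ...destroy the container...
docker rm scratchpad

# ...and a fresh container from the same image has no trace of it:
# cat fails with "No such file or directory".
docker run --rm alpine cat /tmp/note.txt
```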

But what makes a container different from just running the app directly on the operating system? Two Linux kernel technologies:

Namespaces control what the container sees. They isolate processes (PID), network (NET), filesystem (MNT), hostname (UTS), users (USER), and inter-process communication (IPC). The container thinks it's the only process in the world.

Cgroups control what the container consumes. They limit CPU, memory, swap, and disk I/O. Without cgroups, a container with a memory leak would take down the entire host.
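Those cgroup limits are set with flags on docker run. A minimal sketch — the values and the my-app:1.0 image are illustrative:

```shell
# Cap the container at 256 MB of RAM and half a CPU core.
docker run -d --name limited \
  --memory=256m \
  --cpus=0.5 \
  my-app:1.0

# The cap appears in the MEM USAGE / LIMIT column.
docker stats --no-stream limited
```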

Under the hood, the heavy lifting is done by containerd (the runtime daemon) together with runc (which creates the container using kernel APIs). The Docker CLI is an interface — the real work happens in these two.

bash
# Run the container
docker run -d --name my-app -p 3000:3000 my-app:1.0

# See running containers
docker ps

# Monitor resource consumption
docker stats my-app

Container ≠ VM. A virtual machine runs a complete operating system with its own kernel. A container shares the host's kernel and uses namespaces and cgroups to simulate isolation. That's why containers start in milliseconds and consume a fraction of a VM's resources.


4. Networking: how containers talk

An isolated container isn't very useful. Your app needs to receive requests from the outside world. Your app needs to talk to the database. And the database is... another container.

Docker solves this with a bridge network — an internal virtual network that connects containers to each other. Containers on the same user-defined bridge network can communicate directly by name: the app container calls the database at db:5432 instead of an IP address, and Docker's embedded DNS handles the resolution. (The default bridge, docker0, is more limited — containers on it can only reach each other by IP.)
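A sketch of name-based resolution on a user-defined network (the network and container names are illustrative):

```shell
# Create a user-defined bridge network.
docker network create backend

# Start a database on it, named 'db'.
docker run -d --name db --network backend \
  -e POSTGRES_PASSWORD=secret postgres:16-alpine

# Any container on the same network resolves it by name.
docker run --rm --network backend alpine ping -c 1 db
```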

For the outside world to reach a container, you use port mapping: map a host port to a container port.

bash
# -p hostPort:containerPort
docker run -d -p 80:3000 my-app:1.0
# Now: http://localhost:80 → container:3000

With Docker Compose, networking is created automatically:

yaml
# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "80:3000"
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

The pgdata volume is critical. Remember that the container's writable layer is discarded when it dies? Without the volume, your Postgres data dies with it. Volumes live outside the container — they persist across restarts, upgrades, and recreations.
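Volume persistence is easy to check by hand. A sketch using throwaway alpine containers (the volume name and path are arbitrary):

```shell
# Create a named volume and write into it from one container.
docker volume create pgdata-test
docker run --rm -v pgdata-test:/data alpine sh -c 'echo survives > /data/marker'

# A brand-new container mounting the same volume sees the data.
docker run --rm -v pgdata-test:/data alpine cat /data/marker   # prints: survives
```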


5. Orchestration with Kubernetes: when one container isn't enough

Docker Compose works well for local development and simple projects. But in real production — with thousands of requests, high availability requirements, zero-downtime deploys — you need something more.

That's where Kubernetes (K8s) comes in. If Docker is what creates and runs containers, Kubernetes is what manages containers at scale.

The K8s architecture has two levels:

Control Plane — the brain. Contains the API Server (receives commands), the Scheduler (decides which node runs each Pod), the Controller Manager (ensures actual state matches desired state), and etcd (the cluster's database).

Worker Nodes — the hands. Each node is a machine (physical or virtual) that runs containers. Containers live inside Pods — the smallest unit in Kubernetes. A Pod usually contains one container, though it can contain more when they need to share resources.

What Kubernetes does that Docker Compose doesn't:

Scaling — auto-scales Pods based on CPU, memory, or custom metrics. Traffic spiked? More Pods. Traffic dropped? Fewer Pods.

Self-healing — if a Pod dies, K8s restarts it automatically. If an entire node goes down, Pods are rescheduled to other nodes.

Rollouts — gradual deployment — updates Pods incrementally, checking health at each step. If the new version fails its checks, the rollout halts, and kubectl rollout undo brings back the previous one.

Service Discovery — internal DNS and Services. Pods find other Pods by name, with automatic load balancing across replicas.
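These behaviors map to everyday kubectl commands. A sketch, assuming a Deployment named my-app (with label app: my-app) already exists in the cluster:

```shell
# Scaling: ask for more replicas; the scheduler places them.
kubectl scale deployment my-app --replicas=5

# Self-healing: delete a Pod and watch a replacement appear.
kubectl delete pod -l app=my-app --wait=false
kubectl get pods -l app=my-app -w

# Rollouts: push a new image, watch it roll, undo if it misbehaves.
kubectl set image deployment/my-app my-app=my-app:1.1
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
```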

yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        ports:
        - containerPort: 3000
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
          requests:
            memory: "128Mi"
            cpu: "250m"

yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000

Kubernetes is declarative. You don't say "start 3 containers." You say "I want 3 replicas running." K8s takes care of reaching that state — and maintaining it. If a replica dies, it creates another. You declare the destination; K8s handles the journey.
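Declaring and reconciling state looks like this in practice — a sketch, assuming the two manifests above are saved locally as deployment.yaml and service.yaml:

```shell
# Declare the desired state; K8s works out how to reach it.
kubectl apply -f deployment.yaml -f service.yaml

# Compare desired vs. actual: READY shows 3/3 once converged.
kubectl get deployment my-app
kubectl get pods -l app=my-app
```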


When to use what

Not every project needs Kubernetes. This is a distinction many Docker articles ignore.

Docker only (no Compose) — when you have a single container. A script, a CLI tool, an isolated service.

Docker Compose — when you have multiple containers that need to talk (app + database + cache). Ideal for local dev and small team projects.

Kubernetes — when you need auto-scaling, self-healing, zero-downtime rollouts, and you're running in production with real traffic. K8s adds significant complexity — only use it when the scale justifies it.

If you're starting with Docker, don't jump to Kubernetes. Master the Dockerfile, understand layers, practice Compose. Kubernetes is layer 5 — and it only makes sense when layers 1 through 4 are solid.


The full path in six steps

  1. Dockerfile — the recipe. Defines the environment, copies code, installs dependencies, declares the startup command. Instruction order affects caching.
  2. Image — the immutable template. Built in stacked layers, read-only, reusable. docker build creates it, docker history inspects it.
  3. Container — the live instance. Image + writable layer. Namespaces isolate what it sees. Cgroups limit what it consumes.
  4. Networking — bridge network connects containers internally. Port mapping exposes them externally. Volumes persist data across restarts.
  5. Kubernetes — orchestration at scale. Auto-scaling, self-healing, rollouts, service discovery. Declarative: you define the desired state, K8s maintains it.
  6. Docker Compose for dev and small projects. Kubernetes for production at scale. Don't skip steps.
