What is Docker? A no-jargon guide for new developers

Docker is the tool that ate the deployment world. Almost every modern web service runs in a Docker container. Universities barely teach it. Bootcamps mention it. Then your first internship hands you a Dockerfile to fix on day three. Here's the plain-English explanation, plus the eight commands you'll actually use.

The one-paragraph version

Docker packages your application together with everything it needs to run (the code, the runtime, the libraries, the system tools) into a single portable bundle called a container. That container runs the same way on your laptop, on a coworker's laptop, in CI, and in production. The "it works on my machine" class of bugs essentially disappears. That's why nearly every modern company uses it.

The metaphor that helps

Imagine shipping freight. Before standardized containers, each cargo type (barrels of oil, boxes of bananas, crates of machines) needed its own loading process, its own ship configuration, its own dockworker training. Loading a ship took weeks.

Then someone invented a standardized metal box. Suddenly the contents didn't matter. Any container fit any ship, any train, any truck. The whole industry sped up by orders of magnitude.

Docker did the same thing for software. The container is a standardized box for an application: a Python service, a React app, a database, an AI model, whatever. The runtime (Docker) doesn't care what's inside. It runs containers the same way regardless. That makes deployment fast, predictable, and identical across environments.

The three things you need to know

You can be productive with Docker by understanding just three concepts.

1. Image

A read-only blueprint for a container. Think of it as a snapshot of "everything the app needs in order to run." Images are built once and reused. They have layers (each layer is a build step) and live in registries (like Docker Hub, AWS ECR, GitHub Container Registry).

Examples of images: python:3.11, postgres:16, node:20-alpine, your own app's my-startup/api:v1.4.2.

2. Container

A running instance of an image. The image is the recipe; the container is the meal you actually cook. You can run the same image as 50 separate containers, all isolated from each other.

Containers are ephemeral by default: when they stop, anything they wrote to their own filesystem disappears (unless you mount a volume to persist it).
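You can see this for yourself with two quick runs (a sketch that assumes Docker is installed; the volume and file names are arbitrary):

```shell
# Anything written inside the container is lost once the container is removed
docker run --rm python:3.11-slim sh -c 'echo hello > /tmp/note.txt'

# Mount a named volume and the data survives across containers
docker volume create demo-data
docker run --rm -v demo-data:/data python:3.11-slim sh -c 'echo hello > /data/note.txt'
docker run --rm -v demo-data:/data python:3.11-slim cat /data/note.txt
```

The last command prints hello because the volume, not the container, owns the data.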

3. Dockerfile

A plain-text recipe that tells Docker how to build an image. You write a Dockerfile once; docker build turns it into an image you can ship anywhere.

# Dockerfile for a Python web service
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Reading top to bottom: "Start with a slim Python 3.11 image. Set the working directory to /app. Copy requirements.txt in and install Python deps. Then copy the rest of the code. Document that the container listens on port 8000. When the container starts, run uvicorn."

That's a complete Dockerfile for a real Python web service. Most production Dockerfiles are 10-30 lines.

Container vs. virtual machine: the difference

Both isolate applications from each other. The difference is the layer where the isolation happens.

| | Virtual machine | Container |
| --- | --- | --- |
| What's virtualized | Entire OS, including kernel | Just the app + libraries |
| Boot time | 30-90 seconds | 50-500 milliseconds |
| Memory cost | Hundreds of MB to GBs each | Tens of MB each |
| How many can a host run? | Dozens | Hundreds to thousands |
| Use case | Hard isolation, multi-OS, legacy | App packaging, modern microservices |

VMs aren't dead; they're still used for hard security isolation and for running entirely different operating systems. But for "ship a Python service to production," containers are the modern default.

The 8 Docker commands you'll use 99% of the time

# 1. Build an image from a Dockerfile in the current directory
docker build -t my-app:dev .

# 2. List all images on your machine
docker images

# 3. Run a container from an image
docker run --rm -p 8000:8000 my-app:dev

# 4. List running containers
docker ps

# 5. List ALL containers including stopped ones
docker ps -a

# 6. Stop a running container
docker stop <container-name-or-id>

# 7. View logs from a container
docker logs <container-name-or-id>

# 8. Open a shell inside a running container (debugging)
docker exec -it <container-name-or-id> /bin/sh

The flags worth memorizing:

- -t: tag the image with a name (my-app:dev) so you can refer to it later
- -p host:container: publish a container port on your machine
- --rm: remove the container automatically when it exits
- -a: list stopped containers too, not just running ones
- -it: attach an interactive terminal (needed for a shell)

Practice fixing real broken Dockerfiles

InternQuest's DevOps track gives you broken containers, broken Compose configs, and broken CI pipelines that you have to debug. Hands-on Docker practice with no real production at risk. Free.

Try a Docker mission →

docker-compose: the next thing you'll meet

One container is fine for a single service. But most real apps need multiple containers running together, say, your API + a Postgres database + a Redis cache. Running them by hand with separate docker run commands is tedious.

Docker Compose lets you describe a multi-container setup in a single docker-compose.yml file:

services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgresql://app:secret@db:5432/myapp
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  pgdata:

Now docker compose up starts all three. docker compose down stops them. The services can talk to each other by name (the API uses db:5432 as if Postgres were a hostname; Compose makes that work via an internal network).

For local development on a project with multiple moving parts, Compose is what you'll actually run.
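Day to day, a handful of Compose subcommands cover almost everything (a sketch assuming the docker-compose.yml above):

```shell
docker compose up -d                        # start all services in the background
docker compose ps                           # list the running services
docker compose logs -f api                  # follow logs for just the api service
docker compose exec db psql -U app myapp    # open psql inside the db container
docker compose down                         # stop and remove containers (named volumes survive)
docker compose down -v                      # ...and also delete named volumes like pgdata
```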

Common Docker mistakes by junior devs

Building images that are 2 GB when they could be 100 MB

Default Node or Python images are huge. Use the -slim or -alpine variants and you'll cut image size by 5-10x. Smaller images push faster, pull faster, and start faster.

Copying everything before installing dependencies

Look at this Dockerfile pattern again:

COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

Why is requirements.txt copied separately first? Because Docker caches layers. If you change a source file but not requirements.txt, the dependency-install layer is reused from cache and the build stays fast. If you COPY . . before installing, every code change invalidates that cache and re-runs the install, so every build is slow.
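Side by side, the slow ordering and the cache-friendly ordering (same steps as the article's example Dockerfile):

```dockerfile
# Slow: COPY . . comes first, so any code change invalidates this layer
# and pip install re-runs even when requirements.txt is untouched.
COPY . .
RUN pip install -r requirements.txt

# Fast: copy only requirements.txt first. The install layer is rebuilt
# only when requirements.txt itself changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
```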

Hardcoding secrets in the Dockerfile

Never put database passwords, API keys, or tokens in a Dockerfile. The image is shippable; anyone with access to the image (including pulled-down old versions) can read every layer. Use environment variables passed at run time, or a secrets manager.
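In practice that means supplying the value when the container starts, not when the image is built (a sketch reusing the my-app:dev image from earlier; the connection URL is a placeholder):

```shell
# Pass a single secret as an environment variable at run time
docker run --rm -e DATABASE_URL="postgresql://app:secret@db:5432/myapp" my-app:dev

# Or load several from a local .env file (keep that file out of git)
docker run --rm --env-file .env my-app:dev
```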

Forgetting to expose the port

Your Dockerfile says EXPOSE 8000, but you ran docker run my-app without -p 8000:8000. The container is running, yet you can't reach it from your browser. EXPOSE only documents the port; -p actually publishes it to your machine. You need both.

Running as root

Default Docker images run as the root user inside the container. For real production, this is a security smell: if an attacker breaks into the container, they have root inside it. Add a non-root user to your Dockerfile:

RUN useradd --create-home appuser
USER appuser
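Folded into the earlier example, the whole non-root pattern looks like this (a sketch; the file layout is the article's Python service):

```dockerfile
FROM python:3.11-slim

# Create the unprivileged user up front
RUN useradd --create-home appuser
WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

# --chown hands the app files to appuser instead of root
COPY --chown=appuser:appuser . .

USER appuser
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```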

How Docker fits into the bigger picture

Docker is one piece of a larger pattern. The full picture looks like:

  1. You write code. Push it to GitHub.
  2. CI runs tests and builds a Docker image.
  3. The image is pushed to a registry (Docker Hub, ECR, GHCR).
  4. A container orchestrator (Kubernetes, ECS, Cloud Run, Fly.io) pulls the image and runs containers across a fleet of servers.
  5. The orchestrator handles scale, restarts, networking, secrets, and replaces old containers with new ones during deploy.

Docker is steps 2-3. Kubernetes and its cousins are steps 4-5. As an intern, you'll almost never write Kubernetes manifests, but you'll constantly read and edit Dockerfiles. That's where to focus.

What to know vs. what you can Google

For interns and new juniors:

Know cold: what images, containers, and Dockerfiles are; the eight commands above; and why EXPOSE alone doesn't publish a port. Fine to Google: exact Compose syntax, less common flags, and anything Kubernetes.

The 60-minute crash course

If you have an hour and want to get hands-on:

  1. Install Docker Desktop on your laptop. (15 min)
  2. Run docker run hello-world. Confirm it works. (2 min)
  3. Pull and run a Postgres container, connect to it from your terminal. (15 min)
  4. Pick a tiny app you've built. Write a Dockerfile for it. Build it. Run it. (20 min)
  5. Add a second service (Postgres) and run them with docker-compose. (15 min)

That's it. Sixty minutes and you've used every concept in this guide. Docker is one of those tools that looks intimidating until you've done it once.
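Step 3 of the crash course, spelled out as a hedged sketch (the container name and password are placeholders):

```shell
# Pull and run Postgres in the background, publishing its port
docker run -d --name crash-pg -e POSTGRES_PASSWORD=devonly -p 5432:5432 postgres:16

# Connect from your terminal via psql inside the same container
docker exec -it crash-pg psql -U postgres -c 'SELECT version();'

# Clean up when done
docker stop crash-pg && docker rm crash-pg
```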

Practice on broken Docker configs

InternQuest's DevOps missions include broken Dockerfiles with realistic intern-grade bugs, wrong layer order, missing CMD, hardcoded paths. Each mission has a Jira ticket and an automated reviewer. Free.

Try a DevOps mission →