What is CI/CD? Continuous integration and deployment explained simply
CI/CD is one of those acronyms that gets thrown around like everyone already knows what it means. Universities don't teach it. Bootcamps mention it but don't show it. Then your first internship has a "CI failure" on your PR and you have no idea what to do. Here's the plain-English version, plus how to read your first failing pipeline log without panic.
What CI/CD actually means
CI = Continuous Integration. Every time anyone pushes code, an automated system checks the code: runs the tests, lints the formatting, builds it. If anything fails, the system marks the change as broken before it can be merged.
CD = Continuous Deployment (sometimes Continuous Delivery). Once code passes CI, the same automated system packages it and ships it to a server. Some teams ship to production directly; others ship to a staging environment first and require a human to click "promote to production."
Together, CI/CD is the conveyor belt that takes code from "I just typed it" to "real users are using it" without humans manually running commands at each step.
Why every company uses it now
Twenty years ago, software releases looked like this: developers worked for three months, then a single human ran a checklist of 47 manual steps to "release" the new version. Half the time, step 23 was forgotten, a bug went to production, and rolling it back took days. The release event itself was so risky that companies released as rarely as possible: once a quarter, sometimes once a year.
The CI/CD revolution: automate the release process so thoroughly that it becomes safe to release tiny changes constantly. Small changes are safer than big ones (less surface area to break, easier to roll back). The companies that automated this won: Netflix deploys thousands of times a day; Amazon, every few seconds. Every modern company is somewhere on the spectrum from "we deploy when CI passes" to "we deploy whenever a human approves."
For interns, this means: you don't ship to production by typing commands on a server. You merge a PR. The system does the rest.
What CI runs (the typical checklist)
When you push a commit, CI typically runs in this order:
- Checkout: clone your code at this commit.
- Install dependencies: run npm install, pip install -r requirements.txt, etc.
- Lint: run a code formatter / style checker. Fails on style violations.
- Type check: for typed languages (TypeScript, Python with mypy, etc.), run the type checker.
- Unit tests: run the test suite. Fails on any test failure.
- Integration tests: run tests that hit a real database or API. Slower, more thorough.
- Build: compile / bundle the code. For web apps, this might run vite build or webpack.
- Optional extras: security scan, license check, code coverage threshold; depends on the team.
If any step fails, the pipeline stops and your PR shows a red X. You can't merge until it's green.
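The "stops at the first failure" behavior is worth internalizing. Here is a minimal sketch of those semantics in shell, with stand-in steps (the `true`/`false` commands are placeholders for your repo's real commands such as npm ci, npm run lint, npm test, npm run build):

```shell
# Run steps in order; the first nonzero exit halts the pipeline,
# exactly like a CI job showing a red X on one step.
run_step() {
  local name="$1"; shift
  echo "==> $name"
  if ! "$@"; then
    echo "XX $name failed, pipeline stops here"
    return 1
  fi
}
run_step "install" true &&
run_step "lint"    true &&
run_step "test"    false &&   # stand-in for a failing test suite
run_step "build"   true
status=$?
echo "pipeline exit status: $status"   # 1; "build" never ran
```

Run it and you'll see install and lint start, the test step fail, and build never execute.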
What CD does after CI passes
The CD half varies by team, but the common pattern is:
- Build a deployable artifact. A Docker image, a zipped bundle, a static-site export, whatever the deployment target consumes.
- Push the artifact to a registry (e.g., Docker Hub, AWS ECR, npm).
- Deploy to staging. Update the staging environment with the new artifact.
- Run smoke tests against staging. Hit a few critical endpoints to make sure the deployed version actually works.
- Optional manual gate. A human approves promotion to production, or it auto-deploys after a delay.
- Deploy to production. Often via a rolling update (10% of servers at a time) so a bad deploy doesn't take down everything at once.
- Monitor. The system watches error rates and rolls back automatically if they spike after deploy.
As an intern, you won't usually configure any of this, but you'll see steps 1-4 in the GitHub Actions log on your PR. Step 6 happens in the background after merge.
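The staging half of that pattern could be sketched as a second GitHub Actions workflow that runs on merge. This is a hypothetical illustration: the image name, registry URL, deploy script, and staging hostname are all placeholders, not any real team's config.

```yaml
name: CD
on:
  push:
    branches: [main]   # runs after a PR merges to main
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # 1. Build a deployable artifact (placeholder image name)
      - run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      # 2. Push the artifact to a registry (placeholder registry)
      - run: docker push registry.example.com/myapp:${{ github.sha }}
      # 3. Deploy to staging (placeholder deploy script)
      - run: ./scripts/deploy.sh staging ${{ github.sha }}
      # 4. Smoke test: fail the job if a critical endpoint is down
      - run: curl --fail https://staging.example.com/healthz
```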
The tools you'll see
- GitHub Actions. Most common for new projects. Configured via YAML files in .github/workflows/. Free for public repos, generous tier for private.
- GitLab CI. Built into GitLab. Configured via .gitlab-ci.yml. Common at companies that self-host GitLab.
- CircleCI / Buildkite / Jenkins. Older or specialized. Jenkins especially common at large legacy companies.
- Vercel / Netlify / Cloudflare Pages. Specialized CD for frontend apps. Push a commit, deployment happens automatically.
You'll see at least two of these in your career. The concepts are the same; only the YAML syntax differs.
What a GitHub Actions file looks like
name: CI
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm test
      - run: npm run build
Reading this top to bottom: "On every pull request and every push to main, on a fresh Ubuntu machine, check out the code, install Node 20, install deps, lint, type-check, test, build."
That's it. The whole "CI pipeline" is seven steps in a YAML file. The mystery is mostly a myth.
Get hands-on with CI failures (without breaking anything)
InternQuest's DevOps missions include broken Dockerfiles, broken GitHub Actions configs, and broken CI pipelines that you have to debug. Real workflow practice with no real production at stake. Free.
Try a DevOps mission →
How to read a failing CI log without panicking
Your PR has a red X. You click on it. There's a giant log with thousands of lines. Where do you start?
The skill is finding the actual error among the noise. The trick:
- Find which step failed. The log is divided into steps (checkout, install, test, etc.). One of them has a red X next to it. That's the only step you need to read.
- Scroll to the bottom of that step. Errors are usually reported at the end. Scroll up from the bottom to find the first Error, FAIL, or red text.
- Read the actual error message, not the stack trace above it. The message tells you what went wrong; the stack trace tells you where (you'll need that second).
- Reproduce locally. If a test failed in CI, run the same test on your machine: npm test path/to/the/test.js. Most of the time it'll fail the same way locally and you can debug normally.
- If it doesn't fail locally, you have an environment difference: check Node/Python version, environment variables, or platform-specific code.
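Two shortcuts for the first steps: the GitHub CLI can show only the failing step's output (`gh run view --log-failed`, assuming `gh` is installed), and grep can jump you straight to the first error in a downloaded log. The log contents below are fabricated for illustration:

```shell
# Show only the failed step's log for the latest run (GitHub CLI):
#   gh run view --log-failed
# Or grep a downloaded log for the first error, with line numbers.
# (ci.log here is a fabricated example log.)
cat > ci.log <<'EOF'
npm test
PASS src/a.test.js
FAIL src/b.test.js
Error: expected 200, got 500
    at Object.<anonymous> (src/b.test.js:12:5)
EOF
grep -nE "Error|FAIL" ci.log | head -n 3
# prints:  3:FAIL src/b.test.js
#          4:Error: expected 200, got 500
```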
Common CI failures and what they mean
Lint / formatting
Easy fix. Most projects have a "fix" command: npm run lint:fix, black ., or ruff check --fix. Run it locally, commit the result, push.
Type errors
The type checker found a bug. Read the error, fix it. Often it's a function being called with the wrong argument type or accessing a property that might be undefined.
Test failures
Either your code broke an existing test or a test you wrote is broken. Read the test name and the assertion error; usually the message tells you what was expected vs. what happened.
"Tests pass locally but fail in CI"
Almost always one of:
- Race condition. Two tests fight over shared state. Run with --runInBand or one at a time.
- Time/timezone difference. CI runs in UTC; your machine doesn't.
- Missing env var. Your local .env has it set; CI doesn't.
- Order-dependent tests. One test leaves state that another depends on.
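The timezone item is easy to see directly. A small sketch (GNU date, as on a typical Linux CI runner): the same epoch second renders as different wall-clock hours, which is exactly how a date-formatting assertion passes on your laptop and fails in CI.

```shell
# Epoch second 0 is 1970-01-01 00:00 UTC. Format it in two timezones:
utc_hour=$(date -u -d @0 +%H)                  # hour in UTC
ny_hour=$(TZ=America/New_York date -d @0 +%H)  # New York: still Dec 31, 7 PM
echo "UTC: $utc_hour, New York: $ny_hour"      # UTC: 00, New York: 19
# To chase this class of bug, rerun the suite under CI's clock:
#   TZ=UTC npm test
```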
Build failures
Often a missing dependency or an import path that works on case-insensitive filesystems (Mac/Windows) but breaks on Linux. import './foo' won't find ./Foo.js on Linux.
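One way to catch case collisions before Linux does: inside a repo, `git ls-files | sort -f | uniq -di` lists tracked filenames that differ only by letter case (GNU/BSD uniq). Simulated below with plain files so it runs anywhere:

```shell
# Simulate a repo with a case collision (foo.js vs Foo.js).
dir=$(mktemp -d)
touch "$dir/foo.js" "$dir/Foo.js" "$dir/readme.md"
# Sort case-insensitively, then print one name per case-insensitive
# duplicate group. In a real repo, pipe `git ls-files` instead of `ls`.
collisions=$(ls "$dir" | sort -f | uniq -di)
echo "case collisions: $collisions"
```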
"npm install fails"
Usually a package-lock mismatch. Delete node_modules and package-lock.json locally, run npm install, commit the new lock. Or use npm ci consistently (which respects the lockfile strictly).
The mindset shift CI/CD demands
The biggest adjustment for new engineers: your code is not "done" when it works on your machine. It's done when it passes CI in a clean environment, on a fresh Linux box, with no hidden state from your laptop. CI is the tool that catches the "works on my machine" bugs before users do.
The corollary: CI is your friend, not your enemy. Every time CI fails, it just saved you from shipping a broken thing. Annoying in the moment, but much less annoying than a 2 AM page from your on-call rotation.
What you should know vs. what you can Google
For interns and new juniors, the realistic bar:
- Know: what CI/CD is conceptually, how to read your team's CI status on a PR, how to find the failing step, how to read a test failure.
- Be useful at: running tests locally to reproduce CI failures, fixing your own lint/type errors.
- Google when needed: writing a new GitHub Actions workflow, configuring caching, optimizing pipeline speed.
- Defer to seniors on: deployment topology, secret management in CI, infrastructure-as-code.
Most interns are never asked to write CI/CD configs. The skill that actually matters is being able to read them and unblock yourself when something breaks.
The bigger picture
CI/CD is one piece of a larger movement called DevOps, the merging of "writing code" and "running it in production" into one role for one team. Twenty years ago these were separate functions; ops people deployed, dev people coded, and they fought about everything. Now it's all one thing, and "you build it, you run it" is the norm.
For your career: even if you don't want to be a DevOps engineer, learning the basics of containers (Docker), CI/CD pipelines, and one cloud platform (AWS, GCP, or Azure; it doesn't matter which, the concepts transfer) makes you a much more useful engineer. The companies that pay the most are the ones that need full-stack engineers who can ship code AND understand how it runs.
Don't go deep on day one. But the next time CI fails on your PR, treat it as a free chance to learn one new thing about how your team's pipeline works. Six months of that and you'll be the person other interns ask for help.
Practice on broken CI configs without breaking real production
InternQuest's DevOps track gives you broken Dockerfiles, broken GitHub Actions, and broken Docker Compose configs. Each is a mission with a Jira ticket and an automated reviewer. Free.
Try a DevOps mission →