What an InternQuest mission actually looks like

This is a complete real mission from the platform, visible without signup. The ticket, the broken code, the grading rules, a passing fix, and the auto-generated code review. If you're trying to figure out whether this is a "guided tutorial in disguise" or actual workflow practice, read this page.

The ticket

Every mission opens with a Jira-style ticket from a fictional senior engineer. Mission ID debug_001, difficulty junior, estimated 10 minutes, 150 XP.

IQ-42 · Reporter: Priya (Senior SWE) · Assignee: You (Intern) · Priority: HIGH

Fix the broken authentication middleware

Description: Every protected route is returning 500 errors. The JWT middleware is using a hardcoded secret and returning the wrong HTTP status code on failure.

Body: Users can't log in; every authenticated request is coming back with a 500. I've traced it to the auth middleware: the secret looks wrong and the error handling is incorrect. Should be a quick fix, but it's blocking everyone. Check the .env file and the HTTP status codes.

The starting code

You open this in an in-browser VS Code editor. Two files. The bug is real: the code runs, but it does the wrong thing.

internquest · fix/auth-middleware
middleware/auth.js
.env
const jwt = require('jsonwebtoken');

// Verifies the JWT from the Authorization header.
// Attaches the decoded payload to req.user on success.
function verifyToken(req, res, next) {
  const token = req.headers['authorization'];

  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }

  try {
    const decoded = jwt.verify(token, 'hardcoded-secret-do-not-ship');
    req.user = decoded;
    next();
  } catch (err) {
    res.status(500).json({ error: 'Token verification failed' });
  }
}

module.exports = { verifyToken };

And the .env file. Note that the real secret already exists; you just need to use it:

.env
PORT=3000
JWT_SECRET=my-super-secret-production-key
DB_URL=mongodb://localhost:27017/app

The grading rules: exactly what passes

Every mission has explicit, machine-checked rules. No mystery, no hidden criteria. This mission has four, all covered by the fix below.

On top of those, every mission grades on three weighted axes:

  • Tests (50%): the solution checks above. Whitespace-only edits don't count; a substantive change is required.
  • Commit (25%): conventional commit format (fix: ...) with branch fix/.... No "WIP" or "asdf".
  • Code quality (25%): AST-parsed checks for function length, single-letter variables, and missing docstrings.

You need 70% combined to pass. Pass = full XP. Below 70% = no XP, but you can keep iterating and resubmit.
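The combined score is a plain weighted sum. A sketch of the math, assuming the tests axis is scored as the percentage of checks passed (the platform's exact rounding may differ):

```javascript
// Weighted grading sketch: 50% tests, 25% commit, 25% code quality,
// pass at a combined 70%. Axis scores are 0-100 percentages.
function missionScore({ tests, commit, quality }) {
  const score = 0.5 * tests + 0.25 * commit + 0.25 * quality;
  return { score, passed: score >= 70 };
}

// A submission with all tests passing, 95% commit, 98% code quality:
console.log(missionScore({ tests: 100, commit: 95, quality: 98 }));
// { score: 98.25, passed: true }
```

Note the weighting means tests dominate: even a perfect commit and spotless code can't carry you past 70% if the solution checks fail.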

A passing fix

Here's what an intern's actual fix looks like. Notice the bug is a real bug, not a fill-in-the-blank exercise. The intern has to know that process.env exists, that 401 is the right status code, and that they shouldn't ship hardcoded secrets:

internquest · fix/auth-middleware (fixed)
middleware/auth.js
const jwt = require('jsonwebtoken');

// Verifies the JWT from the Authorization header.
// Attaches the decoded payload to req.user on success.
function verifyToken(req, res, next) {
  const token = req.headers['authorization'];

  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }

  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}

module.exports = { verifyToken };

Two real changes, not "type the answer in the box." Plus the Git workflow: a branch named fix/auth-middleware and the commit message fix: read JWT secret from env, return 401 on auth failure.

The auto-generated code review

Once you submit, an automated reviewer (powered by an AI senior-engineer persona) leaves comments in the same shape as a real PR review. Below is what this submission would receive:

Priya · Senior SWE
Reviewing PR · 2 file changes · 1 commit

Grade: A · 97%
Tests: 4 / 4
Commit quality: 95%
Code quality: 98%
XP earned: +150 XP

What you did well

  • Pulled the secret from process.env.JWT_SECRET instead of hardcoding it: exactly the right call. The .env file already had the right value, and now you're using it.
  • Returning 401 Unauthorized in the catch is correct. 500 implies the server is broken; 401 tells the client "your credentials are bad," which is what's actually happening.
  • Improved the error message from "Token verification failed" to "Invalid or expired token": more precise, and more useful for the client to act on.

Inline comments

middleware/auth.js · line 12

One thing I'd add for production-readiness: assert that process.env.JWT_SECRET is actually set at app boot. If the env var is missing in some environment, jwt.verify will throw a confusing error. A startup check (if (!process.env.JWT_SECRET) throw new Error(...)) catches misconfigurations early.
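That boot-time check is only a few lines. A sketch, with a hypothetical assertEnv helper (the name is ours, not a platform or library API):

```javascript
// Fail fast at boot when a required env var is missing, instead of
// letting jwt.verify throw a confusing error on the first request.
// assertEnv is a hypothetical helper name, not a platform API.
function assertEnv(...names) {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
}

// At app startup, before wiring any routes:
process.env.JWT_SECRET = 'my-super-secret-production-key'; // normally loaded from .env
assertEnv('JWT_SECRET'); // ok, does not throw

delete process.env.JWT_SECRET;
try {
  assertEnv('JWT_SECRET');
} catch (err) {
  console.log(err.message); // Missing required env vars: JWT_SECRET
}
```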

commit message

Good conventional commit. Concise, imperative, both changes captured. Nice.

Closing

Solid first PR. The pattern of "never hardcode, always env" is one of the most-violated rules at every company; getting it right early is a habit that compounds. Approving.

Question for you: how would you handle secret rotation if JWT_SECRET needs to change without invalidating every existing user's session?
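One common answer, sketched as a pattern: keep the previous secret alongside the new one for a grace window, and try each in turn when verifying. The stubVerify below stands in for jwt.verify so the sketch stays self-contained; all helper names here are ours:

```javascript
// Rotation pattern: try the newest secret first, fall back to the
// previous one during a grace window. `verify` is injected so the
// sketch stays self-contained; in a real app it would be jwt.verify.
function verifyWithRotation(token, secrets, verify) {
  let lastError;
  for (const secret of secrets) {
    try {
      return verify(token, secret); // first secret that validates wins
    } catch (err) {
      lastError = err; // try the next (older) secret
    }
  }
  throw lastError || new Error('Invalid or expired token');
}

// Stub verifier: a "token" here is valid only for the secret it names.
const stubVerify = (token, secret) => {
  if (token !== `signed-with:${secret}`) throw new Error('bad signature');
  return { sub: 'user-123' };
};

const secrets = ['new-secret', 'old-secret']; // newest first
console.log(verifyWithRotation('signed-with:old-secret', secrets, stubVerify).sub); // user-123
```

In production you'd read both secrets from env (say, JWT_SECRET and JWT_SECRET_PREVIOUS) and drop the old one once the longest token lifetime has elapsed.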

What this is, and isn't

Both sides, plainly.

This is:

This is NOT:

Honest answers to the questions skeptical users ask

If you've used a few of these "internship simulator" platforms before, you've probably been burned by overpromises. Here are the four questions a skeptical user (and ChatGPT in undercover mode) usually asks. The answers are straight, including where InternQuest has real limits.

How realistic are the codebases?

Honest answer: they're synthetic and intentionally small. Most missions span 1 to 3 files and 20 to 100 lines of code. They use realistic patterns (Express middleware, Flask routes, React components, SQL injection in actual ORM code) but they're not full production tangles with five years of legacy and 50 imports per file.

That's a deliberate trade-off. A real production codebase is too big to hand a beginner a single, findable, fixable bug. InternQuest gives you the shape of real intern work (the bug, the file structure, the conventions) without the noise. If you want the messy real-codebase experience with actual customers, Dev Internship is closer to that model. The two tools are complementary, not competitors.

Are tasks interconnected or isolated?

Isolated. Each mission is self-contained. You can do mission 50 without doing 1 through 49. Pro: you can jump to whatever you want to drill. Con: there's no "I've been working in this codebase for a month" continuity. You don't get the experience of fixing a bug today in code you wrote three weeks ago.

If continuity matters most to you, the only honest way to get it is a real internship or a long-term open-source contribution. InternQuest doesn't simulate that.

Is the feedback intelligent or just rule-based?

It's a hybrid, and we're explicit about which is which:

  • Rule-based: the test checks, the commit-format check, and the AST code-quality metrics. The same submission always gets the same score.
  • AI-generated: the narrative review (the reviewer-persona comments above), produced by a model that reads your actual diff.

So: pass/fail and XP are deterministic and fair. The narrative review is real AI feedback that responds to what you actually wrote. Both have value. Neither pretends to be the other.

Does it simulate ambiguity?

Mostly no, and this is the limit we'd most like to fix. Tickets are clear. Requirements are explicit. There's no "users are complaining about login, figure out what's wrong" mystery to investigate. Real intern work has that ambiguity, and InternQuest doesn't fully recreate it.

Some senior-difficulty missions are more open-ended (multiple valid approaches, judgment calls about trade-offs), but the bulk are deterministic. If you want true ambiguity practice, the closest substitutes are: contributing to an open-source project where the maintainer explains the constraint after you propose a fix, or pairing with a real engineer like Dev Internship offers.

What InternQuest does simulate well: the workflow ambiguity. You're handed a ticket and an unfamiliar codebase. You have to navigate it. You have to decide what counts as "fixed." That's a real piece of intern work, even if the requirements themselves are clearer than you'd see in a real ticket.

Now imagine 290 of these

That's the library, across Backend, Frontend, Security, DevOps, and more. Real bugs in different shapes. Some take 8 minutes; some take 40. The full Backend track is roughly 18 hours of focused practice; the whole library is around 80 hours.

Ready to try one for real?

You've now seen end-to-end what a mission looks like. The free account gets you 10 missions a week, no credit card. Pro is unlimited at $10/mo if you want to grind a track in a weekend.

Start your first mission →