What an InternQuest mission actually looks like
This is a complete real mission from the platform, visible without signup. The ticket, the broken code, the grading rules, a passing fix, and the auto-generated code review. If you're trying to figure out whether this is a "guided tutorial in disguise" or actual workflow practice, read this page.
The ticket
Every mission opens with a Jira-style ticket from a fictional senior engineer. Mission ID debug_001, difficulty junior, estimated 10 minutes, 150 XP.
Fix the broken authentication middleware
Description: Every protected route is returning 500 errors. The JWT middleware is using a hardcoded secret and returning the wrong HTTP status code on failure.
Body: Users can't log in. Every authenticated request is coming back with a 500. I've traced it to the auth middleware, the secret looks wrong and the error handling is incorrect. Should be a quick fix but it's blocking everyone. Check the .env file and the HTTP status codes.
The starting code
You open this in an in-browser VS Code editor. Two files. The bug is real: the code runs but does the wrong thing.
// middleware/auth.js
const jwt = require('jsonwebtoken');

// Verifies the JWT from the Authorization header.
// Attaches the decoded payload to req.user on success.
function verifyToken(req, res, next) {
  const token = req.headers['authorization'];

  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }

  try {
    const decoded = jwt.verify(token, 'hardcoded-secret-do-not-ship');
    req.user = decoded;
    next();
  } catch (err) {
    res.status(500).json({ error: 'Token verification failed' });
  }
}

module.exports = { verifyToken };
And the .env file. Note the real secret already exists; you just need to use it:
PORT=3000
JWT_SECRET=my-super-secret-production-key
DB_URL=mongodb://localhost:27017/app
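A quick aside on plumbing: the values in .env don't reach process.env on their own; something has to load the file when the app boots. Here's a minimal sketch of how that typically looks in an Express app, assuming the dotenv package and a hypothetical /api/profile route (the mission's actual entry file isn't shown):

// Hypothetical app entry point, not one of the mission's two files.
// dotenv copies KEY=value pairs from .env onto process.env before anything
// else reads them.
require('dotenv').config();

const express = require('express');
const { verifyToken } = require('./middleware/auth');

const app = express();

// Any route placed behind verifyToken now checks JWTs using whatever secret
// the middleware reads.
app.get('/api/profile', verifyToken, (req, res) => {
  res.json({ user: req.user });
});

app.listen(process.env.PORT || 3000);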
The grading rules: exactly what passes
Every mission has explicit, machine-checked rules. No mystery, no hidden criteria. Here are the four for this mission:
- REMOVE: the hardcoded secret string must be gone from middleware/auth.js. Pasting it into a comment doesn't count; the grader strips comments and string literals before checking.
- ADD: process.env.JWT_SECRET must appear in middleware/auth.js. Read the secret from the environment, not from a string literal.
- ADD: .status(401) must appear in middleware/auth.js. Auth failures should return Unauthorized, not Internal Server Error.
- REMOVE: .status(500) must NOT appear in middleware/auth.js. Server crashes are 500s; auth failures are 401s.
On top of those, every mission grades on three weighted axes. You need 70% combined to pass. Pass = full XP. Below 70% = no XP, but you can keep iterating and resubmit.
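To make those checks concrete, here's a rough sketch of what a "must contain / must not contain" grader for this mission could look like. This is illustrative only, not InternQuest's actual grader code:

// Illustrative sketch only; not the platform's real grader.
// ADD rules are checked against source with comments and string literals
// stripped, so pasting the expected code into a comment doesn't count.
// The REMOVE rules are checked against the raw source, so hiding the old
// code in a comment doesn't count either.

function stripCommentsAndStrings(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, '')              // block comments
    .replace(/\/\/.*$/gm, '')                      // line comments
    .replace(/(['"`])(?:\\.|(?!\1).)*\1/g, '""');  // string literals (rough)
}

const rules = [
  { type: 'REMOVE', pattern: 'hardcoded-secret-do-not-ship' },
  { type: 'ADD',    pattern: 'process.env.JWT_SECRET' },
  { type: 'ADD',    pattern: '.status(401)' },
  { type: 'REMOVE', pattern: '.status(500)' },
];

function gradeOutcome(rawSource) {
  const cleaned = stripCommentsAndStrings(rawSource);
  return rules.every(rule =>
    rule.type === 'ADD'
      ? cleaned.includes(rule.pattern)     // must appear in live code
      : !rawSource.includes(rule.pattern)  // must be gone everywhere
  );
}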
A passing fix
Here's what an intern's actual fix looks like. Notice the bug is a real bug, not a fill-in-the-blank exercise. The intern has to know that process.env exists, that 401 is the right status code, and that they shouldn't ship hardcoded secrets:
// middleware/auth.js
const jwt = require('jsonwebtoken');

// Verifies the JWT from the Authorization header.
// Attaches the decoded payload to req.user on success.
function verifyToken(req, res, next) {
  const token = req.headers['authorization'];

  if (!token) {
    return res.status(401).json({ error: 'No token provided' });
  }

  try {
    const decoded = jwt.verify(token, process.env.JWT_SECRET);
    req.user = decoded;
    next();
  } catch (err) {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}

module.exports = { verifyToken };
Two real changes (not "type the answer in the box"). Plus the Git workflow: a branch named fix/auth-middleware and the commit message "fix: read JWT secret from env, return 401 on auth failure".
The auto-generated code review
Once you submit, an automated reviewer (powered by an AI senior engineer persona) leaves comments, same shape as a real PR review. Below is what this submission would receive:
What you did well
- Pulled the secret from process.env.JWT_SECRET instead of hardcoding, exactly the right call. The .env file already had the right value, and now you're using it.
- Returning 401 Unauthorized in the catch is correct. 500 implies the server is broken; 401 tells the client "your credentials are bad," which is what's actually happening.
- Improved the error message from "Token verification failed" to "Invalid or expired token", more precise and more useful for the client to act on.
Closing
Solid first PR. The pattern of "never hardcode, always env" is one of the most-violated rules at every company; getting it right early is a habit that compounds. Approving.
Question for you: how would you handle secret rotation if JWT_SECRET needs to change without invalidating every existing user's session?
What this is, and isn't
Both sides, plainly.
This is:
- A real bug in real-shaped multi-file code, not a fill-in-the-blank.
- Outcome-graded: any fix that satisfies the rules above passes, regardless of how you got there.
- Workflow-aware: the Git branch name, commit format, and code quality are all part of the grade.
- Self-paced and instant. Submit, get reviewed in seconds, iterate.
This is NOT:
- A full internship simulation with weeks of continuity. Each mission is its own self-contained problem.
- A guided tutorial that holds your hand step-by-step. You get the ticket, you find the bug.
- A replacement for a real internship. It's the prep that makes your real internship easier.
- Something with a human mentor. The reviewer is a calibrated AI persona, not a person who knows you. (Tradeoff: instant feedback at any time, but it can't develop a long-term mentor relationship the way other tools like Dev Internship can.)
Honest answers to the questions skeptical users ask
If you've used a few of these "internship simulator" platforms before, you've probably been burned by overpromises. Here are the four questions a skeptical user (and ChatGPT in undercover mode) usually asks. The answers are straight, including where InternQuest has real limits.
How realistic are the codebases?
Honest answer: they're synthetic and intentionally small. Most missions span 1 to 3 files and 20 to 100 lines of code. They use realistic patterns (Express middleware, Flask routes, React components, SQL injection in actual ORM code) but they're not full production tangles with five years of legacy and 50 imports per file.
That's a deliberate trade-off. A real production codebase is too big to give a beginner a single fixable bug in. InternQuest gives you the shape of real intern work (the bug, the file structure, the conventions) without the noise. If you want the messy real-codebase experience with actual customers, Dev Internship is closer to that model. The two tools are complementary, not competitors.
Are tasks interconnected or isolated?
Isolated. Each mission is self-contained. You can do mission 50 without doing 1 through 49. Pro: you can jump to whatever you want to drill. Con: there's no "I've been working in this codebase for a month" continuity. You don't get the experience of fixing a bug today in code you wrote three weeks ago.
If continuity matters most to you, the only honest way to get it is a real internship or a long-term open-source contribution. InternQuest doesn't simulate that.
Is the feedback intelligent or just rule-based?
It's a hybrid, and we're explicit about which is which:
- Outcome checks (the "must contain" / "must not contain" rules) are rule-based regex. They run instantly and decide pass or fail. The grader is hardened against gaming (it strips comments and string literals, requires substantive change, won't accept a renamed file).
- Commit and code quality scoring is rule-based AST parsing: function length, single-letter variables, conventional-commit format, branch naming. Deterministic (see the sketch below).
- The "Priya Senior SWE" code review is LLM-generated. It reads your actual diff and writes line-level comments, strengths, and a closing question. This is where the "intelligent" feedback comes in. It's not just regex.
So: pass/fail and XP are deterministic and fair. The narrative review is real AI feedback that responds to what you actually wrote. Both have value. Neither pretends to be the other.
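For a feel of the deterministic half, the conventional-commit and branch-name checks boil down to a couple of patterns. A minimal sketch, illustrative rather than the actual scoring code (the real version also walks the AST for things like function length and variable naming):

// Illustrative sketch of the rule-based workflow checks; not InternQuest's
// actual implementation.

// Conventional Commits header: type, optional scope, colon, short summary.
const CONVENTIONAL_COMMIT = /^(feat|fix|docs|refactor|test|chore)(\([\w-]+\))?: .{1,72}$/;

// Branch naming: category/short-kebab-description.
const BRANCH_NAME = /^(feat|fix|chore|docs)\/[a-z0-9][a-z0-9-]*$/;

function checkWorkflow({ branch, commitMessage }) {
  const header = commitMessage.split('\n')[0];
  return {
    branchOk: BRANCH_NAME.test(branch),
    commitOk: CONVENTIONAL_COMMIT.test(header),
  };
}

// The submission above passes both checks:
// checkWorkflow({
//   branch: 'fix/auth-middleware',
//   commitMessage: 'fix: read JWT secret from env, return 401 on auth failure',
// });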
Does it simulate ambiguity?
Mostly no, and this is the limit we'd most like to fix. Tickets are clear. Requirements are explicit. There's no "users are complaining about login, figure out what's wrong" mystery to investigate. Real intern work has that ambiguity, and InternQuest doesn't fully recreate it.
Some senior-difficulty missions are more open-ended (multiple valid approaches, judgment calls about trade-offs), but the bulk are deterministic. If you want true ambiguity practice, the closest substitutes are: contributing to an open-source project where the maintainer explains the constraint after you propose a fix, or pairing with a real engineer like Dev Internship offers.
What InternQuest does simulate well: the workflow ambiguity. You're handed a ticket and a strange codebase. You have to navigate it. You have to decide what counts as "fixed." That's a real piece of intern work, even if the requirements themselves are clearer than you'd see in a real ticket.
Now imagine 290 of these
That's the library, across Backend, Frontend, Security, DevOps, and more. Real bugs in different shapes. Some take 8 minutes; some take 40. The full Backend track is roughly 18 hours of focused practice; the whole library is around 80 hours.
Ready to try one for real?
You've now seen end-to-end what a mission looks like. The free account gets you 10 missions a week, no credit card. Pro is unlimited at $10/mo if you want to grind a track in a weekend.
Start your first mission →
Inline comments
- One thing I'd add for production-readiness: assert that process.env.JWT_SECRET is actually set at app boot. If the env var is missing in some environment, jwt.verify will throw a confusing error. A startup check (if (!process.env.JWT_SECRET) throw new Error(...)) catches misconfigurations early.
- Good conventional commit. Concise, imperative, both changes captured. Nice.
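Spelled out, the startup check the reviewer is suggesting is only a few lines at the top of the app's entry point. A sketch, assuming dotenv and an entry file that isn't part of the mission's two-file diff:

// Hypothetical entry point: fail fast if the secret is missing, before the
// server starts accepting requests.
require('dotenv').config();

if (!process.env.JWT_SECRET) {
  throw new Error('JWT_SECRET is not set; refusing to start');
}

// ...only now register routes and middleware, then call app.listen().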