Do The Things

Stop Talking, Start Doing

A Mind at Work: Why the Best Interview Loops Are Already AI-Ready

“Before I look for anything, I look for a mind at work.” — Sam Seaborn, The West Wing

AI-assisted development is going to be the default. Not eventually — soon. The engineers on your team will work with AI daily, and the ones you hire next will need to be effective in that world.

So the interview loop needs to keep up. But maybe not in the way you think.

The Disconnect

Picture the near future: your team ships with AI assistance as a matter of course. Engineers prompt, review, iterate, and direct AI tools the way they once used Stack Overflow and IDE autocomplete — reflexively, as part of the workflow.

And then you open a req, and the interview loop is… the same one you’ve been running since 2019.

Whiteboard an algorithm. Solve this LeetCode medium. No internet, no tools, no AI — just you, a marker, and the pressure of someone watching you think.

You’re hiring for a world that’s about to stop existing.

The Real Divide Was Always There

Here’s what I’ve come to realize: the interviews that need to change for AI aren’t the ones that were good in the first place.

If you were already probing for a mind at work — critical thinking, second-order impact, design tradeoffs, security implications — then you were already interviewing for candidates who’ll be effective with AI. You just didn’t know it yet.

Think about it. The skills that make someone dangerous with AI tools are the same skills that great interviewers were always looking for:

  • Can they decompose a problem? That’s what good system design rounds test. It’s also exactly what effective AI prompting requires — breaking a vague goal into clear, testable pieces.
  • Can they spot what’s wrong? If your interview had candidates review code and find concurrency bugs, security holes, or architectural time bombs — congratulations, you were training for AI-generated code review before AI-generated code existed.
  • Do they think about second-order effects? “This works, but what happens when traffic spikes?” “This is secure now, but what assumptions does it make about trust boundaries?” That kind of thinking is the differentiator between someone who blindly accepts AI output and someone who catches the subtle failures.
  • Can they articulate why? Not just “this is wrong” but “this is wrong because it assumes X, and X breaks when Y.” That’s the skill that turns AI from a slot machine into a collaborator.

The interviews that were already testing for these things? They don’t need an AI overhaul. They need to keep doing exactly what they’re doing.

What Actually Needs to Change

The interviews that are in trouble are the ones that were always testing for the wrong things — AI just made it obvious.

  • Memorization rounds. Can you recall the mechanics of a red-black tree rotation? This was already a weak signal. Now it’s a useless one. Nobody needs to memorize what they can generate in seconds. The question was never “do you know the algorithm?” — it should have been “do you know when and why to use it?”
  • Artificial constraint rounds. No internet, no docs, no tools. This tested a skill that hasn’t been relevant since broadband. Now it’s testing for a skill that’s actively counterproductive — the ability to work without the tools your job requires.
  • Syntax-on-whiteboard rounds. Can you write compilable code with a marker? This was always more hazing ritual than signal. Now it’s hazing for a skill nobody uses.

These formats were already failing to predict job performance. AI didn’t break them — it just removed any remaining justification for keeping them around.

The Awkward Middle

Right now we’re in the worst part of the transition. Organizations are all over the adoption spectrum — some are AI-native, some are still in “wait and see” mode with policies that actively prevent engineers from using AI at work. Your candidates are coming from all of these places.

That creates mismatches in every direction:

The AI-fluent engineer in a traditional loop. A great engineer bombs a LeetCode round because they haven’t implemented Dijkstra’s algorithm from memory in three years. Why would they? They use AI to implement it in 30 seconds and spend their energy on the parts that actually matter — like whether Dijkstra’s is even the right choice.

The whiteboard ace who can’t evaluate. A candidate aces the algorithm but struggles on the job because they never developed the judgment muscles. They can implement a solution but can’t evaluate one. They can write code but can’t review it critically.

The polyglot thinker in a language-specific loop. A candidate who’s worked across Python, Go, and TypeScript thinks in patterns and concepts rather than language-specific idioms. They pick the right tool for the job and lean on AI to handle the syntax they’re rusty on. But your loop requires idiomatic Rust from memory, so they look “weak” — even though their cross-language fluency is exactly what makes them effective in an AI-assisted world. There will always be demand for deep language expertise. But filtering exclusively for it misses candidates whose strength is synthesizing concepts across ecosystems.

The sharp thinker from a no-AI shop. A candidate has never used AI tools professionally — their org hasn’t allowed it yet. But they decompose problems cleanly, spot security implications unprompted, and think three steps ahead about system behavior. The AI tools can be learned. That kind of thinking can’t. If your loop is looking for a mind at work, this person passes. If your loop is looking for AI experience, you just filtered out one of your strongest candidates.

All three of these are interview design problems, not candidate problems. And the last one is the most important to get right — because penalizing candidates for their previous employer’s adoption timeline has nothing to do with their ability to do the job.

So What Do You Actually Do?

If you’re redesigning your loop, start by asking: were we already testing for thinking, or were we testing for trivia?

If you were testing for thinking — keep it. Maybe let candidates use AI tools during the session, not because the AI matters, but because removing artificial constraints lets you focus on what you were already evaluating: how their mind works.

If you were testing for trivia — this is your excuse to fix it. Some ideas:

Problem decomposition. Give a messy, real-world problem. Watch how they break it down. Tools allowed, AI allowed, internet allowed. Judge the approach, not the syntax. This was always a good interview. Now it’s a necessary one.

Code review. Show them code — AI-generated or not — with subtle issues. Not syntax errors. Concurrency bugs. Security vulnerabilities. Logic that works for the happy path but crumbles on edge cases. Can they find the problems? Can they explain why they’re problems? This is the skill gap AI is widening: the distance between people who can write code and people who can evaluate it.
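For concreteness, here is a sketch of the kind of snippet such a round might use. The function and its bugs are invented for illustration: it passes a casual read and handles the happy path, but a strong candidate should surface the failures noted in the comments.

```python
def percentile(values, p):
    """Return the p-th percentile of a list of numbers (0 <= p <= 100)."""
    ordered = sorted(values)
    # Happy path: works for typical inputs like percentile([1, 2, 3, 4], 50).
    index = int(len(ordered) * p / 100)
    # Hidden issues a reviewer should catch:
    #   - p == 100 computes index == len(ordered), raising IndexError
    #   - an empty list raises IndexError instead of a meaningful error
    #   - nearest-rank vs. interpolation is an unstated design choice
    return ordered[index]
```

The point of the exercise is not the fix; it is whether the candidate can name each failure and explain why it matters.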

Design with tradeoffs. System design, but push on the second-order effects. What happens at scale? What are the security implications? What assumptions are baked in? The candidates who think this way will use AI effectively because they’ll question what it produces. The ones who don’t will accept the first plausible answer — from a human or a machine.

The Uncomfortable Question

Here’s where it gets spicy: if you’re an interviewer who runs a memorization-heavy loop, you’re already misaligned — and AI just widened the gap. You’re testing for skills that don’t predict job performance using methods that haven’t changed in a decade.

This isn’t about whether you’ve adopted AI yet. It’s the same adoption curve playing out everywhere, and nobody should be penalized for where they are on it — interviewers included. But interview reform does require understanding what you’re selecting for. If the goal is to find a mind at work, you need to be clear on what that looks like. And that means engaging with how the work is actually changing, even if your own workflow hasn’t caught up yet.

You don’t have to be an AI power user to run a great interview. You do have to stop testing for things that no longer matter.

The Broader Signal

The interview question is really a proxy for a bigger one: does your organization actually understand what effective engineering looks like, or have you always been testing for proxies and just never had a reason to question them?

AI didn’t create a new set of skills to interview for. It revealed which interviews were already measuring the right things and which were always theater. The best interviewers — the ones who were probing for critical thinking, design sense, security awareness, and the ability to reason about tradeoffs — they’ve been running AI-ready interviews all along.

The rest of the industry just needs to catch up.


This post was drafted with AI assistance, refined with human judgment, and will probably be evaluated by both. Meta? Maybe. But if you can’t tell which parts are which — that’s kind of the point.