AI Broke Interviews
63 comments
November 1, 2025
neilv
> Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with the LeetCode style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers.
Before Google, AFAIK, it was ad hoc among good programmers. I only ever saw people talking with people about what they'd worked on, and about the company.
(And I heard that Microsoft sometimes did massive-ego interviews early on, but fortunately most smart people didn't mimic that.)
Keep in mind, though, that this was before programming was a big-money career. So you had people who were really enthusiastic, and people for whom it was just a decent office job. People who wanted to make lots of money went into medicine, law, or finance.
As soon as the big-money careers were on for software, and word got out about how Google (founded by people with no prior industry experience) interviewed... we got undergrads prepping for interviews. Which was a new thing, and my impression is that the only people who would need to prep for interviews either weren't good, or were some kind of scammer. But then eventually those students, who had no awareness of anything else, thought that this was normal, and now so many companies just blindly do it.
If we could just make some other profession the easier big money, maybe only people who are genuinely enthusiastic would be interviewing. And we could interview like adults, instead of like teenagers pledging a frat.
danpalmer
Prepping for interviews has been a big deal forever in most other industries though. It's considered normal to read extensively about a company, understand their business, sales strategies, finances, things like that, for any sort of business role.
I think tech is and was an exception here.
makeitdouble
What you're describing sounds to me like simply caring about the place where we'll be spending half a decade or more, and which will have the most impact on our health, finances, and social life.
I'd advise anyone to read the available financial reports of any company they intend to join, except if it's an internship. You'll spend hours interviewing and years dealing with these people; you may as well take an hour or two to figure out whether the company is sinking, or a scam, in the first place.
throwaway98797
kinda silly given how little most people can actually infer from financials and marketing copy
really, company reviews are all that matter, and even those have limited value since your life is determined by your manager
best you can do is suss out how your interviewers are faring
are they happy? are they stressed? everything else has so much noise it's worse than worthless
neilv
It was good standard advice even for programmers to know at least a little about the company going in. And you should avoid typos and spellos on your resume.
But no "prep" like months of LeetCode grinding, memorizing the "design interview" recital, practicing all the tips for disingenuously passing "behavioral" rounds, etc.
ThrowawayR2
IIRC Google had an even higher bar in their early days: candidates had to submit a transcript showing a very high GPA and they usually hired people only from universities with elite CS programs. No way to prep for that.
They only gave it up years later when it became clear even to them it wasn't benefiting them.
makeitdouble
> many companies just blindly do it.
Yes. A while ago a company contacted me to interview, and after the first "casual" round they told me their standard process was going full leetcode in the second round, and that I was advised to prepare if I was interested in going further.
While that's the only company that was so upfront about it, most accept that leetcode is dumb (it needs prep even for a working engineer) and still base the core of their technical interview on it.
dwohnitmok
> And we could interview like adults, instead of like teenagers pledging a frat.
I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.
> people who are genuinely enthusiastic
This seems absurdly difficult to measure well and gameable in its own way.
The flip side of "ad hoc" interviewing, as you put it, was an enormous amount of capriciousness. Being personable could count for a lot (being personable in front of programmers is definitely a different flavor of personable than in front of frat bros, but it's just a different flavor all the same). Pressure interviews were fairly common, where you would intentionally put the candidate in a stressful situation. Interview rubrics could be nonexistent. For all the cognitive biases present in today's interview process, older interviews were rife with far more.
If you try to systematize the interview process and make it more rigorous you inevitably make a system that is amenable to pre-interview preparation. If you forgo that you end up with a wildly capricious interview system.
Of course, you rarely have absolutes. Even the most rigorous modern interview systems often still have capriciousness in them, and there was still some measure of rigor to old interview styles.
But let's not forget all the pain and problems of the old style of interviews.
nradov
Being personable does count for a lot in any role that involves teamwork. Certain teams can maybe accommodate one member whose technical skills make up for bad interpersonal skills as a special exception, but one is the limit.
captainkrtek
I’ve conducted about 60 interviews this year, and have spotted a lot of AI usage.
At first I was quite concerned; then I realized that in nearly all cases where I'd spotted usage, a pattern stood out.
Of the folks I spotted, all spoke far too clearly and linearly when it came to problem solving. No self-doubt, no weighing of different approaches, no appearance of thought; just a clear A->B solution. Then, because they often didn't ask any requirements questions beyond what I initially provided, the solution would be inadequate.
The opinion I came to is that even in the best pre-AI-era interviews I conducted, most engineers contemplate ideas, change their mind, and ask clarifying questions. Folks mindlessly using AI don't do this and instead just treat me as the prompt input and repeat it back. Whether or not they were using AI (ultimately, I can't know), they still fail to meet my bar.
Sure, some more clever folks will mix or limit their LLM usage and get past me, but oh well.
DenisM
I interviewed a guy in person and he paused for 5 seconds, then wrote a perfect solution. I tried making the problem more and more complicated and he nailed it anyway, also after a brief pause. We were done in half the time.
Maybe he just memorized the solution, I don’t know.
Would you fail that guy?
captainkrtek
It depends; I've had some interviews like this that I was suspicious of. For context, most of the interviews I conduct are technical-design discussions, less coding. So where we go is quite open-ended, and there are many reasonable solutions.
In those cases where I’ve seen that level of performance, there have been (one or more of):
- Audio/video glitches.
- The candidate pausing after each question, saying nothing, then showing sudden clarity and fluency on the problem.
- The candidate suggesting multiple specific ideas/points in response to each question I ask.
- Eyes visibly reading back and forth (note: if you use AI in an interview, maybe don't use a 4K webcam).
- Way too much specificity when I didn't ask for it. For example, the topic of profiling a Go application came up, and the candidate suggested go tool pprof with a few specific arguments that weren't relevant; later I found the exact same example commands verbatim in the documentation.
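(For reference, a generic go tool pprof session looks roughly like the sketch below. These are standard commands from the Go tooling, not the candidate's actual ones; Foo is a placeholder function name.)

    # grab a 30s CPU profile from a service that imports net/http/pprof
    go tool pprof 'http://localhost:6060/debug/pprof/profile?seconds=30'

    # or profile a benchmark, then explore the result interactively
    go test -bench=. -cpuprofile=cpu.prof
    go tool pprof cpu.prof
    (pprof) top10      # hottest functions by flat time
    (pprof) list Foo   # annotated source for placeholder function Foo
    (pprof) web        # call-graph visualization (requires graphviz)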
All in all, the impression I come away with from those interviews is that the candidate performed "too well," in an uncanny way.
I worked at AWS for a long time and did a couple hundred interviews there; the best candidates were distinctly different in how they solved problems and how they communicated, in ways that reading from an LLM response can't resemble.
ekropotin
Jumping straight to the optimal solution may also indicate that the candidate has seen the problem before.
captainkrtek
The funny thing is, they don't. They often jump to a solution that is lacking in many ways, because it barely addresses the few inputs I gave (since they asked no follow-ups, even when I suggested they ask for more requirements).
tavavex
Can I ask - out of the 60 interviews, roughly how many times did you suspect AI usage?
captainkrtek
Probably about 10 or so.
Freedom2
> most engineers contemplate ideas, change their mind, ask clarifying questions
I don't disagree at all. I find it slightly funny that, in my experience interviewing at FAANG and YC startups, the signs you mentioned would be seen as "red flags". And that's not just my assumption: when I asked for feedback on the interview, I multiple times received feedback along the lines of "candidate showed hesitation and indecision with their choice of solution".
captainkrtek
Yeah, that is definitely subject to the interviewer's opinion and maybe company culture. To me, question-asking is a great thing, though the candidate eventually needs to start solving.
saulpw
Hotshot FAANG and YC startups don't want humans; they want zipheads[0].
amrocha
The real problem will come in 5 years, when the current university students whose brains are being melted by AI, and who somehow luck into entry-level positions, can't ever get to senior level because they're too reliant on AI and literally don't know how to think for themselves. There will never again be as many senior engineers as there are today. There won't be any good engineers left to hire.
Look around you. 15 years ago we didn't have smartphones in every pocket, and now kids are so addicted to them that they're giving themselves anxiety and depression. Not just kids, but kids have it the worst. You know it's going to be even worse with AI.
floundy
Most departments at companies run on zero to two good engineers anyway. The rest are personality and nepotism hires limping along some half-baked project or sustainment effort.
Most people in my engineering program didn’t deserve their engineering degrees. Where do you think all these people go? Most of them get engineering jobs.
artyom
The article somewhat implies that, before AI, the leetcode/brainteaser/behavioral interview process had acceptable results.
The reality is that AI just blew up something that was a pile of garbage, and the result is exactly what you'd expect.
We all treat interviewing in this industry as a human resources problem, when in reality it's an engineering problem.
The people with the skills to assess technical competency are even scarcer than actual engineers (because they would have to be engineers with people skills for interviewing), and those people are usually far too busy to be bothered with what is (again, perceived as) a human resources problem.
The rest is just random HR personnel pretending they know what they're talking about. AI just exposed (even more) how incompetent they are.
bluGill
The results did filter out a few people who could not think.
I recently interviewed someone who was a senior engineer on the Space Shuttle, but managed a call center after that. Whether this person could still write code was a question we couldn't answer, and so we had to pass. (We can't prove it, but we think we ended up with someone who outsourced the work elsewhere - but at least that person could code if needed, as proved by the interview.)
habosa
For our coding interviews we encourage people to use whatever tools they want. Cursor, Claude, none, doesn’t matter.
What I’m looking for is strong thinking and problem solving. Sometimes someone uses AI to sort of parallelize their brain, and I’m impressed. Others show me their aptitude without any advanced tools at all.
What I can't stand is the lazy AI candidates. People who I know can code, asking Claude to write a function that does something completely trivial and then saying literally nothing in the 30 seconds that it "thinks". They're just not trying. They're not leveraging anything; they're outsourcing. It's just so sad to see how quick people are to be lazy; to me it's like ordering food delivery from the place under your building.
sega_sai
I am teaching a coding class, and we had to switch to in-person interview/viva assessments of the code written by students, to deal with AI-written code. It works, but it requires a lot of extra effort on our side. I don't know if it is sustainable...
xandrius
Why wouldn't something like this work?
1. Get students to work on a project more complex than usual (relative to previous cohorts). Let them use whatever they want, and let them know that AI is fine.
2. Make them come in for an in-person exam where they answer questions about the why of the decisions they had to make during the project.
And that's it? I believe that if you can a) produce a fully working project meeting all functional requirements, and b) argue about its design with expertise, you pass. Do it with AI or not.
Are we interested in supporting people who can design something and create it, or just in having students follow the whims of professors who are unhappy that studying looks different now?
sega_sai
A project doesn't quite work for my course, as we teach different techniques and would like to see knowledge of each of them.
But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.
hahajk
If I read your suggestion correctly, you're saying the exam is basically the student explaining their decision-making around their code to a board. That sounds great in theory, but in practice it would be very hard to grade. Or at least, how could someone fail? If you let them use AI, you can't really fault them for not understanding the code, can you? Unless you teach the course as 1. use AI and then 2. verify. And step 2 requires an understanding of coding, and the experience to recognize bad architecture. Which requires you to think through a problem without the AI telling you the answer.
dagmx
I’ve mentioned it before, but it’s not just that people “cheat” during interviews with an LLM…it’s that they have atrophied a lot of their basic skills because they’ve become dependent on it.
Honestly, the only ways around it for me are
1. Have in person interviews on a whiteboard. Pseudocode is okay.
2. Find questions that trip up LLMs. I'm lucky because my specific domain is one LLMs are really bad at, since we deal with hierarchical and temporal data. The questions are easy for a human, but the multi-dimensional complexity trips up every LLM I've tried.
3. Prepare edge cases that require the candidate to reconsider their initial approach. LLMs are pretty obvious when they throw out things wholesale
staticautomatic
Rather than trying to trip up the LLM I find it’s much easier to ask about something esoteric that the LLM would know but a normal person wouldn’t.
dagmx
That basically amounts to the same thing. LLMs are pretty good at faking responses to conversational questions.
alyxya
Interviews are fundamentally really difficult to get right. On one side, you could try to create the best, fairest standardized interview process based on certain metrics, but people will eventually optimize for how well they do on the standardized interview, making it less effective. On the other side, you could create a customized, ad hoc interview to try to learn as much about the candidate as possible, and have them do a work trial for a few days to ensure they're the right candidate, but this takes a ton of time and effort from both the company and the candidate.
I personally think the best interview format is the candidate doing a take-home project and giving a presentation on it. It feels like the most comprehensive yet minimal way to assess a candidate on a variety of metrics: it tests coding ability in the project, real system design rather than hypothetical design, communication skills, and depth of understanding when the interviewer asks follow-up questions. It would be difficult to cheat this with AI, since you would need a solid understanding of the whole project for the presentation.
harpiaharpyja
It's funny how this article seems to repeat itself halfway through, like it was written by AI
storus
Universities and education overall also had their foundations detonated by AI. Some Stanford classes now give tricky 15-minute exams to reduce the chance of cheating with AI (typing the exam into an AI takes time, so the point is to make the exam so short that one can't physically cheat well). I am not sure what the solution for this mess is going to be.
nradov
Several possible solutions:
1. Strict honor code that is actually enforced with zero tolerance.
2. Exams done in person with screening for electronic devices.
3. Recognize that generative AI is going to be ambient and ubiquitous, and rework course content from scratch to focus on the aspects that only humans still do well.
storus
Only 3) could scale, but then exam takers not using AI would fail unless they are geniuses in many areas. 1) and 2) can't be done when 50-70% of your course consists of online students (Stanford mixes on-campus students with CGOE external students who take the exams off-campus), who are important for your revenue. Proctoring won't work either, as one could have two computers: one for the exam, one for the cheating (done in interviews all the time now).
shinycode
Maybe it's time to ask deeper questions: ask how to reduce complexity while preserving meaning. Do real pair programming with shared remote code and simulate as much as possible a real day-to-day environment. Not all companies are searching for the same kind of developer. Some don't really care about the person as long as the tech skills are there. Some don't look for the brightest in favor of a better cultural match with the team. Genuine remote interviews aren't easy, and a lot depends on the interviewer's skills. We've been told for years that AI will replace developers; would Elon replace the engineers working on the software of his rockets with AI? It depends what's at stake. I bet their interviews are quite specific and researched thoroughly. We can find better ways to create a real connection in interviews, and still make sure the tech skills are sound without leetcode. We also need developers who master the use of AI and still have real skills: thinking and designing before coding, and deep code-review skills.
mtneglZ
I still think "how many golf balls fit in a 747" is a good interview question. No one needs to give me a number, but someone could really wow me by outlining a real plan to estimate it: tell me how you would subcontract estimating the size of the golf ball and the plane. It's not about a right or wrong answer but about explaining to me how you think. I do software and hardware interviews and always did them in person so we can focus on how a candidate thinks. You can answer every question in my interview wrong and still be above the bar because of how you show me you can think.
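For instance, a back-of-envelope pass might look like the sketch below (illustrative numbers only; the ~900 m^3 interior volume is a rough guess, not a spec):

    # rough Fermi estimate, illustrative numbers only
    ball_d = 4.3                        # golf ball diameter, cm
    ball_vol = 3.14159 / 6 * ball_d**3  # sphere volume, ~41.6 cm^3
    packing = 0.64                      # random close packing fraction
    plane_vol = 900e6                   # ~900 m^3 of usable interior, in cm^3 (guess)
    print(plane_vol * packing / ball_vol)  # ~1.4e7, call it ten million

The final number matters far less than the plan: ball volume, packing fraction, usable plane volume, and a sanity check on each.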
pdpi
Some of the best hires I’ve ever made would’ve tanked that sort of interview question. Being able to efficiently work through those puzzles is probably decent positive signal, but failure tells me next to nothing, and a question that can fail to give me signal is a question that wastes valuable time — both mine and theirs.
A format I was fond of when I was interviewing more was asking candidates to pick a topic — any topic, from their favourite data structure to their favourite game or recipe — and explain it to me. I gave the absolute best programmer I ever interviewed a “don’t hire” recommendation, because I could barely keep up with her explanation of something I was actually familiar with, even though I repeatedly asked her to approach it as if explaining it to a layperson.
tavavex
I feel like the stereotype about this question is different from your approach, though: supposedly, it started with quirky, tech-minded businesses using it rationally to find people who could solve open-ended problems, and evolved into everyone using it because it was the popular thing. If someone still uses it today, I would totally expect the interviewer to have a number up on their screen, and answers that are too far off would lead to a rejection.
Besides, it's too vague of a question. If I were asked it, I would ask so many clarifying questions that I would not ever be considered for the position. Does "fill" mean just the human/passenger spaces, or all voids in the plane? (Cargo holds, equipment bays, fuel and other tanks, etc). Do I have access to any external documentation about the plane, or can I only derive the answer using experimentation? Can my proposed method give a number that's close to the real answer (if someone were to go and physically fill the plane), or does it have to be exactly spot on with no compromises?
bluGill
The problem is many people want to grade the answer for correctness instead of for thinking. It is easy to figure out the correct answer and tell HR the candidate was off by some amount, so "no". It is much harder to tell HR that even though someone was within some amount of correct, you shouldn't hire them because they can't think (despite getting a correct answer).
highfrequency
If AI can solve all of your interview questions trivially, maybe you should figure out how to use AI to do the job itself.
Gigachad
The questions were just a proxy for the knowledge you needed. If you could answer the questions, you must have learned enough to be able to do the work. We invented a way to answer the test questions without being able to do the work.