
AI killed the tech interview. Now what?

622 comments

February 19, 2025

karaterobot

The best interview process I've ever been a part of involved pair programming with the person for a couple of hours, after a tech screening (a phone call with a member of the team). You never failed to know within a few minutes whether the person could do the job and be a good coworker. This process worked so well that it created the best, most productive team I've worked on in 20+ years in the industry, despite that company's other dysfunctions.

The problem with it is the same curse that has rotted so much of software culture—the need for a scalable process with high throughput. "We need to run through hundreds of candidates per position, not a half dozen, are you crazy? It doesn't matter if the net result is better, it's the metrics along the way that matter!"

m11a

I dislike pair programming interviews, as they currently exist, because they usually feel like a time-crunched exam. You don't realistically have the freedom to think as you would in actual pair programming. i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview, but it's pretty realistic of real-life work and entirely a non-problem. It's probably even good to test for at interview: how does a person work when they aren't working with an oracle that already knows the answer (ie: the interviewer)?

Pair programming with the person for a couple hours, maybe even on an actual feature, would probably work, assuming the candidate is compensated for their time. I can imagine it'd especially work for teams working on open source projects (Sentry, Zed, etc). Might not be as workable for companies whose work is entirely closed source.

Indeed, the other problem is what you mention: it doesn't scale to high throughput.

open592

> i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview

In all pair programming interviews I have run (which I will admit have been only a few) I would fail myself as an interviewer if I was not able to guide the interviewee away from a dead end within 15 minutes.

If the candidate wasn't able to understand the hints I was giving them, or just kept driving forward, then they would fail.

raffraffraff

Exactly! Who calls "researching how to build X, but then letting my pair-programming partner fall down a rabbit hole so I can feel superior" "pair programming"?

michaelcampbell

> I would fail myself as an interviewer

You are not most interviewers, alas.

karaterobot

That's definitely up to the interviewer, in whom a lot of discretion and trust has been placed. I think a lot of it also comes down to the culture of the company—whether they're cutthroat or supportive. As you get better people into the company, hopefully this improves over time. I know that when we did it, it was never about nailing it on the first try; it was literally about proving you knew how to program and were not an asshole. So, not the equivalent of reversing a binary tree on a whiteboard. The kinds of problems we worked on in the interviews weren't leetcode-type problems, they were real tickets from our current project. Sometimes it was just doing stuff like making a new component or closing a bug, but those were the things we really did, so it felt like a better test.

codr7

Hiring doesn't scale, period; deal with it.

darth_avocado

Exactly. People go like “the ideal way to interview is <whatever they themselves are the best at>”. Pair programming interviews suck and don’t scale, just like every other alternate way of hiring.

eikenberry

IMO interviewing is the biggest bottleneck and if interviewing was decoupled from hiring then it wouldn't be a problem. But this requires a Guild-like organization to manage interviewing/vetting and for companies to use said Guild for hiring. The companies could then do a single team culture meeting (if they wanted) before hiring.

tshaddox

For companies that successfully scale their team size, it literally did scale, right? I think you mean that hiring is very difficult to scale.

hassleblad23

Exactly. It's almost like optimising to find your best possible match for marriage. You don't go over a billion prospects; you choose from the ones locally available to you, as they come.

trhway

it isn't about scale. It is about a core principle of tech hiring: all the companies hire only the best. Not only is it impossible to scale, it is plain impossible. Even if all the companies hired only "above the average" it would still be a pretty tall order :)

barbazoo

> i.e. if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview

That’s an assumption. Perhaps following a dead end for a while, realizing it, pivoting, etc is a valuable, positive, signal?

m11a

I agree. But what I mean is: that's not how it's perceived in the current interview structure, which lasts maybe 45 minutes or so. Ultimately, going down a dead end means you'd now have 30 minutes to find the right solution and code it up. So the oracle (the interviewer) would probably help you realise sooner that it's a bad idea, so you don't waste your time. That's assuming they know the problem and solution well; if they don't, you'll just lose them and burn through your time.

In a 2 hour pair programming session on an 'unsolved' problem (like an open issue / minor bug / minor feature in a public repo), yes, it will likely not matter if you tried a bad idea, and would both be more realistic and a positive signal.

weitendorf

I do 1hr pair programming interviews for my company and you have to strike a balance between letting candidates think through the problem even when you think it won't work (to see their thought process and maybe be surprised at their approach working/see how quickly they can self-correct) and keeping them on track so that the interview still provides a good signal for candidates who are less familiar with that specific task/stack.

I'm also not actually testing for pair programming ability directly; it's more about the ability to complete practical tasks, work in a specific area, collaborate, and communicate. If you choose a problem that is general/expandable enough that good candidates for the position are unlikely to go down bad rabbit holes (e.g. for a senior fullstack role, create a minimal frontend and API server that talk to each other), it works just fine. Actually, with these kinds of problems it's kind of good if your candidates end up "studying" them like with leetcode, because it means they are just learning how to do the things that they'll do on the job.
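As a sense of scale, the "minimal frontend and API server that talk to each other" task can be sketched in a few dozen lines of stdlib Python. Everything here (the `/api/greet` endpoint, the handler and function names) is hypothetical, invented only to illustrate the shape of such an exercise:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    """Minimal API server: one JSON endpoint."""

    def do_GET(self):
        if self.path == "/api/greet":
            body = json.dumps({"message": "hello"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging to keep output readable.
        pass

def fetch_greeting(port):
    """The 'frontend' stand-in: a client call against the local API."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/api/greet") as resp:
        return json.loads(resp.read())["message"]

if __name__ == "__main__":
    # Port 0 asks the OS for any free port.
    server = HTTPServer(("127.0.0.1", 0), ApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(fetch_greeting(server.server_address[1]))
    server.shutdown()
```

The point of such a task is that it is expandable: a candidate can stop at this skeleton or keep going (error handling, POST routes, a real HTML page), which gives the interviewer signal at several levels.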

> maybe even on an actual feature

I don't think this would work unless the feature were entirely self-contained. If your workaround is to give the candidate an OSS project they need to study beforehand, I think that would bias candidates' performance in ways that aren't aligned with who you want to hire (eg how desperate are they for the role and how much time outside of work are they willing to put into your interview).

thayne

Another problem is it is difficult to compare candidates whose interviews involved working on completely different problems.

JumpCrisscross

> if you wag your tail chasing a bad end for 15 mins, this is a fail in an interview

Eh, if it's a reasonable bad end and you communicate through it, I wouldn't see it as a fail. Particularly if it's something I would have tried, too. (If it's something I wouldn't have thought of and it's a good idea, you're hired.)

delusional

I did a couple of rounds of this with my manager as the interviewer. Personally I really liked the process, and the feedback I got from the candidates was positive (but then again it always would be).

What worked well for me was that I made it very clear to my manager, a man whom I trust, that I would not be able to provide him with a boolean pass/fail result. I couldn't provide any objective measure of their ability or performance. What I could do was hang out with the candidates for an hour while we discussed some concepts I thought were important in my position. From that conversation I would be able to provide him a ranking, along with a personal evaluation of whether I would personally like to work with the candidate.

I prepared some example problems that I had worked through myself a bit. Then I went into the interviews with those problems and let the candidates direct those same explorations. Some of them took me on detours I hadn't taken on my own. Some of them needed a little nudge at times. I never took specific notes, but just allowed my brain to form a natural impression of the person. I was there to get to know them, not administer an exam.

I feel like the whole experience worked super well. It felt casual, but also very telling. It was almost like a focused date. Afterwards I discussed my impression of the candidates with my manager to ensure the things I was weighing were somewhat comparable to what he desired.

All in all it was a very human process. It must have taken an enormous amount of trust from my manager to allow me the discretion to make a subjective judgment. I was extremely surprised at how clearly I was able to delineate the people, but also how that delineation shifted depending on which axis we evaluated. A simple pass/fail technical interview would have missed that image of a full person.

joshdavham

I've (unfortunately) been interviewing the last two months and the main pattern that I've noticed is that a) big companies have terrible interview processes while b) small companies and startups are great at interviewing.

Big companies need to hire tons of people and interview even more so they need some sort of scalable process for it. An early stage startup can just ask you about your past projects and pair program with you for an hour.

solarmist

I hear this all the time, but I have yet to experience it. It may be because the small companies I interview with are all startups, but I have yet to get a callback from any other kind of small company. And the startups I do interview with have full FAANG interview loops.

There seems to be a weird selection bias that if you're FAANG or FAANG adjacent these small companies aren't interested.

dilyevsky

At a former gig we had a newly hired ex-Facebook employee give notice within a month because she didn't like that the dev setup had bugs that devs themselves had to fix. At FB they can obviously spend millions of dollars on a whole team that ensures a working dev env is always a button click away; a startup (even a scaleup) usually can't afford that. This is just one example out of many I could tell...

nickff

>”There seems to be a weird selection bias that if you're FAANG or FAANG adjacent these small companies aren't interested.”

Many smaller companies have noticed that former and wannabe FAANGers are looking for FAANG-type jobs, and are not good fits in their niche. Small companies often have more uncertainty, fewer clear objectives, less structure, and often lower pay. They’re not a good substitute for megacorps.

samr71

Yup. You can check out of FAANG anytime you like, but you can never leave.

Was path dependency for careers always this bad?

cableshaft

I've had a few good experiences with interviews at small companies and startups, so they do exist.

But I have also had really terrible experiences, similar to what you've mentioned. Sounds like you've just gotten unlucky and gotten the terrible ones.

codr7

Yeah, been there, done that; wannabe FAANGs are the worst.

Dylan16807

What exactly does "scalable" mean here?

If a startup can spend 20 man-hours filling a single position, why can't a big company spend 1000 man-hours filling 50 positions?

serial_dev

In a small company, you can tell your buddy “just have a chat with the candidate and if you like them and you think they can do the job, hire them”.

If the person interviewing your candidates messes up, you’ll know soon enough. In a large company, the bad people will take over and your company is dying a slow death.

That approach doesn’t work on a large scale. Some interviewers are too nitpicky or elitist; others approve anyone who uses the same language as them for side projects. Some are racist or sexist, or have other kinds of biases. Some might have a crush on the candidate. Sometimes the interviewer thinks about their own tasks while squeezing in an interview. In some countries, “undoing” a bad hire is hard, so they need to make sure that the candidate can work on any team (or at least on multiple teams reasonably well).

IMO for large companies it makes sense to standardize the interview process.

Also, in my opinion grinding leetcode is also a good personality check for FAANG hires: it shows the candidate can suck it up, study hundreds of hours, and do whatever they need to do to pass an arbitrary test, even if they themselves think it’s a broken process. The larger the company, the more this quality matters in candidates as they will need to deal with a lot of things they will probably not like.

dilyevsky

> If a startup can spend 20 man-hours filling a single position, why can't a big company spend 1000 man-hours filling 50 positions?

Because big companies are run by bean counters and they also don't require the same kind of talent that is useful to startups. There's less competition for hyper-specialized seniors and middle of the pack generalists.

joshdavham

> If a startup can spend 20 man-hours filling a single position, why can't a big company spend 1000 man-hours filling 50 positions?

Huh. That’s actually a great question! I don’t know.

tomnipotent

> small companies and startups are great at interviewing

Small companies have the benefit of the pressure to fill a role to get work done, the lack of bureaucratic baggage to "protect" the company from bad hires, and generally don't have enough staff to suffer empire-building.

Somewhere along the line the question changes from "can this candidate do the job that other people in this office are already doing?" to "can this candidate do the job of this imaginary archetype I've invented with seemingly impossible qualities".

tristramb

"We need to run through hundreds of candidates per position, not a half dozen"

But you don't! You only need to find the first person who is good enough to do the job. You do not need to find the best person.

dowager_dan99

You need to run through hundreds of candidates to find someone marginally qualified. I am not exaggerating.

samr71

Do we have different definitions of "marginally qualified"? Idk, I feel I'm a decent engineer - I can certainly do whatever leetcode medium they throw at me, as much as that counts for anything - and can actually code, but I still get maybe 1 callback per 50 applications.

Does "marginally qualified" mean "Ivy League Competitive Programmer PhD" or something?

slg

>The best interview process I've ever been a part of involved pair programming with the person for a couple hours... You never failed to know within a few minutes whether the person could do the job

There is something funny about the "best interview process" taking "a couple hours" despite giving you the answer "within a few minutes". Seems like even the best process is a little broken.

alwa

Lightly ironic indeed! Though I’m not sure “broken” is exactly the word I’d choose.

I can only speak for myself, but I imagine myself as a candidate approaching a “couple of hours” project or relationship differently than I would a “few minutes” speed round. For that matter I can think of people I know professionally who I only know through their behavior “on stage” in structured and stylized meetings of a half hour or an hour—and I don’t feel like I have any sense at all of how they would be as day-to-day coworkers.

If we sat down to work together, you’d probably have a sense in the first few minutes of whether or not we were going to work out—but that would be contingent on us going into it with the attitude that we were working together as colleagues toward a goal.

karaterobot

That's mainly because there were multiple pairing sessions, and even if you knew the person was going to pass, there are still a couple more people who need to meet them, and a schedule to make sure they're available to do that. Plus due diligence, etc.

Nor am I saying it was a perfect system, just the best I've seen in terms of results.

shihab

The biggest victims of these non-scalable processes are people without a good network. As an international PhD student, I am that person.

So now I have this weird dynamic: I get interview calls only from FAANG companies, the ones with the manpower to do the so-called "cursed" scalable interviews. But the smaller companies or startups, the ones who are a far better fit for my specialized skills, never call me. You need to either "know someone" or be from a big school, or there is zero chance.

ukd1

Pairing on something close to whatever real work they'd be doing, but familiar to the applicant, is my favorite way to evaluate someone (e.g. choose a side project, and pre-agree on adding a feature).

I don't care if someone uses modern tools (like AI assists), Google, etc. ("open book"), as that's how they'd want to work. Evaluating their style of thinking / solving problems, comms, and output is the win.

andyjohnson0

Some of us find the prospect of pairing with an unknown person in an unknown environment, and against the clock, to be very stressful.

andyjohnson0

Anecdote:

I've been interviewing recently and got through to the last round (of five...) with an interesting company. I knew the interview involved pairing, but I didn't expect: two people sitting behind me while I sat on a wobbly stool at a standing desk, trying to use a sticky wired mouse, a non-UK keyboard, and a very bright monitor (I normally use dark mode). They had a screen recorder running, and a radio was on in the next room.

I totally bombed it. I suspect just the two people behind me would have been enough though.

hirvi74

I would find trying to solve such problems with known people in known environments to be somewhat stressful too.

dowager_dan99

Very few people doing this sort of interview (they tend to be our best, most empathetic developers) are likely to cut a multi-hour planned process short after a few minutes. It will eat at least an hour of their (very expensive & valuable) time.

Also, how am I supposed to filter the hundreds of AI-completed assessments? Who gets this opportunity?

karaterobot

We didn't do assessments (if by that you mean take home assignments). This was partly a solution to that, since nobody thought they were a good idea. If you mean the phone screen, I think that would be a problem, yep, but it wasn't an issue back in 2016. Having them pair would weed out cheaters, but we would have to figure out a way to weed them out during the screening, I agree.

We also did not require the employees doing the interview to be our most senior team members. They probably did it more often than most people, but often because they volunteered to do it. Anyone on the team would be part of the loop, which helped with scheduling. And, remember, we were working on actual tickets, so in a lot of cases it actually helped having the candidate there as a pairing partner.

For a little extra detail, the way we actually did it was to have 2-3 pairing sessions of up to 2 hours apiece. At the end of the day, all the team members who paired with the candidate had to give them the thumbs up.

samr71

Use another AI, of course!

Idk if I'm even being sarcastic here.

CharlieDigital

Code reviews.

Teams are really sleeping on code reviews as an assessment tool. As in having the candidate review code.

A junior, a mid-level, a senior, and a staff engineer are going to see very different things in the same codebase.

Not only that, as AI generated code becomes more common, teams might want to actively select for devs that can efficiently review code for quality and correctness.

I went through one interview with a YC company that had a first round code review. I enjoyed it so much that I ended up making a small open source app for teams that want to use code reviews: https://coderev.app (repo: https://github.com/CharlieDigital/coderev)

jghn

This is harder than it sounds, although I agree in a vacuum the idea is a good one.

So much value of the code review comes from having actual knowledge of the larger context. Mundane stuff like formatting quirks and obvious bad practices should be getting hoovered up by the linters anyways. But what someone new may *not* know is that this cruft is actually important for some arcane reason. Or that it's important that this specific line be super performant and that's why stylistically it's odd.

The real failure mode I worry about here is how much of this stuff becomes second nature to people on a team. They see it as "obvious" and forgot that it's actually nuance of their specific circumstances. So then a candidate comes in and misses something "obvious", well, here's the door.

CharlieDigital

You can do code review exercises without larger context.

An example from the interview: the code included a python web API and SQL schema. Some obvious points I noticed were no input validation, string concatenating for the database access (SQL injection), no input scrubbing (XSS), based on the call pattern there were some missing indices, a few bad data type choices (e.g. integer for user ID), a possible infinite loop in one case.

You might be thinking about it in the wrong way; what you want to see is that someone can spot these types of logic errors that either a human or AI copilot might produce regardless of the larger context.

The juniors will find formatting and obvious bad practices; the senior and staff will find the real gems. This format works really well for stratification.
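The SQL-injection item from that list is easy to turn into a concrete review snippet. A minimal sketch (table and function names are hypothetical, and sqlite3 is used here only to make the flaw observable; the original exercise was a Python web API):

```python
import sqlite3

def get_user_names_unsafe(conn, user_id):
    # Flaw a reviewer should flag: user input is concatenated into the
    # SQL text, so crafted input can rewrite the query (SQL injection).
    query = "SELECT name FROM users WHERE id = " + user_id
    return [row[0] for row in conn.execute(query)]

def get_user_names_safe(conn, user_id):
    # Fix: a parameterized query passes the input as data, never as SQL.
    query = "SELECT name FROM users WHERE id = ?"
    return [row[0] for row in conn.execute(query, (user_id,))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

# The malicious input makes the unsafe version return every row,
# while the safe version matches nothing.
print(get_user_names_unsafe(conn, "1 OR 1=1"))  # ['alice', 'bob']
print(get_user_names_safe(conn, "1 OR 1=1"))    # []
```

A snippet like this stratifies candidates exactly as described: spotting the concatenation is junior-level, while explaining why the parameterized version is immune (input is bound as a value, never parsed as SQL) shows deeper understanding.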

williamdclt

> no input validation, string concatenating for the database access (SQL injection), no input scrubbing (XSS), based on the call pattern there were some missing indices, a few bad data type choices (e.g. integer for user ID), a possible infinite loop in one case

I'd say all this stuff is junior-level (maybe ~mid for things like user ID integers). It's just a checklist of "obvious bad practices", it doesn't require experience.

The senior stuff is much higher-level: domain modelling, code architecture, consistency guarantees, system resilience... system design in general.

jghn

Yes, ish.

In a previous job we did code review interviews, and went the route you said due to the problem I said. And yes, it's a lot better. But what also happened over time was that the bar slowly rose: because the "harder" pieces of that session started to seem rote to interviewers, they became viewed as table stakes.

Mind you this is true of any interview scheme that has a problem solving component to it. I'm not saying that the code review style is extra bad here, just that it doesn't solve this problem.

solarmist

In theory, you can do code reviews without larger context if the code is carefully selected. Apparently, some companies think any badly written code from their code base can just be selected, though.

conjectures

It's not so hard. One of the interview stages I did somewhere well known used this.

Here's the neural net model your colleague sent you. They say it's meant to do ABC, but they found limitation XYZ. What is going on? What changes would you suggest and why?

Was actually a decent combined knowledge + code question.

CharlieDigital

There are so many interesting ways to use code reviews, like subtly introducing defects and bugs and seeing whether people can follow the logic, read the code, and find where the reasoning comes up short.

I wrote up 7 general strategies for teams that are interested: https://coderev.app/blog/7-strategies-for-using-code-reviews...

justin_oaks

I like the code review approach and tried it a few times when I needed to do interviews.

The great thing about code reviews is that there are LOTS of ways people can improve code. You can start with the basics like can you make this code run at all (i.e. compile) and can you make it create the right output. And there's also more advanced improvements like how to make the code more performant, more maintainable, and less error-prone.

Also, the candidates can talk about their reasoning about why or why not they'd change the code they're reviewing.

For example, you'd probably view the candidates differently based on their responses to seeing a code sample with a global variable.

Poor: "Everything looks fine here"

Good: "Eliminate that global variable. We can do that by refactoring this function to..."

Better: "I see that there's a global variable here. Some say they're an anti-pattern, and that is true in most but not all cases. This one here may be ok if ..., but if not you'll need to..."
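The global-variable case above can be made concrete. A small sketch (all names are hypothetical, invented to illustrate the refactor the "Good" answer describes):

```python
# Before: module-level mutable state. Any caller can change it, tests
# interfere with each other, and concurrent use is unsafe.
request_count = 0

def handle_request_global():
    global request_count
    request_count += 1
    return request_count

# After: the state lives in an object the caller owns, so it is
# explicit, testable in isolation, and trivial to reset.
class RequestCounter:
    def __init__(self):
        self.count = 0

    def handle_request(self):
        self.count += 1
        return self.count

counter = RequestCounter()
print(counter.handle_request())  # 1
print(counter.handle_request())  # 2
```

The "Better" answer in the comment above would then go one step further and discuss when the global is actually acceptable (e.g. a module-level constant or a deliberately process-wide cache) rather than mechanically eliminating it.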

CharlieDigital

100%. It is more conducive to a conversational exchange, which gives you far better insight into how a developer thinks than leetcode does.

Coding for me is an intensely focused activity and I work from home to boot so most of the time, I'm coding in complete silence. It's very awkward to be talking about my thought process while I'm coding, but not talking is just as awkward!

ctkhn

Some of the most interesting interviews, and the ones I felt accurately assessed my skills (even the non-live ones), were debugging and code review assessments. I didn't get offers from those companies because I failed the leetcode rounds later in the process, but I felt the review interviews were a good way to be assessed.

CharlieDigital

Even more relevant now that we are in the age of code generation as the powertool for productivity.

solarmist

I loved the idea of code review interviews, and I've had several good ones, until yesterday, when I had my first bad one.

They asked me to review a function for a residential housing payment workflow, a domain I'm unfamiliar with, from an actual snippet of their bad production code (which has since been rewritten), in Go, which I've never used (I've never professionally used a language without built-in exception handling, for example).

I had to spend more than half of my time asking questions to get enough context: about Go error-handling techniques, about the abstractions they were using (for which we only had the import statements), and about how the external system was structured to handle these requests, just to review the hundred lines of code they shared.

I was able to identify a bunch of things incidentally, like making all of the DB changes part of a transaction so we don't get inconsistent state, or breaking up the function into sub-functions because the names were extremely long, but this was so far outside my area of expertise and comfort zone that I felt like I was shooting in the dark.

So just like any other interview style, they can be done very poorly.

typewithrhythm

Honestly this sounds like a successful "bad fit" signal (assuming that they work mostly with Go and payment systems).

Language and domain experience are things I'd like to know after an interview process.

solarmist

Honestly, it was also a red flag for me that they don’t actually know what they want and have bad communication between leadership and engineering. Prior to this interview I was already on the fence about them.

They don’t mostly work in Go. Even the interviewer said that he’s only vaguely familiar with this area of the code, and he doesn’t work in Go. They work mostly in Kotlin, and they are explicitly advertising for solid generalists.

whiplash451

I don't know. A cold code review on a codebase they never saw is not super informative about how the candidate would interact with you and the code once they're in known territory.

CharlieDigital

> A cold code review on a codebase they never saw

What do you think happens in the first few weeks of someone joining a new team? Probably reading a lot of code they never saw...

So yeah, I think it's the opposite: explicitly testing for their ability to read code is probably kinda important.

mikenei

The demo video on your homepage looks great! If I may ask, what recording software did you use to create and edit the video?

CharlieDigital

Screen Studio on Mac.

(OBS Elements other times)

wruza

Is there a site where one could review some code and see what many others say about it and their experience level?

I guess it would degrade to stackoverflow-like poems eventually, but still interesting.

supriyo-biswas

> Is there a site where one could review some code and see what many others say about it and their experience level?

https://codereview.stackexchange.com

gcrout

This could be like rap genius for code!

CharlieDigital

Not that I know of!

It would be interesting, but I agree it would need to be content moderated to some extent.

mclau156

github and issues page

endemic

I did this once and it was obvious the interviewer wanted me to point out his pet "gotcha." Not a great experience.

justin_oaks

Yup, that's just one of the many ways to do a code-review interview wrong.

Each code sample should have multiple things wrong. The best people will find most (not necessarily all) of them. The mediocre will find a few.

IshKebab

Yeah it's really tempting when you discover an interesting fact to think "that would make an interesting interview question" and turn the interview into some kind of pub quiz. Happens with all forms of technical interview though. I mean 90% of leetcode questions are "this one weird trick".

perlgeek

Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?

There's very likely a real answer to that question, and that answer should shape the way that engineer should be assessed and hired.

For example, it could be that the company wants the engineer to do some kind of assessment whether a feature should be implemented at all, and if yes, in what way. Then you could, in an interview, give a bit of context and then ask the candidate to think out loud about an example feature request.

It seems to me the heart of the problem is that companies aren't very clear about what value the engineers add, and so they have trouble deciding whether a candidate could provide that value.

juujian

The even bigger challenge is that hiring experts in any domain requires domain knowledge, but hiring has been shifted to HR. They aren't experts in anything, and for some years they made do with formulaic approaches, but that doesn't cut it anymore. So now if your group wants to get it done, and done well, you have to get involved yourself, and it's a lot of work on top of your regular tasks. Maybe more work because HR is deeply involved.

ghaff

>hiring has been shifted to HR

Well, unless you know sufficiently senior people. But I suspect that is a deeply unsatisfactory answer to many people in this forum.

My long term last, only technically-adjacent, job came through a combination of knowing execs, having gone to the same school as my ultimate manager, and knowing various other people involved. (And having a portfolio of public work.)

tempodox

Personal networks only disadvantage those who have none.

ctkhn

I saw this at the big corporate (not faang/tech) place I work at. Engineers run and score interviews, but we don't make the final decision. That goes to HR and the hiring manager, who usually has no technical background.

moodyredtimes

Yup, I have seen some really poor decisions as a result of this. I'm also curious what the effect of AI assistance during behavioral interviews, etc., will be.

BobaFloutist

HR are experts in HR, which is to say they have a broader view of the institutional needs and legal requirements of hiring and staffing than you do. It's always annoying when that clashes with your vision, but dismissing their entire domain is unlikely to help you avoid running into that dynamic again and again.

whiplash451

> hiring has been shifted to HR

Not everywhere. At my company, HR owns the process but we -- the hiring tech team -- own the content of interviews and the outcomes.

bongoman42

I've never seen hiring completely in the domain of HR. HR filters incoming candidates and checks for culture fit etc, but technical competency is checked by engineers/ML folks. I can't imagine an HR person checking if someone understands neural networks.

typewithrhythm

HR involvement is unavoidable at big companies; and basics like "years of experience for payband" can cause issues. They fundamentally do not understand the job, but somehow have to ensure it's not a biased hiring process.

michaelt

> Company A wants to hire an engineer, an AI could solve all their tech interview questions, so why not hire that AI instead?

Interview coding questions aren't like the day-to-day job, because of the nature of an interview.

In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.

It also has to be a problem that's solvable within about 40 minutes.

The problem needs to test the candidate meets the company's hiring bar - while also having enough nuance that there's an opportunity for absolutely great candidates to impress me.

And the problem has to be possible to state unambiguously. Can't have a candidate solving the problem, but failing the interview because there was a secret requirement and they failed to read my mind.

And of course, if we're doing it in person on a whiteboard (do people do that these days?) it has to be solvable without any reference to documentation.

gopher_space

> In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.

If you send me a rubric I can pre-load whatever you want to talk about. If you tell me what you're trying to build and what you need help with, I can show up with a game plan.

You need to make time for a conversation on the intricacies of voucher calculation and global sales tax law if you want to find people jazzed about the problem space.

qudat

> In an hour-long interview, I have to be able to state the problem in a way the candidate can understand, within 10 minutes or so. We don't have time for a lecture on the intricacies of voucher calculation and global sales tax law.

Proving whether they are technically capable of doing the job seems rather silly. Look at their resume, look at their online work, ask them questions about it. Use probing questions to understand the depth of their knowledge. I don't get why we are over-engineering interviews. If I have 10+ years of experience, with some proof through chatting that I am, in fact, a professional software engineer, isn't that enough?

theamk

Have you ever hired?

No, it's not enough. There are people out there who can talk a great talk, and have a great resume, but cannot do their actual job for some reason. Maybe they cannot read the code, maybe they cannot write the code, maybe they can write the code but not in the manner that keeps the rest of the codebase working... I've had people like that on my team, and it was miserable for all of us.

It is essential to see candidate actually write and debug code. It would be even better if we could see how the candidate deals with existing huge codebase, but sadly this kind of thing can't be easily done in a quick interview, and good candidates don't want trial periods.

janoc

>Interview coding questions aren't like the day-to-day job, because of the nature of an interview.

You have missed his point. If the interview questions are such that AI can solve them, they are the wrong questions being asked, by definition. Unless that company is trying to hire a robot, of course.

rurp

One of the best interviews I've encountered as a candidate wasn't exactly a pair programming session but it was similar. The interviewer pulled up a webpage of theirs and showed me a problem with it, and then asked how I would approach fixing it. We worked our way through many parts of their stack and while it was me driving most of the way we ended up having a number of interesting conversations that cropped up organically at various points. It was scheduled for an hour and the time actually flew by.

I felt like I got a good sense of what he would be like to work with and he got to see how I approached various problems. It avoided the live coding problems of needing to remember a bunch of syntax trivia on the spot and having to focus on a quick small solution, rather than a large scalable one that you need more often for actual work problems.

nottorp

Problem is, company A doesn't need an engineer to solve those interview questions but real problems.

placardloop

“Real problems” aren’t something that can be effectively discussed in the time span of an interview, so companies concoct unreal problems that are meant to be good indicators.

542354234235

On that, these unreal questions/problems are decent proxies for general knowledge for humans, but not for AI. Humans don't have encyclopedic knowledge, so questions on a topic can do a decent job of indicating a person has the broader depth of knowledge in that topic and could bring that to bear in a job. An AI can answer all the questions but can't bring that to bear in a job.

We saw this last year with all the "AI can now pass the bar exam" articles, but that doesn't lead to them being able to do anything approaching practicing law, because AI failure modes are not the same as humans' and AI can't be tested the same way.

0_____0

Really? How short are your interviews, and how big are these Real Problems such that you can't get a sense of how your candidate would start to tackle them?

okdood64

This is the answer.

Let's not pretend 95% of companies are asking asinine interview questions that LLMs can easily solve (though I understand the reasons why some do).

nottorp

Let's go one step further: LLMs can't solve anything, but most interview questions are covered so much online that they'll parrot a passable answer.

bitwizeshift

Tech interviews in general need to be overhauled, and if they were it’d be less likely that AI would be as helpful in the process to begin with (at least for LLMs in their current state).

Current LLMs can do some basic coding and stitch it together to form cool programs, but they struggle at good design work that scales. Design-focused interviews paired with a soft-skill focus are a better measure of how a dev will be in the workplace in general. Yet most interviews are just "if you can solve this esoteric problem we don't use at all at work, you are hired". I'd take a bad solution with a good design over a good solution with a bad design any day, because the former is always easier to refactor and iterate on.

AI is not really good at that yet; it’s trained on a lot of public data that skews towards worse designs. It’s also not all that great at behaving like a human during code reviews; it agrees too much, is overly verbose, it hallucinates, etc.

lanstin

I want to hire people who can be given some problem and will go off and work on it, coming to me with questions when specs are unclear or some weird thing crops up. AI is 100% not that. You have to watch it like a 15-year-old driver.

Imnimo

A company wants to hire someone to perform tasks X, Y and Z. It's difficult to cleanly evaluate someone's ability to do these tasks in a short amount of time, so they do their best to construct a task A which is easy to test, and such that most people who can do A can also do X, Y and Z.

Now someone comes along and builds a machine that can do A. It turns out that while for humans, A was a good indicator of X, Y and Z, for the machine it is not. A is easy for the machine, but X, Y and Z are still difficult.

This isn't a sign that the company was wrong to ask A, nor is it a sign that they could just hire the machine.

diob

It's because coding interview questions aren't so much assessing job skills as much as they are thinly veiled IQ tests.

I think if it was socially acceptable they'd just do the latter.

tptacek

Plenty of companies administer IQ tests. The reason everyone doesn't is that it doesn't work well.

callingbull

Nothing works well but IQ tests predict job performance better than anything else.

vasco

A lot of companies have IQ like tests, in particular big consulting companies like McKinsey and so on.

frankfrank13

McK's case interview is just as game-able as HackerRank style interviews. There are entire consulting clubs at many colleges that teach this exact interview style. It's true that it's harder (but not impossible) to use AI to help, but calling it an IQ-like test is true only as much as any other technical interview.

That being said, McK did create an entire game that they claim can't be studied for ahead of time. If the intention is to test true problem-solving skills, then maybe that's roughly equivalent to a systems interview, which is hard(er) to cheat.

codr7

And they're losing all but the worst candidates because of it, which explains a lot.

jelambs

> Using apps like GitHub Co-pilot and Cursor to auto-complete code requires very little skill in hands-on coding.

This is a crazy take in the context of coding interviews. First, it's quite obvious if someone is blindly copying and pasting from Cursor, for example. Second, figuring out what to do is a significant portion of the battle: if you can get Cursor to solve a complex problem, elegantly, and in one try, the likelihood that you're actually a good engineer is quite high.

If you're solving a tightly scoped and precise problem, like most coding interviews, the challenge largely lies in identifying the right solution and debugging when it's not right. If you're conducting an interview, you're also likely asking someone to walk through their solution, so it's obvious if they don't understand what they're doing.

Cursor and Copilot don't solve for that; they make it much easier to write code quickly once you know what you're doing.

brainwipe

I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice. That discussion went on for a good few minutes and by the end of it I was teetering on leaving because the fit wasn't good.

I agreed to do it as long as they understood that I felt it was a terrible way of assessing someone's ability to code. I was allowed to use any programming language because they knew them all (allegedly).

The solution was a pretty obvious bit-shift. So I wrote memory registers up on the board and did it in Motorola 68000 Assembler (because I had been doing a lot of it around that time), halfway through they stopped me and I said I'd be happy to do it again if they gave me a computer.

They offered me the job. I went elsewhere.

piyuv

You should’ve asked them “do you also mimic google’s compensation?”

wiejriljw

I work for a faang subsidiary. We pay well below average salary and equity. We finally got one nice perk, a very good 401k match. A few months later it was announced that the 401k match would be scaled back "to come in line with what our parent company offers". I thought about asking "will we be getting salaries or equity in line with what our parent company offers?" but that would have been useless. Management doesn't care. I'm job hunting.

CYR1X

Oh man I needed that in the clip for like a dozen interviews a decade ago.

janoc

This zinger I have to remember for the next time someone tries this whiteboard BS on me!

blitzar

"How many google shares do I get?"

whiplash451

> I was asked by an SME to code on a whiteboard for an interview (in 2005? I think?). I asked if I could have a computer, they said no. I asked if I would be using a whiteboard during my day-to-day. They said no. I asked why they used whiteboards, they said they were mimicking Google's best practice.

This looks more like a culture fit test than a coding test.

Rhapso

Yeah, very bad fit. Surprised they made an offer.

Folks getting mad about whiteboard interviews is a meme at this point. It misses the point. We CANT test you effectively on your programming skillbase. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.

It isn't that your interviewer knew all the languages, but that the language didn't matter.

I didn't get this until I was giving interviews. The instructions on how to give them are pretty clear. The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.

I know many interviewers also didn't get it, and it became just "do you know the trick to my puzzle". That pattern of failure is a good reason to deprecate whiteboard interviews, not "I don't write on a whiteboard when I program in real life".

timr

> We CANT test you effectively on your programming skillbase. So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.

Except, that's not what happens. In basically every coding interview in my life, it's been a gauntlet: code this leetcode medium/hard problem while singing and tapdancing backwards. Screw up in any way -- or worse (and also commonly) miss the obscure trick that brings the solution to the next level of algorithmic complexity -- and your interview day is over. And it's only gotten worse over time, in that nowadays, interviewers start with the leetcode medium as the "warmup exercise". That's nuts.

It's not a one off. The people doing these interviews either don't know what they're supposed to be looking for, or they're at a big tech company and their mandate is to be a severe winnowing function.

> It isn't that your interviewer knew all the languages, but that the language didn't matter.

I've done enough programming interviews to know that using even a marginally exotic language (like, say, Ruby) will drastically reduce your success rate. You either use a language that your interviewer knows well, or you're adding a level of friction that will hurt you. Interviewers love to say that language doesn't matter, but in practice, if they can't know that you're not making up the syntax, then it dials up the skepticism level.

jerf

They generally do not know what they are looking for. They are generally untrained, and if they are trained, the training is probably all about using leetcode-type problems to give out interviews that are sufficiently similar that you can run stats on the results and call them "objective", which is exactly the thing we are all quite correctly complaining about. Which is perhaps anti-training.

The problem is that the business side wants to reduce it to an objective checklist, but you can't do that because of Goodhart's Law [1]. AI is throwing this problem into focus because it is basically capable of passing any objective checklist, with just a bit of human driving [2]. Interviews can not consist of "I'm going to ask a question and if you give me the objectively correct answer you get a point and if you do not give the objectively correct answer you do not". The risk of hiring someone who could give the objectively correct answers but couldn't program their way out of a wet paper bag, let alone do requirements elicitation in collaboration with other humans or architecture or risk analysis or any of the many other things that a real engineering job consists of, was already pretty high before AI.

But if interviewing is not a matter of saying the objectively correct things, a lot of people at all levels are just incapable of handling it after that. The Western philosophical mindset doesn't handle this sort of thing very well.

[1]: https://en.wikipedia.org/wiki/Goodhart%27s_law

[2]: Note this is not necessarily bad because "AI bad!", but, if all the human on the other end can offer me is that they can drive the AI, I don't need them. I can do it myself and/or hire any number of other such people. You need to bring something to the job other than the ability to drive an AI, and you need to demonstrate whatever that is in the interview process. "I can type what you tell me into a computer and then fail to comprehend the answer it gives" is not a value-add.

bargainbin

When I joined my current team I found they had changed the technical test after I had interviewed but before I joined. A couple of friends also applied and got rejected because of this new test.

When I finally got in the door and joined the hiring effort I was appalled to find they’d implemented a leetcode-esque series of challenges with criteria such as “if the candidate doesn’t immediately identify and then use a stack then fail interview”. There were 7 more like this with increasingly harsh criteria.

I would not have passed.

nottorp

> can you have a real conversation (with a whiteboard to help) about how to solve the problem

And do you frame the problem like that when giving interviews? Or the candidates are led to believe working code is expected?

Rhapso

Do I? yes. I also teach my students that the goal of an interview is to convince the interviewer you are a good candidate, not to answer the questions correctly. Sometimes they correlate. Give the customer what they need not what they asked for.

Do I see others doing so? sadly no.

I feel like a lot of the replies to my comment didn't read to the end, I agree the implementation is bad. The whiteboard just isn't actually the problem. The interviewers are.

Unless they change mentality to "did this candidate show me the skills I am looking for" instead of "did they solve the puzzle", the method doesn't matter.

absolutelastone

> The goal isn't to "solve the puzzle" but instead to demonstrate you can reason about it effectively, communicate your knowledge and communicate as part of problem solving.

...while being closely monitored in a high-stakes performance in front of an audience of strangers judging them critically.

pockmarked19

That’s a skill you do need at Google if you’re going to survive. At least nowadays.

gedy

> So we test on a more relevant job skill, like can you have a real conversation (with a whiteboard to help) about how to solve the problem.

Everybody says that, but reality is they don't imho. If you don't pass the pet question quiz "they don't know how to program" or are a "faker", etc.

I've seen this over and over and if you want to test a real conversation you can ask about their experience. (I realize the challenge with that is young interviewers aren't able to do that very well with more experienced people.)

placardloop

+1 to all this. It still surprises me how many people, even after being in the industry for years, think the goal of any interview is to “write the best code” or “get the right answer”.

What I want to know from an interview is if you can be presented an abstract problem and collaboratively work with others on it. After that, getting the “right” answer to my contrived interview question is barely even icing on the cake.

If you complain about having to have a discussion about how to solve the problem, I no longer care about actually solving the problem, because you’ve already failed the test.

SJC_Hacker

I think you're severely underestimating how much just about every software company has bought into the FAANG philosophy, and how many candidates they get who can answer those questions correctly.

Yes if you don't communicate clearly, you will get points deducted. But if you can't answer the question nearly perfectly, its basically an immediate fail.

fdlaks

Unfortunately I used to think this was the main purpose of the interview as well, but have been proven wrong time and time again.

The only thing that matters in most places is getting to the optimal solution quickly. It doesn't matter if you explain your thought process or ask clarifying questions, just get to the solution and answer the time and space complexity correctly and you pass.

Like others have said I think this is a symptom of the sheer number of people applying and needing to go through the process, there is no time for nuance or evaluating people on if you would actually like to work with them or not.

bossyTeacher

> The offered me the job. I went elsewhere.

I am so happy that you did this. We vote with our feet and sadly, too many tech folks are unwilling to use their power or have golden handcuff tunnel vision.

thecleaner

Take a bow.

user432678

And my axe…

tokai

>I was allowed to use any programming language because they knew them all (allegedly).

Brainfuck time

codr7

Hehe, I have to remember to bring one of my custom Forths to the next interview.

xmx98

As an interviewer I’d just skip the questions and talk about your Forth haha

Clubber

>I was allowed to use any programming language because they knew them all (allegedly).

After 30 years of doing this, I find that typically the people who claim to know a lot often know very little. They're so insecure in their ability that they've tricked themselves into not learning anything.

Exoristos

Are there people who still aren't aware that FAANGs developed this kind of thing to bypass H1-B regulations?

brainwipe

I've accidentally been using an AI-proof hiring technique for about 20 years: ask a junior developer to bring code with them and ask them to explain it verbally. You can then talk about what they would change, how they would change it, what they would do differently, if they've used patterns (on purpose or by accident) what the benefits/drawbacks are etc. If they're a senior dev, we give them - on the day - a small but humorously-nasty chunk of code and ask them to reason through it live.

Works really well, and it mimics what we find is the most important bit about coding.

I don't mind if they use AI to shortcut the boring stuff in the day-to-day, as long as they can think critically about the result.

coffeefirst

Yep. I've also been using an AI-proof interview for years. We have a normal conversation, they talk about their work, and I do a short round of well-tested technical questions (there's no trivia, let's just talk about some concepts you probably encounter fairly regularly given your areas of expertise).

You can tell who's trying to use AI live. They're clearly reading, and they don't understand the content of their answers, and they never say "I don't know." So if you ask a followup or even "are you sure" they start to panic. It's really obvious.

Maybe this is only a real problem for the teams that offloading their interviewing skills onto some leetcode nonsense...

ttyprintk

This is a fine way. I’ll say that the difference between a senior and a principal is that the senior might snicker but the principal knows that there’s a chance the code was written by a founder.

ilc

And if the Principal is good, they should stand up and say exactly why the code is bad. If there's a reason to laugh because it is cliche bad, they should say so.

If someone gave me code with

if (x = 7) { ... } as part of a C eval.

Yeah, you'll get a sarcastic response back because I know it is testing code.

What I think people ignore is that personality matters. Especially at the higher levels. If you are a Principal SWE you have to be able to stand up to a CEO and say "No, sir. I think you are wrong. This is why." In a diplomatic way. Or sometimes less than diplomatically, depending on the CEO.

One manager that hired me was trying to figure me out. So he said (and I think he was honest at the time): "You got the job as long as you aren't an axe murderer."

To which I replied deadpan: "I hope I hid the axe well." (To be clear to all reading, I have never killed someone, nevermind with an axe! Hi FBI, NSA, CIA and pals!)

Got the job, and we got along great, I operated as his right hand.

micheles

Nowadays I am on the other side of the fence: I am the interviewer. We are not a FAANG, so we just use a SANE interview process. Single interview; we ask candidates about their CV, what their expectations are, and what their competences are, and we ask them to show us some code they have written. That's all. The process is fast and extremely effective. You can screen out weak candidates in minutes.

mparnisari

That process might work for your company precisely because you are not FAANG. You don't get hundreds of applicants that are dying to get in, so people don't have that strong of a motivation to do anything it takes (including lying) to get the job.

adastra22

I’ve worked at a company with 150,000 employees. The interview process was pretty much as described here. There is absolutely no reason a Big Co needs to operate any differently.

_the_inflator

Do you reevaluate them in predetermined intervals to see how your initial expectation matches the outcome?

itomato

With each Sprint, presumably.

FirmwareBurner

>we ask him to show us some code he has written

How do you expect them to get access to their employer's proprietary internal Git repo, and approval from the employer's lawyers to show it to third parties during the interview?

Sounds like you're only selecting FOSS devs and nothing more.

ramon156

Most people have still written code for school or a hobby project. Maybe I'm missing empathy, but I cannot understand how some developers have no code to show.

If that's the case however, just let them make a small project over the weekend and then do another interview where you ask stuff about what they've made. It's not that deep

apocalyptic0n3

I started writing code when I was 12 and started doing it professionally at 22. I'm now in my mid-30s and outside of work, I haven't written anything more than one-off scripts for my homelab in close to a decade. I'm already spending upwards of 50 hours with code each week and I need to do something else at night and on the weekends to release my brain from it. I also didn't go to school for CS, and even if I did... it was over a decade ago. So I have ~25 years of experience writing code but could not show you a single line of it. And even if I could, how would you know I was the one to write it?

This is an extremely flawed interview process in my opinion, and the last time I encountered it, it led to an awkward scenario that ended with me walking out. Personally, when I conduct interviews, it's a mix of things. We talk about your past work, I quiz you a bit on some topics you'd encounter in your day-to-day here, and then we'll spend an hour doing some combination of a code review of a working-but-flawed demo project I created, a 30-40 minute coding exercise, and/or a problem-solving scenario where I give you a problem and we talk through, as a pair, how we could solve it.

aleph_minus_one

> Most people have still written code for school or a hobby project. Maybe I'm missing empathy, but I cannot understand how some developers have no code to show.

First: they might have private code, but not necessarily code to show (I, for example, am rather not willing to show quite some of the code that I wrote privately).

Second: the kind of "code" that I tend to write privately (and into which I invest quite a lot of time) is really different from what I do at work, and what is actually considered "code" by many. It's more like (very incomplete) drawings and TeX notes about observations and proofs of properties and symmetries between some algorithms. Once finished, they will be very easy to systematically transform into a program in a computer language.

This is about very novel stuff, which to explain would take quite a lot of time.

williamdclt

> Most people have still written code for school or a hobby project

School was years and years ago, and has nothing to do with my current skills.

From the people i personally know, most do _not_ have a hobby project, even fewer have hobby projects that showcases their technical skills. Nor should they be expected to. Most people have non-programming hobbies.

> I cannot understand how some developers have no code to show.

It's really not that deep, I'm worried if you really cannot understand. I don't code outside of work, I'm not interested in doing it. I'm good at software engineering, not passionate about it. I have a bunch of other hobbies. There's no reason I'd have any code to show now or at any point in the future.

> let them make a small project over the weekend and then do another interview where you ask stuff about what they've made

If I'm paid for it, sure why not I could do that. I won't love it but hey I'm looking for a job, I'll put the legwork in. But if this is the only or the "preferred" interview process for a company, I need to point out that it is deeply discriminatory as it advantages people who have the time to do a weekend project: for example it benefits males disproportionally (women do most of the care work in any country, also the most house work, also have a higher chance to be a single parent, all of which impacts the time they can put in a "weekend project" if they can do it at all).

solumos

I don't code much outside of work. I have hobby projects from 10+ years ago, but they're not much more than landing pages copied from templates and wordpress installs. I mostly work in backend/data/platform engineering professionally.

If I were asked to make a small project over a weekend, I'd be likely to decline rather than doing a more standard interview, or I'd use AI to do it in a reasonable timeframe (which seems to defeat the purpose as it relates to this discussion)

pjmlp

School was a few decades ago, and the code I have on Github is mostly toy stuff I do in rainy weekends, most of us have a life without room to code outside work most of the time.

Friends, family, stuff to take care of.

justin_oaks

Like many of the other commenters, I have no code to show. I'm strongly motivated at work to solve problems and create correct, performant, maintainable code. I appreciate a job well done.

Outside of work, I just don't have the motivation to code anything. I don't have sufficient at-home problems where code will fix them.

In an interview, ask me anything! ... except to show you code on Github.

dennis_jeeves2

>Maybe I'm missing empathy,

Worse actually. There is more to life than code - unless you are a savant. Most of us aren't.

But it is the way you are; you probably know no better and are doing your best. What you can do is refuse to interview.

user99999999

Please share your GitHub @

user99999999

My worst code is always what I wrote yesterday. Often what’s missing is context, unless I comment ad nauseam. Sure, I didn’t write complete tests, obey open-closed principles, or abstract into factory functions. The code I send from my hobby projects is likely a mess, because finishing on my own time under my own unpaid constraints wills it to be so.

ttyprintk

Maybe you forked a library because of reasons. You can tour the original repo and explain the problems. I have at least one of those examples for each time the legal or confidentiality department stepped in.

FirmwareBurner

>Maybe

The word "maybe" is doing a lot of heavy lifting here. What if you never had to do that? Not everyone's work is public. In fact, I'd say most people's work is not public. Sometimes even the product is not public, since it's B2B.

Clubber

We do this too, works fine. We ask open ended questions like, "What's your favorite thing you've done in your career and why?" and "What was the most challenging project in your career and why?" If you listen, you can get a lot of insight from just those two questions. If they don't give enough detail, we'll probe a little.

Our "gotcha," which doesn't apply to most languages anymore is, "What's the difference between a function and a procedure." It's a one sentence answer, but people who didn't know it would give some pretty enlightening answers.

Edit: From the replies I can see people are a little defensive about not knowing it. Not knowing it is OK, because it was a question I asked people 20 years ago, relevant to a language long dead in the US. I blame the defensiveness on how FUBAR the current landscape is. Giving a nuanced answer to show your depth of knowledge is actually preferred; a one sentence answer is minimal.

I'm editing this because HN says I'm posting too fast, which is super annoying, but what can I do?

aleph_minus_one

> We do this too, works fine. We ask open ended questions like, "What's your favorite thing you've done in your career and why?" and "What was the most challenging project in your career and why?" If you listen, you can get a lot of insight from just those two questions. If they don't give enough detail, we'll probe a little.

The problem is: there is a very negative incentive to give honest answers. If I were to answer these questions honestly, I'd bring up some very interesting theorems (related to some deep algorithmic topics) that I proved in my PhD thesis. Yes, I would have loved to stay in academia, but I switched to industry because of the bad job prospects in academia - this is not what interviewers want to hear. :-(

> "What's the difference between a function and a procedure." It's a one sentence answer

The terminology here differs quite a lot in different "programming communities". For example

> https://en.wikipedia.org/w/index.php?title=Procedure&oldid=1...

says: "Procedure (computer science), also termed a subroutine, function, or subprogram",

i.e. there is no difference. On the other hand, Pascal programmers strongly distinguish between functions and procedures; here functions return a value, but procedures don't. Programmers who are more attracted to type theory (think Haskell) would rather consider "procedures" to be functions returning a unit type. If you rather come from a database programming background, (stored) procedures vs functions are quite different concepts.
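A minimal sketch of the Pascal-flavored distinction, in Python terms (type hints standing in for Pascal's keywords; the names are illustrative, not from the thread):

```python
def area(radius: float) -> float:
    """A 'function' in the Pascal sense: it computes and returns a value."""
    return 3.14159 * radius * radius

def print_area(radius: float) -> None:
    """A 'procedure' in the Pascal sense: it is called for its effect and
    returns nothing (or, in type-theory terms, the unit type)."""
    print(f"area is {area(radius):.2f}")
```

In Pascal the compiler enforces the split with the `function`/`procedure` keywords; in Python, as in most C-family languages, it is only a convention, which is roughly the subtlety being pointed out.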

I could go on and on. What I want to point out is that this topic is much more subtle than a "one sentence answer".

michaelt

> I'd bring up some very interesting theorems (related to some deep algorithmic topics) that I proved in my PhD thesis. [...] I switched to industry because of the bad job prospects in academia - this is not what interviewers want to hear.

In my experience you'll be fine giving that answer assuming you're going for the kind of programming job that hires PhDs.

You remind them you have a PhD - and in something deeply algorithmic. You can successfully answer any follow-up questions from them, as you literally have a PhD in the topic they're asking about. There's no shame in entering industry because you want jobs and money - in fact, those things are precisely what the hiring manager is able to offer you.

You'd rather be in academia but it doesn't have the pay and job security? Well, the hiring manager would rather be a snowboard instructor in Aspen but doesn't for the same reason. So you've got common ground with them.

jimbokun

> Yes, I would have loved to stay in academia, but I switched to industry because of the bad job prospects in academia - this is not what interviewers want to hear. :-(

I would love to hear that from a candidate I'm interviewing. Who can't relate to the distinction between your ideal job and the job that will actually pay you money?

Clubber

>The problem is: there is a very negative incentive to give honest answers. If I were to answer these questions honestly, I'd bring up some very interesting theorems (related to some deep algorithmic topics) that I proved in my PhD thesis.

This is unfortunate that you would get that response. FWIW, I would be interested in hearing all this in an interview and I would look at it favorably.

>What I want to point out is that this topic is much more subtle than a "one sentence answer".

Yes, you would definitely get bonus points for nuance. The one sentence answer was minimal. What it filters out are people who don't know anything about Delphi but applying for the job with highly embellished resumes hoping to get lucky. This was for software used in hospitals, so bugs or errant code could have pretty drastic consequences.

bdavisx

Here's an interesting thought on your "gotcha" - I'm 57 years old, been programming as a career for over 30 years, a lot of languages and I have no idea what the difference is.

michaelt

If I'm applying for a Java position and I claim to have Java experience on my resume, it's perfectly valid for them to ask me the difference between an int, an Integer, and a BigInteger.

But it's certainly not a universal question applicable to all programming languages.

Likewise, Clubber says in their post that their 'gotcha' question doesn't apply to most languages.

dennis_jeeves2

I have no idea either. I can easily look it up, though. You can often tell an inexperienced interviewer from the extremely domain-specific questions they ask, which _they_ happen to be familiar with.

ttyprintk

It’s ok to say that it’s never professionally mattered. No one has ever been paid to know that. “Are side effects a bad pattern?” Lotsa people have needed to know that on day one.

null

[deleted]

bluefirebrand

> Our "gotcha," which doesn't apply to most languages anymore is, "What's the difference between a function and a procedure."

My answer would be along the lines of "It's 2025, no one has talked about procedures for 20+ years"

masterj

> Single interview, we ask the candidate about *his* CV and what *his* expectations are, what are *his* competences and we ask *him* to show us some code *he* has written

You... might want to think about what implicit biases you might be bringing here

null

[deleted]

adregan

What I've been thinking about leetcode medium/hard as a 30-45 minute tech interview (with a few minutes of pleasantry and 10 minutes reserved for questions) is that it is only really likely to reveal two camps of people, taking it in good faith that they are not "cheating": those approaching the problem from first principles, and those who already know the solution.

Take maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.
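For reference, a minimal sketch of Kadane's algorithm in Python (assuming a non-empty input):

```python
def max_subarray(nums):
    """Kadane's algorithm: maximum subarray sum in O(n).

    At each element, either extend the previous best run ending here,
    or start a fresh run at the current element; track the best overall.
    """
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)  # extend the run, or restart at x
        best = max(best, current)
    return best
```

Two lines of insight, but as the comment says, arriving at them cold in a timed interview is a very different exercise from recognizing them.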

I often see comments like: this person had this huge storied resume but couldn't code their way out of a paper bag. Now having been that engineer stuck in a paper bag a few times, I think this is a very narrow way to view others.

I don't know the optimal way to interview engineers. I do know the style of interview that I prefer and excel at[0], but I wouldn't be so naive as to think that the style that works for me would work for all. Often I chuckle about an anecdote from the fabled I.P. Sharp: Ian Sharp would set a light meter on his desk and measure how wide an interviewee's eyes would get when he explained APL to them. A strange way to interview, but is it any less strange than interviewing people via leetcode problems?

0: I think my ideal tech screen interview question is one that:

1) has test cases;
2) ramps the test cases up gradually in complexity;
3) doesn't reveal the complexity all at once; the interviewer "hides their cards," so to speak;
4) is focused on a data structure rather than an algorithm, such that the algorithm falls out naturally rather than serving as the focus;
5) gives the candidate the opportunity to weigh tradeoffs, make compromises, and cut corners given the time frame;
6) doesn't combine big ideas (i.e. you shouldn't have to parse complex input and do something complicated with it); pick a single focus.

Interview questions like this that I have participated in and enjoyed: construct a Set class (union, difference, etc.); implement an RPN calculator (ramp up the complexity by introducing multiple arities); create a range function that works like the Python range function (for junior engineers, this one involves a function with different behavior based on arity).
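As an illustration of that last example, a sketch of a `my_range` (the name is mine) that mimics the built-in `range`'s dispatch on arity, returning a list for simplicity:

```python
def my_range(*args):
    """Mimic Python's range semantics, dispatching on arity:
    my_range(stop), my_range(start, stop), my_range(start, stop, step)."""
    if len(args) == 1:
        start, stop, step = 0, args[0], 1
    elif len(args) == 2:
        start, stop, step = args[0], args[1], 1
    elif len(args) == 3:
        start, stop, step = args
    else:
        raise TypeError("my_range expects 1 to 3 arguments")
    if step == 0:
        raise ValueError("step must not be zero")

    result = []
    i = start
    # step's sign decides the direction of the walk toward stop
    while (step > 0 and i < stop) or (step < 0 and i > stop):
        result.append(i)
        i += step
    return result
```

The appeal as an interview question is visible even in this sketch: arity handling, the zero-step edge case, and negative steps each arrive as a natural next "card" for the interviewer to turn over.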

dbrumbaugh

>Take maximum subarray problem, which can be optimally solved with Kadane's algorithm. If you don't know that, you are looking at the problem as Professor Kadane once did. I can't say for sure, but I suspect it took him longer than 30-45 minutes to come up with his solution, and I also imagine he didn't spend the whole time blabbering about his thought process.

This is something that drives me nuts in academia when it comes to exam questions. I once took an exam that asked us to invent vector clocks from whole cloth, basically, having only knowledge of a basic Lamport clock for context. I think one person got it--and that person had just learned about vector clocks in a different class. Given some time, it's possible I could have figured it out. But on an exam, you've got like 10-15 minutes per question.

The funny thing about it is that I do the same damn thing from the other side all the time when working with students. It's incredibly tempting once you know the solution to a problem (especially if you didn't "solve" it yourself, but had the solution presented to you already) to present the question as though it has an obvious solution and expect somebody else to immediately solve it.

I'm aware of the effect, I've experienced it many times, and I still catch myself doing it. I've never interviewed a candidate for a job, but I can only imagine how tempting it would be to fall into that trap.

jimbokun

Yes that's a tricky one.

When I'm interviewing a candidate, I'm often asking myself whether this question is just something I happen to know and therefore expect the candidate to know too, or whether it's crucial to doing the job.

Sometimes it may not be fair to expect a random developer to be familiar with a specific concept. But at the same time it might be critical to the kind of work we're doing.

codr7

The current job market is so messed up that I honestly can't see myself getting a job until we hit a wall and people start using their brains again.

I have 26 years of solid experience, been writing code since I was 8.

There should be a ton of companies out there just dying to hire someone with that kind of experience.

But I'm not perfect, no one is; and faking doesn't work very well for me.

svilen_dobrev

> There should be a ton of companies out there just dying to hire someone with that kind of experience.

heh.. they are probably dead already?

I have even more years than that. But this time I have been looking since... September? Applying 1-2 per day, on average, and widening the fishing net each month. ~2% showed some interest, but no bingo.

"overqualified" is about half of the "excuses" :/

Time to plant tomatoes maybe..

codr7

Or maybe join forces and show them how it's really done?

Not that I mind growing tomatoes, quite the opposite :)

ipunchghosts

I am with you! Been programming since I was 10 and have 20 YoE. Many of my prototypes have grown into full-fledged products, I have 40+ published papers, and I am regularly sought out for advice and help by those who know me. Everywhere I have been, I am told I am a good catch.

However, I won't do leet coding. I want to hear about why I should come work for you. What about my work makes you think I could help you with your problems? Then let's have a talk about your problems and where I can create value for you.

My experience in hiring is that leet coders are good one-trick ponies, but long term they don't become technical peers.

codr7

Part of the problem is there just aren't a lot of people out there who can correctly judge that level of experience, and looking up the spectrum tends to simply look weird.

ipunchghosts

I agree. Do you have any thoughts on how to mitigate this? After all, it's to your advantage, and the company's, to hire someone with talent because of the value they can bring.

alsobrsp

I mostly skipped the technical questions in the last few interviews I have conducted. I have a conversation: I ask them about their career, about job changes, about hobbies, about what they do after work. If you know the subject, skilled people talk a certain way, whether it is IT, construction, or sailing.

I do rely on HR having, hopefully, done their job and validated the work history.

I do have one technical question that started out as fun and quirky but has actually shown more value than expected. I call it the desert island cli.

What are your 5 linux cli desert island commands?

Having a hardware background, today mine are: vi, lsof, netcat, glances, and I am blanking on a fifth. We have been doing a lot of Terraform lately.

I have had several interesting responses:

Manager-level candidate with 15+ years of hands-on experience. He thought it was a dumb question because it would never happen. He became the team's manager a few months after hiring. He was a great manager and we are friends.

Manager level to replace the dumb question manager. His were all Mac terminal eye candy. He did not get the job.

Senior level SRE hire with a software background. He only needed two emacs and a compiler, he could write anything else he needed.

aleph_minus_one

> I have a conversation, ask them about their career, about job changes, about hobbies, what they do after work. If you know the subject, skilled people talk a certain way, whether it is IT, construction, sailing.

My experience differs a lot. Many insanely skilled people are somewhat "weird" (including possibly

- being a little on the spectrum,

- "living a little bit in their own world",

- having opinions on topics that are politically "inappropriate" (not in the sense of "being on the 'wrong' side of a political fence", but rather in the sense of "an opinion that is quite different than what you have ever heard in your own bubble", and is thus not "socially accepted")

- being a little bit "obnoxious" (not in bad sense, but in a sense that might annoy a particular kind of people))

What you consider to be "skilled people" is what I would rather call "skilled self-promoters" (or possibly "smooth talkers"). "Skilled people" and "skilled self-promoters" are quite different breeds of people.

alsobrsp

> My experience differs a lot. Many insanely skilled people are somewhat "weird" (including possibly

I am actually a bit weird myself, so I can relate.

> What you consider to be "skilled people" is what I would rather call "skilled self-promoters". "Skilled people" and "skilled self-promoter" are quite different breeds of people.

I don't mean that they have told me that they are skilled, or that their resume has implied it. I mean that they actually have the skills. Self-promoters that don't know the information always look good on paper, but after a few minutes of talking to them you can tell that they don't quite match.

Before IT, I was a live sound engineer: TV, theater, music. There was also an entertainment university starting up around the same time. They were pumping out tons of "trained" engineers who looked good on paper but couldn't mix for shit. I think we can blame them for the shitification of pop music.

aleph_minus_one

> Self-promoters that don't know the information always look good on paper, but after a few minutes of talking to them you can tell that they don't quite match.

My experience differs here: these are not "good self-promoters", but impostors.

Good self-promoters typically have some above-average (though commonly not really exceptional) skills in their area, but their expertise is in the capability of smooth talking (including smalltalk), promoting their contributions, and talking at eye level with various stakeholders.

If you are really exceptional in your area, you will often (though not always) consider smalltalk a waste of your time, and will often have difficulties talking at eye level with various stakeholders, because either they are not sufficiently knowledgeable in your area of expertise to understand you, or the other way round. (For the latter point: becoming really great in one area often means that you won't have the time to get sufficiently deep into a lot of other areas, even though for some of them you might become quite skilled if you had more time.)

adregan

I have an ice breaker type question which is “what’s something (tool, tech, whatever) you are interested/excited about and wish people knew more about?” Selfishly, interviewing is kind of boring, so I’m always looking to learn something new.

Sadly, out of 100s of people, I’ve probably only gotten an interesting response a handful of times. Mostly people say some well known tech from the job description.

I never held that against anyone, but the people who had an interest in something were more fun to work with.

dennis_jeeves2

>I mostly skipped the technical questions in the last few interviews I have conducted. I have a conversation

Sir, you have attained dizzying intellectual heights that few men have.

My comment is meant to be a compliment, not snarky. And indeed I have noticed that the best people I have encountered can often size people up accurately with very general questions often on unrelated subjects.

alsobrsp

Thank you. I took it as both. :)

lappet

Do you have network access? I would pick ssh.

jessekv

> blanking on a fifth

grep? (or ripgrep if allowed)

arccy

busybox: so many tools bundled into one binary

alsobrsp

Excellent answer.

michaelt

> What are your 5 linux cli desert island commands?

Are you familiar with busybox ?

LPisGood

> Manager level to replace the dumb question manager. His were all Mac terminal eye candy. He did not get the job

Huh? Please explain

alsobrsp

We hired the guy who said it was a dumb question; he became our manager. He then decided to retire and we had to replace him. One of the candidates answered the five-commands question with terminal eye candy, not functional commands. He was not hired for the job.

jakubmazanec

The problem isn't AI; the problem is that companies don't know how to properly select between candidates, and they don't apply even the basics of psychometrics [1]. Do they do item analysis of their custom coding tests? Do they analyse the new hires' performance and relate it to their interview scores? I seriously doubt it.

Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.

[1] https://en.wikipedia.org/wiki/Psychometrics

michpoch

> Also, the best (albeit the most expensive) selection process is simply letting the new person do the actual work for a few weeks.

What kind of desperate candidate would agree to that? Also, what do you expect to see from the person in a few weeks? Usual onboarding (company + project) will take like 2-3 months before a person is efficient.

jakubmazanec

Candidate would be compensated, obviously. That's why it's expensive.

You don't need him to become efficient. Also, I don't think such a long onboarding is always necessary. I'll never understand why a new hire (at least in a senior position) can't start contributing after a week.

michpoch

> Candidate would be compensated, obviously. That's why it's expensive

Ok... take me through it. I apply to your company, and after a short call you offer to have me spend 4 weeks working at your place instead of an interview.

I go back to my employer, give them resignation letter, work the rest of my notice period (2 months - 3 months), working on all handovers, saying goodbyes.

Unless the idea is to compensate me for the risk (I'd guess at least 6 months' salary, probably more), I do not see how you'd get anyone but a poor candidate to sign up for this.

> You don't need him to become efficient

So what will you see? Efficiency, being independent and being a good team player are the main things that are difficult to test during a regular interview.

askonomm

And so that self-selects for people who are already unemployed, right? Most developers I know (including myself) look for a new job while still having one, so as not to create a financial hole in between. I'd be curious whether that doesn't then end up attracting lower-quality candidates, who ended up unemployed to begin with.

noirbot

I'd argue the bigger expense is on the team having to onboard what could potentially be a revolving door of temporary hires. Getting a new engineer to the point where they understand how things work and the specific weirdness of the company and its patterns is a pretty big effort at anywhere I've worked.

michpoch

> can't start contributing after a week.

Because you have zero context of what the org is working on.

Tade0

If you work with Boring Technology, your onboarding process has no reason to be longer than a week, unless you're trying to make the non-tech parts of the role too interesting.

michpoch

> unless you're trying to make the non-tech parts of the role too interesting.

Unless your role is trivial to replace with an LLM, you need to understand the business. Maybe not for a really junior role, but for everything above that you need to solve issues. Tech is just a tool.

datadrivenangel

How do you control for confounders and small data?

For data size, if you're a medium-ish company, you may only hire a few engineers a year (1000 person company, 5% SWE staff, 20% turnover annually = 10 new engineers hired per year), so the numbers will be small and a correlation will be potentially weak/noisy.

For confounders, a bad manager or atypical context may cause a great engineer to 'perform' poorly and leave early. Human factors are big.

jakubmazanec

Sure, psychological research is hard because of this, but that's not what I'm proposing - I'm talking about just having some data on the predictive validity of the hiring process. If there's a coding test: is it reliable and valid? Aren't some items redundant because they're too easy or too hard? Which items have the best discrimination parameter? How do the total scores correlate with, e.g., the length of the test takers' tenures?

Sure, the confidence intervals will be wide, but it doesn't matter, even noisy data are better than no data.

Maybe some companies already do this, but I didn't see it (though my sample is small).

kace91

My last interview, for the job I'm currently employed in, involved a take-home assignment where I was allowed to use any tool I'd regularly use, including AI. A similar process applied for the live coding interview that followed, iterating on the take-home. I personally used AI to speed up writing the initial boilerplate and test suites.

I fail to see why this wouldn't be the obvious choice. Do we disallow linters or static analysis on interviews? This is a tool and checking for skill and good practices using it makes all sense.

knowaveragejoe

As someone on the other side of the table, I don't care if you used AI to complete a take-home project. I care whether you can explain the strengths and weaknesses of the approach it took, or whether you chose to steer it in one direction or another. It usually becomes quite clear who actually understands what the AI did for them.