Ask HN: What is interviewing like now with everyone using AI?
734 comments
· February 2, 2025
fhd2
The last time I used a leetcode-style interview was in 2012, and it resulted in a bad hire (who just happened to have trained on the questions we used). I've hired something like 150 developers so far, and this is what I ended up with after a few years of trial and error:
1. Use recruiters and network: Wading through the sheer volume of applications was nasty even before COVID; I don't even want to imagine what it's like now. A good recruiter or a recommendation can save a lot of time.
2. Either do no take home test, or one that takes at most two hours. I discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
3. Put the candidate at ease - nervous people don't interview well, which is another problem with non-trivial tasks in technical interviews. I rarely do any live coding; when I do, it's pairing, and for management roles, e.g. to probe how they handle disagreement and such. Developers mostly shine when not under pressure, and I try to see that side of them.
4. Talk through past and current challenges, technical and otherwise. This is by far the most powerful part of the interview IMHO. Had a bad manager? Cool, what did you do about it? I'm not looking for them having resolved whatever issue we talk about, I'm trying to understand who they are and how they'd fit into the team.
I've been using this process for almost a decade now, and currently don't think I need to change anything about it with respect to LLMs.
I kinda wish it were more merit-based, but I haven't found a way to do that well yet. Maybe it's me, or maybe it's just not feasible. The work I tend to be involved in seems way too multi-faceted for a single standard test to seriously predict how well a candidate will do on the job. My workaround is to rely on intuition for the most part.
Stratoscope
When I was interviewing candidates at IBM, I came up with a process I was really happy with. It started with a coding challenge involving several public APIs, in fact the same coding challenge that was given to me when I interviewed there.
What I added:
1. Instead of asking "do you have any questions for me?" at the very end, we started with that general discussion.
2. A few days ahead, I emailed the candidate the problem and said they were welcome to do it as a take-home problem, or we could work on it together. I let them know that if they did it ahead of time, we would do a code review and I would ask them about their design and coding choices. If they wanted to work on it together, they should consider it a pair programming session where I would be their colleague and advisor. Not some adversarial thing!
3. This is the innovation I am proud of: a segment at the beginning of the interview called "teach me something". In my email I asked the candidate to think of something they would like to teach me about. I encouraged them to pick a topic unrelated to our work, or it could be programming related if they preferred that. Candidates taught me things like:
• How to choose colors of paint to mix that will get the shade you want.
• How someone who is bilingual thinks about different topics in their different languages.
• How the paper bill handler in an ATM works.
• How to cook pork belly in an air fryer without the skin flying off (the trick is punching holes in the skin with toothpicks).
I listed these in more recent emails as examples of fun topics to teach me about. And I mentioned that if I were asked this question, I might talk about how to tune a harmonica, and why a harmonica player would want to do that.
This was fun for me and the candidate. It helped put them at ease by letting them shine as the expert in some area they had a special interest in.
chucksmash
"Teach me something" is how the test prep companies would interview instructors. Same idea - have a candidate explain a topic they are totally comfortable with - but they were more focused on how engaging the lesson was, how the candidate handled questions they didn't expect, etc., more so than I expect you would if you use this in an IC coding interview. It's a neat idea though; I can imagine lots of different ways it would be handy.
martypitt
Kudos to you - this sounds like a fantastic interview format.
I especially like that you're prepping them very clearly in advance, giving them every opportunity to succeed, and clearly setting a tone of collaboration.
In-person design and coding challenges are a pressure cooker, and not real-world. Giving people the choice seems like a great way to achieve the balance.
Honestly, I'm really just commenting here so that this shows up in my history, so I can borrow heavily from this format next time I need to interview! :) Thanks again for sharing.
vanceism7_
That sounds really cool. I wish I was running into more job interviews like the one you describe. The adversarial interviewing really hurts the entire feel of the process
palata
> The adversarial interviewing really hurts the entire feel of the process
Agreed. I have been through technical interviews where at the end, my feeling as a candidate was "I don't want to work with this asshole".
ddingus
Man, I really hate it!
I can't fathom how making the whole process hostile makes any real sense!
We all want that great person to show up and really help, not just cope or put up a facade to survive.
Yet, that is exactly what happens. People that other people expect to bond with get put through the wringer....
Seems counterproductive to me.
asalahli
Love the 3rd point! I might start using that in my future interviews
Stratoscope
Please report back when you do!
I don't remember how I came up with the idea. Maybe I just like learning things.
One candidate even wrote after their interview, "that was fun!"
Have you ever had a candidate say that? This was the moment when I realized I might be on to something. :-)
Interviews are too often an adversarial thing: "Are you good enough for us?"
But the real question is would we enjoy working together and build great things!
People talk about "signal" in an interview. Someone who has an interest they are passionate and curious about and likes to share it with others? That's a pretty strong signal to me.
Even if it has nothing to do with "coding".
snickerer
I immediately want to learn about all these cool things you listed.
I work as a developer and as an interviewer (both freelance). Now I want to integrate your point 3. into my interviews, but not to choose better candidates, just to learn new stuff I never thought about before.
It is your fault that I now see this risk in my professional life, coming at me. I could get addicted to "teach me something". 'Hey candidate, we have 90 minutes. Just forget about that programming nonsense and teach me cool stuff.'
biztos
What, what? Harmonicas are tunable? TIL...
Stratoscope
Oh yes, they are!
You just need a small file, like a point file. Any gasheads remember those? And a single edge razor blade or the like to lift the reed.
To raise the pitch, you file the end of the reed, making it lighter so it vibrates faster.
To lower the pitch, you file near the attached end of the reed. I am not sure on the physics of this and would appreciate anyone's insight.
The specific tuning I've done many times is to convert a standard diatonic harp (the Richter tuning) to what is now called the Melody Maker tuning.
The Richter tuning was apparently designed for "campfire songs". You could just "blow, man, blow" and all the chords would sound OK.
Later, blues musicians discovered that you could emphasize the draw notes, the ones that are easy to bend to a flat note to get that bluesy sound. This is called "cross harp". For example, in a song in G you would use a C harp instead of one tuned in G.
The problem with cross harp is that the 7th is a minor 7th and you have no way to raise it up to a major 7th if that would fit your song. And the 2nd is completely missing! In fact you just have the tonic (G in this case) on both the draw and blow notes where you might hope to hit the 2nd (A). There is no A in this scale, only the G twice.
To imagine a song where this may be a problem, think of the first three notes of the Beatles song All My Loving. It starts with 3-2-1. Oops, I ain't got the 2. Just the 1 twice.
This is where the file comes in. You raise the blow 1st to a major 2nd. And you raise the minor 7th to a major 7th in both octaves.
Now you have a harp with that bluesy sound we all love, but in a major scale!
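For anyone who wants to see the layout written out, here's an illustrative sketch in Python. The hole numbers and note names follow the standard C Richter layout, and the changes are the Melody Maker retuning as described above (for cross harp in G); treat the exact hole assignments as my reading of the comment, not gospel:

```python
# Standard 10-hole Richter layout for a C harmonica (blow/draw notes per hole).
richter = {
    "blow": ["C", "E", "G", "C", "E", "G", "C", "E", "G", "C"],
    "draw": ["D", "G", "B", "D", "F", "A", "B", "D", "F", "A"],
}

def to_melody_maker(layout):
    """Melody Maker retuning for cross harp in G: raise the duplicated
    tonic (blow 3, G) to the major 2nd (A), and raise the minor 7th
    (draw 5 and 9, F) to the major 7th (F#)."""
    mm = {name: notes[:] for name, notes in layout.items()}
    mm["blow"][2] = "A"   # hole 3 blow: G -> A
    mm["draw"][4] = "F#"  # hole 5 draw: F -> F#
    mm["draw"][8] = "F#"  # hole 9 draw: F -> F#
    return mm

melody_maker = to_melody_maker(richter)
```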
thakoppno
Isn’t every physical thing tunable since its materials have a resonant frequency?
ddingus
I spent a decade and change doing adult instruction in CAD. Early on, many were still transitioning off 2d hand drawings. Boy, was that an art! I got to do a few the very old school way and have serious respect for the people who can produce a complex assembly drawing that way. Same for many shapes that may feature compound curves, for example.
But I digress!
Asking them what they wanted or would teach me was very illuminating and frankly, fun all around! I was brought a variety of subjects and not a single one was dull!
Seems I had experiences similar to yours.
One of my questions was influenced by someone who I respected highly asking, "what books are on your shelf at home?"
This one almost always brought out something about the candidate I would have had no clue about otherwise. As time went on, it became more about titles, because fewer people maintain a physical bookshelf!
chipsrafferty
Are you still interviewing?
Stratoscope
I am! But I am now on the other side of the virtual table.
IBM conducted a mass layoff a few months ago, and as our little team lost our only customer at the time (this is public information), many of us were let go. Ah well.
One door closes, another opens.
If anyone out there is curious about "who is this guy with the strange and interesting ideas about interviewing", you can find my LinkedIn and other contact info in my HN profile.
biztos
As someone with a pretty long career already, and who's comfortable talking about it, I was a bit surprised that in three interviews last year nobody asked a single thing about any of my previous work. One was live coding and tech trivia, the other two were extensive take-home challenges.
To their credit, I think they would have hired "the old guy" if I'd aced their take-homes, but I was a bit rusty and not super thrilled about their problem areas anyway so we didn't get that far. And honestly it seems like a decent system for hiring well-paid cogs in your well-oiled machine for the short term.
Your method sounds like what we were trying to do ten years ago, and it worked pretty well until our pipeline dried up. I wish you, and your candidates, continued success with it: a little humanity goes a long way these days.
palerdot
So, did you find a company that you are happy with (interviewing or otherwise)? I would be really interested to know how you are dealing with tech landscape changes lately, and your plans for staying in tech ...
Wishing you all the best for your career!
kamaal
>>Either do no take home test, or one that takes at most two hours. I discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
The biggest problem with take home tests is not the people who don't show up because they couldn't finish the assignment, but that the people who do now expect to get hired.
95% of people don't finish the assignment. 5% do. Teams think submitting the assignment with a 100% feature set, unit test cases, an onsite code review and onsite additional feature implementation still shouldn't mean a hire (if nothing else, there are just not enough positions to fill). From the candidate's perspective, it's pointless to spend a week working and doing everything the hiring team asked for and still receive a 'no'.
I think if you are not ready to pay 2x-5x market comp, you shouldn't use take home assignments to hire people. There is too much work to do, only to receive a reject at the end. Upping the comp absolutely makes sense: working for a week to get a chance at earning 3x more is a trade worth making.
sage76
Most of the time, those take home tests cannot be done in 2 hours. I remember one where I wasn't even done with the basic setup in 2 hours, installing various software/libraries and debugging issues with them.
kamaal
People mostly fix some issue, or implement a feature at work. And then somebody outside can do the same in the same time.
In reality they need to look at this like onboarding a new team member.
mavhc
If you're expecting a week's worth of work from them you'd better pay them for their time, if they turn up.
wkat4242
> From the candidate's perspective, it's pointless to spend a week working and doing everything the hiring team asked for and still receive a 'no'.
Uhhh yeah. That would really piss me off. Like review-bombing-Glassdoor levels of pissed.
Fr3dd1
We did a lot of these assignments, and no one assumed they would be hired if they completed it. It's about how you communicate your intent. I always told the candidates that the goal of the task is 1. to see some code and check that some really basic stuff is on point, and 2. to see that they can argue with someone about their code.
dolmen
If I have a public portfolio of existing projects on GitHub, couldn't that replace an assignment? Choose one of my projects (or let me choose one), and let's discuss it during the review interview.
kamaal
>>We did a lot of these assignments, and no one assumed they would be hired if they completed it. It's about how you communicate your intent.
Be upfront that finishing the assignment doesn't guarantee a hire and very likely the very people you want to hire won't show up.
Please note that as much as you want good people to participate in your process, most good talent doesn't like to waste its time and effort. How would you feel if someone wasted your time and effort?
jkaplowitz
How does that process handle people who have been out of work for a few years and can pass a take-home technical challenge (without an LLM) but cannot remember a convincing level of detail on the specifics of their past work? I’ve been experiencing your style of interview a lot and running up against this obstacle, even though I genuinely did the work I’m claiming to have done.
Especially people with ADHD don’t remember details as long as others, even though ADHD does not make someone a bad hire in this industry (and many successful tech workers have it).
I do prefer take-home challenges to live coding interviews right now, or at least a not-rushed live coding interview with some approximate advance warning of what to expect. That gives me time to refresh my rust (no programming language pun intended) or ramp up on whichever technologies are necessary for the challenge, and then to complete the challenge well even if taking more time than someone who is currently fresh with the perfectly matched skills might need. I want the ability to show what I can do on the job after onboarding, not what I can do while struggling with long-term unemployment, immigration, financial, and family health concerns. (My fault for marrying a foreigner and trying to legally bring her into the US, apparently.)
And, no, my life circumstances do not make it easy for me to follow the common suggestion of ramping up in my “spare time” without the pressures of a job or a specific interview task. That’s completely different from when I can do on the job or for a specific interview’s technical challenge.
sbarre
This is slightly tangential to your questions, but to address the "remembering details about your past work", I've long-encouraged the developers I mentor to keep a log/doc/diary of their work.
If nothing else, it's a useful tool when doing reviews with your manager, or when you're trying to evaluate your personal growth.
It comes in really handy when you're interviewing or looking for a new job.
Julia Evans calls it a "brag doc": https://jvns.ca/blog/brag-documents/
It doesn't have to be super detailed either. I tell people to write 50% about what the work was, and 50% about what the purpose/value of the work was.. That tends to be a good mix of details and context.
Writing in it once a month, or once a sprint, is enough. Even if it's just a paragraph or two..
jkaplowitz
Yeah, my resume does include a summary of what I did at each job, but it sounds like the brag doc idea would involve much more detail.
Honestly, putting those details in writing in a way that I retain after leaving might leave me vulnerable to claims of violating my corporate NDA. But legalities aside, yes, it would help with these kinds of interview issues - prospectively only of course, not retrospectively.
Annoyingly, it's also exactly the kind of doc that's difficult for people with ADHD to create and maintain rigorously, even though people with ADHD are more likely than average to need the reminders of those details. A lot of ADHD problems are that kind of frustrating catch-22, especially when interacting with a world that is largely designed around more typical brain types.
red-iron-pine
One of my buddies was a Navy psychologist assistant. He was a corpsman (Navy medic) who moved that way.
The dude maintained a "me wall" of achievements, commendation medals, etc., plus a file full of other crap. He mentioned he got a lot of promotions because you have a section to fill out where you describe all the shit you did over the last year, and he always came with examples...
fhd2
I can't say I've interviewed someone to whom this applies - unfortunately! Probably just doesn't get surfaced by my channels.
I would definitely not expect someone out of work for a while to have any meaningful side projects. I mean, if they do, that's cool, but I bet the sheer stress of not having a job kills a lot of creativity and productivity. Haven't been there, so I can only imagine.
For such a candidate, I'd probably want to offer them a time limited freelance gig rather quickly, so I can just see what they work like. For people who are already in a job, it's more important to ensure fit before they quit there, but your scenario opens the door to not putting that much pressure on the interview process.
jkaplowitz
Thanks for being understanding and compassionate about the situation! I like your idea of a time limited freelance gig, though immigration obstacles do sometimes make that quite complicated (as in my current situation). It's way better than not considering that people might have a reason for a resume with gaps or other irregularities which is not being a bad employee.
Beyond immigration types of obstacles, somehow recruiters and hiring managers rarely consider how many bad managers, bad executives, and bad companies there are when evaluating gaps and short tenures in an employee's resume. But equally, discussing those matters during an interview risks being seen as unprofessional and unreasonably negative. "Why did you leave this company?" "Oh, the warnings I got about the leadership from a former employee before I joined turned out to be true, and they didn't want to hear professionally presented necessary feedback about emotional safety in an emergency incident response situation, so they fired me without a single meaningful 1:1 discussion other than trying to assign blame." / "Oh, the CEO was enough of a problem that the investors eventually replaced him in a subsequent funding round despite him being the majority shareholder, but that was long after I had resigned or been fired due to that CEO's particular problems." / "Oh, they communicated in a very idiosyncratic way which didn't work for me and which I haven't seen at any company before or since." / etc. Most of that doesn't fly in an interview, but equally, even if it did, saying too much of that sounds like making up excuses for oneself even when it's 100% true. Same thing for why a job search doesn't succeed quickly.
Ours is a messy and imperfect industry, and it sucks that interview candidates have to pretend otherwise to seem like they'll be reasonable employees. Meanwhile the companies and executives that act in those ways get to present whatever positive and successful image they want, and they get to praise each other for firing fast or cutting costs with attrition or layoffs.
apercu
Just a suggestion, but I have done 3-5 projects a year for a long, long time, and as an executive (for almost a decade) had dozens of projects annually that I was overseeing and/or contributing to. I am not a fan of LinkedIn, but I did eventually start logging at least some of my projects in its "projects" section. That helps me remember and revisit some of them, and when you go back 10 years later it's sort of a joyful walk down memory lane sometimes.
axegon_
I feel like take home tests are meaningless, and I always have. Even more so now with LLMs, though 9/10 times you can tell if it's an LLM: people don't normally put trivial comments in the code such as
> // This line prevents X from happening
I've seen a number of those. The issue here is that you've already wasted a lot of time with a candidate.
So being firmly against take home tests and even leetcode, I think the only viable option is a face to face interview with a mixture of general CS questions (e.g. what is a hashmap, its benefits and drawbacks, what is a readers-writer lock, etc.) and some domain specific questions: "You have X scenario (insert details here), which causes a race condition; how do you solve it?"
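For the race condition question, the minimal shape of an answer looks something like this. A hedged sketch (the scenario and numbers are made up for illustration): two threads doing a read-modify-write on a shared counter, with a lock making the update atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # "counter += 1" is a read, an add, and a write back; two threads
        # can interleave those steps and lose updates. The lock prevents it.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 with the lock; possibly less without it
```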
throwaway2037
> I feel like take home tests are meaningless, and I always have. Even more so now with LLMs
This has been discussed many times already here. You need to set an "LLM trap" (like an SSH honeypot) by asking the candidate to explain the code they wrote. Also, you can wait until the code review to ask them how they would unit test the code. Most cheaters will fall apart in the first 60 seconds. It is such an obvious tell. And if they used an LLM but can very well explain the code, well, then, they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.

I am starting to think that we need two types of technical interview questions: old school (no LLMs allowed) vs new school (LLMs strongly encouraged). Someone under 25 (30?) is probably already making great use of LLMs to teach themselves new things about programming. This reminds me of when young people (late 2000s/early 2010s) began to move away from "O'Reilly-class" (heh, like a naval destroyer class) 500 page printed technical books to reading technical blogs. At first, I was suspicious -- essentially, I was gatekeeping on the blog writers. Over time, I came to appreciate that technical learning was changing. I see the same with LLMs. And don't worry about the shitty programmers who try to skate by only using LLMs. Their true colours will show very quickly.
Can I ask a dumb question? What are some drawbacks of using a hash map? Honestly, I am nearly neck-bearded at this point, and I would be surprised by this question in an interview. Mostly, people ask how do they work (impl details, etc.) and what are some benefits over using linear (non-binary) search in an array.
ninetyninenine
What if I use an LLM but I understand the code?
The drawback is that elements in a hashmap can't be kept sorted, and accessing a specific element by key is slower than accessing something in an array by index.
Linear search is easier to implement.
These are all trivial questions you ask to determine if a person can develop code. The hard questions are about whether the person is the cream of the crop. The supply of developers is so high that most people don't ask trivial questions like that.
axegon_
"Drawbacks" was the wrong word to use here; "potential problems" is what I meant: collisions. Normally with a follow up question: how do you solve those? But drawbacks too: memory usage. Us developers are pretty used to having astronomical amounts of computational resources at our disposal, but more often than not, people don't work on workstations with 246 GB of RAM.
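To make the collision point concrete, here's a toy sketch (not production code) of separate chaining, the usual follow-up answer:

```python
class ChainedHashMap:
    """Toy hash map using separate chaining to resolve collisions."""

    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # existing key: overwrite
                return
        bucket.append((key, value))      # collision: extend the chain

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

# With a single bucket, every key collides, yet lookups still work;
# the cost is that each operation degrades to a linear scan of the chain.
m = ChainedHashMap(capacity=1)
m.put("a", 1)
m.put("b", 2)
```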
Cthulhu_
If you really need to test them / check that they haven't used an LLM or hired someone else to do it for them (which was how people "cheated" on take-home tests before), ask them to implement a feature live; it's their code, it should be straightforward if they wrote it themselves.
dheera
If you are evaluating how well people code without LLMs you are likely filtering for the wrong people and you are way behind the times.
For most companies, the better strategy would be to explicitly LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish, in the same time. If they accomplish only 1X, that's a bad sign that they haven't learned anything in 3 years about how to work faster with new power tools.
A good analogy of 5 years ago would be forcing candidates to write in assembly instead of whatever higher level language you actually use in your work. Sure, interview for assembly if that's what you use, but 95% of companies don't need to touch assembly language.
jpc0
> ... LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish...
Do you seriously expect a 10x improvement with the use of LLMs vs no LLMs? Have you seen this personally? Are you one tenth the developer without an LLM? Or are the coding interview questions you ask or get asked "how to implement quicksort" or something?
Let's make it concrete: do you feel you could implement a correct concurrent HTTP server in 1/10th the time with an LLM versus without? Because if you just let the LLM do the work, I could probably find some issue in that code, or alternatively completely stump you with an architectural question unless you are already familiar with it. And you should not be having an LLM implement something you couldn't have written yourself.
fhd2
To add: I can very well imagine this process isn't suitable for FAANG, so I can understand their university exam style approach to a degree. It's easy to armchair criticise, but I don't know if I could come up with something better at their scale. These days, I'm mostly engaged by startups to help them build an initial team; I acknowledge that's different from what a lot of other folks hire for.
adastra22
Why not? Plenty of large organizations hire this way. My first employer is bigger than any FAANG company by head count, and they hired this way. Why is big tech different?
michaelt
When you're operating a small furniture company, your master craftsmen can hand-fit every part. Match up the hole that's slightly too big with the dowel that's slightly too big, which they put to one side the other day.
When you're operating a huge furniture company, you want every dowel and every hole the same size. Then any fool can assemble it, and you don't have to waste time and space on a collection of slightly-too-large dowels.
To scale up often means focusing on consistency and precision, and less on expert judgement and care.
lmz
The desire for a scalable, standardized scoring mechanism so they can avoid lawsuits.
scarface_74
Was it in the US? Unless it’s Walmart, there is no company in the US that is larger than the largest FAANG by headcount.
fhd2
Well, I respect the scale and speed. My process was still working fine at ~5 per month. I have doubts it'd work with orders of magnitude more. There's a lot of intuition and finesse in there, that is probably difficult to blindly delegate. Plus, big companies have very strong internal forces to eliminate intuition in favour of repeatable, measurable processes.
throwaway2037
> Put the candidate at ease - nervous people don't interview well
This is great advice. I have great success with it. I give the same 60 second speech at the start of each interview. I tell candidates that I realise that tech interviews are stressful -- "In 202X, the 'tech universe' is infinitely wide and deep. We can always find something that you don't know. If you don't have experience in a topic that we raise, let us know. We will move to a new topic. All, yes, all people that we interviewed had at least one topic where they had no experience, or none recent." Also, it helps to do "interview ramp-up", where you start with some very quick wins to build up confidence with the candidate. It is OK to tell them "I will push a bit harder here" so they know you are not being a jerk... only trying to dig deeper on their knowledge.
dmoy
Putting candidate at ease is definitely important.
Another reason:
If you're only say one of four interviewers, and you're maybe not the last interviewer, you really want the candidate to come out of your interview feeling like they did well or at least ok enough, so that they don't get tilted for the next interview. Because even if they did really poorly in your interview, maybe it's a fluke and they won't fail the rest of the loop.
Which is then a really difficult skill as an interviewer - how do you make sure someone thinks they do well even if they do very poorly? Ain't easy if there's any technical guts in the interview.
I sure as shit didn't get any good at that until I'd conducted like 100+ interviews, but maybe I'm just a slow learner haha
joshvm
I’ve done the “at home” test for ML recently for a small AI consulting firm. It's a nice approach and got me to the next round, but the way the company evaluated it was to go through the questions and ask "fundamental ML bingo" questions. I don't think I had a single discussion about the company in the entire interview process. I was told up front "we probably won't get to the third question because it will take time to discuss theory for the first two".
If you're a company that does this, please dog food your problems and make sure the interview makes the effort feel valued. It also smells weird if you claim it's representative of a typical engineering discussion. We all know that consultancy is wrangling data, bad data and really bad data. If you're arguing over what optimiser we're choosing I'd say there's better ways to waste your customer's money.
On the other hand I like leetcode interviews. They're a nice equalizer and I do think getting good at them improves your coding skill. The point is to not ask ludicrously obscure hard problems that need tricks. I like the screen share + remote IDE. We used Code which was nice and they even had tests integrated so there wasn't the whiteboard pressure to get everything right in your head. You also know instantly if your solution works and it's a nice confidence if you get it first try, plus you can see how candidates would actually debug, etc.
sergioisidoro
I've let people use GPT in coding interviews, provided that they show me how they use it. At the end I'm interested in knowing how a person solves a problem, and thinks about it. Do they just accept whatever crap the gpt gives them, can they take a critical approach to it, etc.
So far, everyone who elected to use GPT did much worse. They did not know what to ask or how to ask it, and did not "collaborate" with the AI. My opinion so far is that if you have a good interview process, you can clearly see who the good candidates are, with or without AI.
alexjplant
Earlier this past week I asked Copilot to generate some Golang tests and it used some obscure assertion library that had a few hundred stars on GitHub. I had to explicitly ask it to generate idiomatic tests and even then it still didn't test all of the parameters that it should have.
At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
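For readers who haven't hit this one: flush sends the pending SQL to the database inside the still-open transaction, while commit actually ends it, so a flushed-but-uncommitted change is invisible to everyone else and vanishes on rollback. The same distinction can be sketched with stdlib sqlite3 (only an analogy, not SQLAlchemy itself):

```python
import os
import sqlite3
import tempfile

# A file-backed database so two separate connections see the same data.
fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE users (name TEXT)")
writer.commit()

reader = sqlite3.connect(path)

# "flush": the INSERT has been sent, but the transaction is still open,
# so another connection cannot see the row yet.
writer.execute("INSERT INTO users VALUES ('alice')")
visible_before = reader.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# "commit": the transaction ends and the row becomes visible to everyone.
writer.commit()
visible_after = reader.execute("SELECT COUNT(*) FROM users").fetchone()[0]

print(visible_before, visible_after)  # 0 1

writer.close()
reader.close()
os.remove(path)
```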
LLMs are still not ready for prime-time. They churn out code like an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday.
DustinBrett
I feel like I am taking crazy pills that other devs don't feel this way. How bad are these coders that they think AIs are giving them superpowers? The PRs with AI code are so obvious, and when you ask the devs why, they don't even know. They just say, well, the AI picked this, as if that means something in and of itself.
doix
AI gives superpowers because it saves you an insane amount of typing. I used to be a vim fanatic; I was very efficient, but whenever I changed languages there was a period I had to spend getting efficient again. Set up some new snippets for boilerplate, maybe tweak some LSP settings, save some new macros.
Now in Cursor I just write "extract this block of code into its own function and set up the unit tests" and it does it, with no configuration on my part. Before, I'd have a snippet for the unit test boilerplate for that specific project, I'd have to figure out the mocks myself, etc.
Yes, if you use AI to just generate new code blindly and check it in with no understanding, you end up with garbage. But those people were most likely copy-pasting from SO before AI; AI just made them faster.
linsomniac
>I feel like I am taking crazy pills that other devs don't feel this way.
Don't take this the wrong way, but maybe you are.
For example, this weekend I was working on something where I needed to decode a Rails v3 session cookie in Python. I know, roughly, nothing about Rails. In less than 5 minutes ChatGPT gave me some code that took me around 10 minutes to get working.
Without ChatGPT I could have easily spent a couple hours putzing around with tracking down old Rails documentation, possibly involving reading old Rails code and grepping around to find where sessions were generated, hunting for helper libraries, some deadends while I tried to intuit a solution ("Ok, this looks like it's base64 encoded, but base64 decoding kind of works but produces an error. It looks like there's some garbage at the end. Oh, that's a signature, I wonder how it's signed...")
Instead, I asked for an overview of Rails session cookies, a fairly simple question about decoding a Rails session cookie, guided it to Rails v3 when I realized it was producing the wrong code (it was encrypting the cookie, but my cookies were not encrypted). It gave me 75 lines of code that took me ~15 minutes to get working.
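For anyone curious, the core of that task really is small. Here's a sketch of the verification half, assuming the Rails 3 MessageVerifier format for unencrypted session cookies (a base64 payload, then "--", then a hex HMAC-SHA1 of the payload keyed with the app's secret_token); decoding the Ruby Marshal bytes it returns is the remaining work:

```python
import base64
import hashlib
import hmac
from urllib.parse import unquote

def decode_rails3_session(cookie: str, secret_token: str) -> bytes:
    """Verify and decode an *unencrypted* Rails 3 session cookie.

    Rails 3's MessageVerifier produces "<base64 payload>--<hex HMAC-SHA1>",
    signing the base64 text itself with secret_token. Returns the raw
    Ruby Marshal bytes; deserializing those is a separate problem.
    """
    cookie = unquote(cookie)  # cookie values arrive URL-encoded
    payload_b64, _, sig = cookie.rpartition("--")
    expected = hmac.new(secret_token.encode(), payload_b64.encode(),
                        hashlib.sha1).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature")
    return base64.b64decode(payload_b64)
```

This is a sketch from memory of the Rails 3 format, not something lifted from the comment above; encrypted cookies (Rails 4+) need a different, AES-based path.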
This is a "spare time" project that I've wanted to do for over 5 years. Quite simply, if I had to spend hours fiddling around with it, I probably wouldn't have done it; it's not at the top of my todo list (hence, spare time project).
I don't understand how people don't see that AI can give them "superpowers" by turning a developer's least productive time into their highest-value work.
fenomas
Devs who don't feel that way aren't talking about the stuff you're talking about.
Look at it this way - a powerful debugger gives you superpowers. That doesn't mean it turns bad devs into good devs, or that devs with a powerful debugger never write bad code! If somebody says a powerful debugger gives them superpowers they're not claiming those things; they're claiming that it makes good devs even better.
steelframe
> They just say, well the AI picked this, as if that means something in and of itself.
In any other professional field that would be grounds for termination for incompetence. It's so weird that we seem to shrug off that kind of behavior so readily in tech.
eru
Depending on what language you use and what domain your problem is in, current AIs can vary widely in how useful they are.
I was amazed at how great ChatGPT and DeepSeek and Claude etc are at quickly throwing together some small visualisations in Python or JavaScript. But they struggled a lot with Bird-style literate Haskell. (And just that specific style. Plain Haskell code was much less of a problem.)
trevor-e
Because there are plenty of devs who take the output, actually read if it makes sense, do a code review, iterate back and forth with it a few times, and then finally check in the result. It's just a tool. Shitty devs will make shitty code regardless of the tool. And good devs are insanely productive with good tools. In your example what's the difference with that dev just copy/pasting from StackOverflow?
throwawayffffas
In my experience, the lack of correctness and accuracy (I have seen a lot of calls to hallucinated APIs) is made up for by the "eagerness" to fill out boilerplate.
It's indeed like having a super junior, drunk intern working for you, if we're running with the GPT analogy.
Some work is done, but you have to go over it and fix a bunch of things.
randomNumber7
You are. Only noobs do this; experts using an LLM are way more efficient. (Unless you've worked with the same language and libraries for years.)
fenomas
I don't follow this take. ChatGPT outputted a bug subtle enough to be overlooked by you and your colleague and your test suite, and that means it's not ready for prime time?
The day when generative AI might hope to completely handle a coding task isn't here yet - it doesn't know your full requirements, it can't run your integration tests, etc. For now it's a tool, like a linter or a debugger - useful sometimes and not useful other times, but the responsibility to keep bugs out of prod still rests with the coder, not the tools.
Sammi
Yes and this means it doesn't replace anyone or make someone who isn't able to code able to code. It just means it's a tool for people who already know how to code.
csense
> an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday
That's...oddly specific.
alexjplant
I'm only 33 and I've worked with at least two of 'em. They're a type :-D
hyperdimension
I thought it was nicely poetic.
Cthulhu_
Is that the LLM's fault, or SQLAlchemy's for having that API in the first place? Or was that a gap in your testing strategy, since (if I'm reading it right) flush() sends pending changes to the database without committing them and is only intended as an intermediate step (and commit() calls flush() under the hood).
I think we're in a period similar to self-driving cars, where the LLMs are pretty good, but not perfect; it's those last few percent that break it.
Seattle3503
> At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
Ive had ChatGPT do the same thing with code involving SQLAlchemy.
linsomniac
>it used some obscure assertion library that had a few hundred stars on GitHub.
That sounds like a lot of developers I've worked with.
fooker
You are using it wrong.
Give examples and let it extrapolate.
lionkor
"You're holding it wrong" doesn't make a small, too light, crooked, and backwards hammer any better.
bboygravity
You can't tell us that LLMs aren't ready for prime time in 2025 after you tried Copilot twice last year.
New better models are coming out almost daily now and it's almost common knowledge that Copilot was and is one of the worst. Especially right now, it doesn't even come close to what better models have to offer.
Also the way to use them is to ask for small chunks of code or questions about code after you gave them tons of context (like in Claude projects for example).
"Not ready for prime time" is also just factually incorrect. It is already being used A LOT. To the point that there are rumors that Cursor is buying so much compute from Anthropic that they are making their product unstable, because nvidia can't supply them hardware fast enough.
59nadir
I stopped using AI for code a little over a year ago and at that point I'd used Copilot for 8-12 months. I tried Cursor out a couple of weeks ago for very small autocomplete snippets and it was about the same or slightly worse than Copilot, in my opinion.
The integration with the editor was neat, but the quality of the suggestions was no different than what I'd had with Copilot much earlier, and the pathological cases where it just spun off into some useless corner of its behavior (recommending code that was already in the very same file, recommending code that didn't make any sense, etc.) seemed to happen more than with Copilot.
This was a ridiculously simple project for it to work on, to be clear: just a parser for a language I started working on, and the general structure was already there for it to work with when I started trying Cursor out. From prior experience I know the base is pretty easy to work with for people who aren't even familiar with it (or with parsing in general), so given the difficulties Cursor had putting together even pretty basic things, I think a Cursor user might see minimal gains in velocity and end up with less understanding in the medium to long term, at least in this particular case.
consp
> It is already being used A LOT
Which is an argument for quality why? Bad coders are not going to produce better code that way. Just more with less effort.
KeplerBoy
Copilot isn't a single model. Copilot is merely a brand and uses OpenAI's and Anthropic's newest models.
t-writescode
I imagine most of the things that would be good uses for seniors in AI aren't great uses for a coding interview anyway.
"Oh, I don't remember how to do parameterized testing in junit, okay, I'll just copy-paste like crazy, or make a big for-loop in this single test case"
"Oh, I don't remember the API call for this one thing, okay, I'll just chat with the interviewer, maybe they remember - or I'll just say 'this function does this' and the interviewer and I will just agree that it does that".
Things more complicated than that that need exact answers shouldn't exist in an interview.
ehnto
> Things more complicated than that that need exact answers shouldn't exist in an interview.
Agreed, testing for arcane knowledge is pointless in a world where information lookup is instant, and we now have AI librarians at our fingertips.
Critical thinking, capacity to ingest and process new information, fast logic processing, software fundamentals and ability to communicate are attributes I would test for.
An exception though is proving their claimed experience, you can usually tease that out with specifics about the tools.
alpha_squared
We do the same thing. It's perfectly fine for candidates to use AI-assistive tooling provided that they can edit/maintain the code and not just sit in a prompt the whole time. The heavier a candidate relies on LLMs, the worse they often do. It really comes down to discipline.
sureIy
Discipline for what?
To me it's the lack of skill. If the LLM spits out junk you should be able to tell. ChatGPT-based interviews could work just as well to determine the ability to understand, review and fix code effectively.
skeeter2020
>> If the LLM spits out junk you should be able to tell.
Reading existing code and ensuring correctness is way harder than writing it yourself. How would someone who can't do it in the first place tell if it was incorrect?
meesles
This has been my experience as well. The ones that have most heavily relied on GPT not only didn't really know what to ask, but couldn't reason about the outputs at all since it was frequently new information to them. Good candidates use it like a search engine - filling known gaps.
vanceism7_
Yea I agree. I don't rely on the AI to generate code for me, I just use it as a glorified search engine. Sure I do some copypasta from time to time, but it almost always needs modification to work correctly... Man does AI get stuff wrong sometimes lol
hsuduebc2
I can't really imagine it being useful in a way where it writes the logical part of the code for you. Unless you're being sloppy, you still need to think about all the edge cases when it generates the code, which seems harder to me.
sdesol
> you are not being lousy you still need to think about all the edge cases
This is honestly where I believe LLMs can really shine. I think we like to believe the problems we are solving are unique, but I strongly believe most of us are solving problems that have already been solved. What I've found is, if you provide the LLM with enough information, it will surface edge cases that you haven't thought of and implement logic in your code that you haven't thought of.
I'm currently using LLMs to build my file drag and drop component for my chat app. You can view the problem and solution statement at https://beta.gitsense.com/?chat=da8fcd73-6b99-43d6-8ec0-b1ce...
By chatting with the LLM, I created four user stories that I never thought of to improve user experience and security. I don't necessarily think it is about knowing the edge cases, but rather it is about knowing how to describe the problem and your solution. If you can do that, LLMs can really help you surface edge cases and help you write logic.
Obviously what I am working on, is really not novel, but I think a lot of the stuff we are doing isn't that novel, if we can properly break it down.
So for interviews that allow LLMs, I would honestly spend 5 minutes chatting with it to create a problem and solution statement. I'm sure if you can properly articulate the problem, the LLM can help you devise a solution and a plan.
piloto_ciego
This makes me feel good because it’s exactly how I use it.
I’m basically pair programming with a wizard all day who periodically does very stupid things.
onemoresoop
I like that you’re openminded to allow candidates to be who they are and judge them for the outcome rather than using a prescribed rigid method to evaluate them. Im not looking to interview right now but I’d feel very comfortable interviewing with someone like you, I’d very likely give out my best in such an interview. Id probably choose not to use an LLM during the interview unless I wanted to show how I brainstormed a solution.
Keyframe
Same thing here. The interview is basically representative of what we do, but it also depends on the level of seniority. I ask people just to share their screen with me and use whatever they want / feel comfortable with. Google, ChatGPT, call your mom, I don't care, as long as you walk me through how you're approaching the thing at hand. We've all googled tar xvcxfgzxfzcsadc, what's that permission for a .pem, is it 400, etc. No shame in anything, and we all use all of these things throughout the day. Let's simulate a small task at hand and see where we end up. Similarly, there is a bias where people leaning more on LLMs do worse than those just googling or, gasp, opening the documentation.
apwell23
It took a while for googling during interviews to be accepted
alok-g
This is the best way to go.
I would love to go through mock interviews for myself with this approach just to have some interview-specific experience.
>> So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI.
Thanks for sharing your experience! Makes sense actually.
twoparachute45
My company, a very very large company, is transitioning back to only in-person interviews due to rampant cheating during interviews.
As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing, then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.
cuuupid
So far 3 of the 11 people we interviewed have been clearly using ChatGPT for the >>behavioral<< part of the interview (like, just chatting about background, answering questions about their experience). I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
We actually allow using AI in our in-person technical interviews, but our questions are worded to fail safety checks. We'll talk about smuggling nuclear weapons, violent uprising, staging a coup, manufacturing fentanyl, etc. (within the context of system design) and that gives us really good mileage on weeding out those who are just transcribing what we say into AI and reading the response.
hn8726
> I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
I'm genuinely curious what questions you ask during the behavioral interview. Most companies ask questions like "recall a time when..." and I know people who struggle with these kinds of questions despite being good teammates, either because they find it difficult to explain the situation, or due to stress. And recruitment process is not a "basic conversation" — as a recruiter you're in far more comfortable position. I find it hard to believe anyone would use an LLM if you ask them question like "what were your responsibilities in your last role", and I do see how they might've primed the chat to help them communicate an answer to a question like "tell me about a situation when you had a conflict with your manager"
cuuupid
We usually just ask them to share their background, like the typical background exchange handshake at the beginning of any external call.
That normally prompts some follow-ups about specific work, specific projects, whether they know so-and-so at their old company. I call it behavioral because I don't have another word, but it's not brainteasers etc. like consulting/finance interviews.
dmazzoni
Ha ha, that's a great idea!
I love the idea of embedding sensitive topics that ChatGPT and other LLMs will steer clear of, within the context of a coding question.
Have you ever had any candidate laugh?
Any candidates find it offensive?
cuuupid
We usually get laughs, some quick jokes, etc., some really involved candidates will ask if it’s worded that way to prevent using ChatGPT.
No one’s found it offensive, the prompt is mostly neutral just very “dangerous activity” coded.
pllbnk
I think you (your company) and many other commenters here are just trying too hard.
I had just recently led several interview rounds for a software engineering role, and we have not had any issue with LLM use. What we do for the technical interview part is very simple: a live whiteboard design task where we try to identify what the candidate's focus is, and we might pivot at any time or dig deeper into particular topics. Sometimes we will even go as detailed as talking about the particular algorithms the candidate would use.
In general, I found that this type of interview is the most fun for both sides. The candidates don't feel pressure that they must do the only right thing as there is a lot of room for improvisation; the interviewers don't get bored with repetitive interviews over and over as new candidates come by with different perspectives. Also, there is no room for LLM use because the candidate has to be involved in drawing on the whiteboard and showing their technical presentation skills, which are very important for developers.
lewisleclerc
Unfortunately, we've noticed candidates who are on another call, with their screen fed by someone else using ChatGPT and pasting in the responses, since that person can hear both the interviewer and the candidate.
eastbound
> if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
I wouldn’t be surprised if the effect of Google Docs and Gmail forcing full AI, is a generation of people who can’t even talk about themselves, and can’t articulate even a single email.
Is it necessary? Perhaps. Will it make the world boring? Yes.
xarope
what actually happens to the interviewee? Do they suddenly go blank when they realise the LLM has replied "I'm sorry I cannot assist you with this", or they try to make something up?
cuuupid
Yeah pretty much, they either go silent for 2-3 minutes or leave the call and claim their internet has cut out and need to reschedule.
Just one time someone got mad and yelled at the interviewer about nothing specific, just stuff like I’m not who you are looking for, you will never find anybody to hire.
soheil
llama2-uncensored to the rescue
mr_00ff00
So depressing to hear that “because of rampant cheating”
As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
But to this day I haven’t done either.
chowells
Here's the thing: 95% of cheaters still suck, even when cheating. It's hard to imagine how people can perform so badly while cheating, yet they consistently do. All you need to do to stand out is not be utterly awful. Worrying about what other people are doing is more detrimental to your performance than anything else. Just focus on yourself: being broadly competent, knowing your niche well, and being good at communicating how you learn when you hit the edges of your knowledge. Those are the skills that always stand out.
hnisoss
Yea, but I also suck in 95% of FAANG-like interviews, since I'm very bad at leetcode medium/hard questions. It's just something I never practiced. It's very tempting at this point to throw in the towel and just use some aid. No one cares about my extensive career and the millions I helped my clients earn; all that matters (and sometimes directly affects comp) is how I do on the "coding task".
dkga
Well, cheaters only cheat because they suck and they know it. Otherwise cheating would not be a rational approach.
roughly
Yeah, we found this when we started doing take-home exams: it turns out that a junior dev who spends twice as much time on the problem as we asked for doesn't put out senior-level code - we could read the skill level in the code almost instantly. Same thing with cheating like that - it turns out knowing the answer isn't the same thing as having experience, and it's pretty obvious pretty quickly which one you're dealing with.
ryandvm
I don't know, I kind of feel like leetcode interviews are a situation where the employer is cheating. I mean, you're admittedly filtering out a great number of acceptable candidates knowing that if you just find 1 in a 1000, that'll be good enough. It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.
In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).
I say, if you can cheat at an interview with AI, do it.
twoparachute45
I dunno why there is always the assumption in these threads that leetcode is being used. My company has never used leetcode-style questions, and likely never will.
I work in security, and our questions are pretty basic stuff. "What is cross-site scripting, and how would you protect against it?", "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?" Stuff like that. And then a follow-up or two customized to the candidate's response.
I really don't know how we could possibly make it easier for candidates to pass these interviews. We aren't trying to trick people, or weed people out. We're trying to find people that have the foundational experience required to do the job they're being hired for. Even when people do answer them incorrectly, we try to help them out and give them guidance, because it's really about trying to evaluate how a person thinks rather than making sure they get the right answer.
I mean hell, it's not like I'm spending hours interviewing people because I get my rocks off by asking people lame questions or rejecting people; I want to hire people! I will go out of my way to advocate for hiring someone that's honest and upfront about being incorrect or not knowing an answer, but wants to think through it with me.
But cheating? That's a show stopper. If you've been asked to not use ChatGPT, but you use it anyway, you're not getting the benefit of the doubt. You're getting rejected and blacklisted.
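Incidentally, the log-parsing question above is a good illustration of how small the expected answer is. A sketch assuming a hypothetical access-log format with one IPv4 address per line (the function name and regex are my own, not from the comment):

```python
import re
from collections import Counter

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def frequent_ips(log_text: str, threshold: int = 10) -> list[str]:
    """Return the IPs appearing at least `threshold` times,
    ordered by first appearance in the log."""
    counts = Counter(IP_RE.findall(log_text))
    return [ip for ip, n in counts.items() if n >= threshold]
```

The interesting part of the interview is the follow-up discussion (streaming the file line by line, validating octets, handling IPv6), not the six lines themselves.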
kortilla
The employer sets the terms of the interview. If you don’t like them, don’t apply.
What you’re suggesting here isn’t any different than submitting a fraudulent resume because you disagree with the required qualifications.
hsuduebc2
I wouldn't call it cheating, but most of the time it's just stupid. For the majority of software developer jobs, it would be more suitable to discuss the solution to a more complex problem than to randomly stress people out just because you think you should.
nkrisc
> It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.
On the other hand, if you have 1,000 candidates, and you only need 1, why not do it if the top candidate selected by this method can do well on the test and your work?
ninetyninenine
It's unfair, but it meets their objective of finding a high-end candidate. Google admits they do this.
The companies that do this only do it because they can. They have to have hundreds of people applying. The companies that don't do this basically don't have many people applying.
dahart
> it feels like there’s nothing I can do except do the same.
Why does it feel like that when you’re replying to someone who already points out that it doesn’t work? Cheating can prevent you from getting a job, and it can get you fired from the job too. It can also impede your ability to learn and level up your own skills. I’m glad you haven’t done it yet, just know that you can be a better candidate and increase your chances by not cheating.
Using an LLM isn’t cheating if the interviewer allows it. Whether they allow it or not, there’s still no substitute for putting in the work. Interviews are a skill that can (and should) be practiced. Candidates are rarely hired for technical skill alone. Attitude, communication, curiosity, and lots of other soft skills are severely underestimated by so many job seekers, especially those coming right out of school. A small amount of strengthening your non-code abilities can improve your odds much faster than leetcode ever will. And if you have time, why not do both?
ghaff
Note also "And the majority of the time the answer is incorrect anyway."
I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews--at least for well-designed and run interviews. Maybe in some narrow domains for junior people.
As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness and companies generally deserve whoever they end up with.
aprilthird2021
> it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
Never buy into this mentality. Because once you do, it never goes away. After the interview, your coworkers might cheat, so you cheat too. Then your business competitors might cheat, so you cheat too. And on and on.
johnnyanmac
sounds cheesy, but keep being honest. Eventually companies will realize (as we have years ago) that automating recruiting gets you automated candidates.
But YMMV. I have 9 years and still can get interviews the old fashioned way.
wccrawford
When I was interviewing entry level programmers at my last job, we gave them an assignment that should only take a few hours, but we basically didn't care about the code at all.
Instead, we were looking to see if they followed instructions, and if they left anything out.
I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT/etc would almost always fail this exam because of how bad it is at making sure everything was included.
And bad programmers also failed it. It always left us with a few candidates that paid attention, and from there we figure if they can do that, they can learn the rest. It seemed to work quite well.
I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.
z3t4
For those tests I never follow the rules; I just make something quick and dirty because I refuse to spend unpaid hours. In the interview the first question is why I didn't follow the instructions, and they think my reason is fair.
Companies seem to think that we program just for fun and ask for a full-blown app... also underestimating the time candidates actually spend making it.
MathCodeLove
If you’re spending the time applying and submitting something then you might as well spend the extra 30 minutes or so to do it right, no?
prisenco
The industry (all industries really) might want to reconsider online applications, or at least privilege in-person resume drop-offs because the escalating ai application/evaluation war that's happening doesn't seem to be helping anyone.
datavirtue
No it's because AI shifted power over to the applicant.
bdangubic
this is a very strange statement. in what world did AI possibly shift power to the applicant?? applicants have almost never been in a shittier position than they are now, and things are getting much, much worse by the day
prisenco
How so? Tons of companies are moving to AI-automated intake systems because they're getting flooded with low-quality AI-generated resumes. Of course, the original online application systems were terrible already, which is what encouraged people toward low effort in their applications, so it's become a stalemate.
hibikir
Did it? What I see instead is total mistrust of the open resume pool, because the percentage of outright lies, from resume to behavioral to everything else is just that high. So I see companies raise their hands and going back to maximum prioritization of in-network candidates, where we have someone vouching that the candidate is not a total waste of everyone's time.
The one who loses all power is the new junior straight out of school, which used to already be difficult to distinguish from many other candidates with similar resumes: Now they compete with thousands upon thousands of total fakes which claim more experience anyway.
paxys
> it's wild to me how many candidates think they can get away with it
Remember that you are only catching the candidates who are bad at cheating.
johnnyanmac
That's fine. The ones who are "good cheaters" are probably smarter than many honest people. Think back to those school days when your smartest peers were cheating anyway, despite having learned the material organically earlier on. Those kinds of cheaters do it to turn an A into an A+, not because they don't understand the material.
wsintra2022
Is it cheating if I can solve the problem using the tools of AI, or is it just solving the problem?
dahart
Interviews aren’t about solving problems. The interviewer isn’t interested in a problem’s solution, they’re interested in seeing how you get to the answer. They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning. They already know how to use AI, they don’t need you for that. They want to know that you’ll contribute to the team. Wanting to use AI probably sends the wrong message, and is more likely to get you left out of the next round of interviews than it is to get you called back.
Imagine you need to hire some people, and think about what you’d want. That’ll answer your question. Do you want people who don’t know but think AI will solve the problems, or do you want people who are capable of thinking through it and coming up with new solutions, or of knowing when and why the AI answer won’t work?
nyarlathotep_
> They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning
I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.
There's a recent thread on Aider where the authors proudly proclaim that ~80% of its code is written by Aider itself.
I've no idea what to make of the general state of the programming profession at all at the moment, but I can't help but feel learning various programming trivia has a lower return on investment than ever.
I get learning the business and domain and etc, but it seems like we're in a fast race to the bottom where the focus is on making programmers' skills as redundant as possible as soon as possible.
davely
> Interviews aren’t about solving problems.
Eh, I wish more people felt that way. I have failed so many interviews because I didn't solve the coding problem in time.
The feedback has always been something along the lines of "great at communicating your thoughts, discussing trade-offs, having a good back and forth" but "yeah, ultimately really wanted to see if you could pass all the unit tests."
Even in interview panels I've personally been a part of, one of the things we evaluate (heavily) is whether the candidate solved the problem.
icecube123
Isn't one of the ways of solving the problem using all the tools at your disposal? At the end of the day, isn't having working code the fundamental goal? I guess you could argue that the code needs to be efficient, stable, and secure. But if you can use AI to get partway there, then use smarts to finish it off, isn't that reasonable? (Devil's advocate.) The other big question is the legality of using code from an AI in a final commercial product.
ninetyninenine
Yeah, everyone says they're interested in how you got there, but from my experience this isn't true in reality. Your bias inevitably judges candidates on the solution, because you have many other candidates who got the correct solution.
twoparachute45
If you've been given the problem of "without using AI, answer this question", and you use an AI, you haven't solved the problem.
The ultimate question that an interview is trying to answer is not "can this person solve this equation I gave them?", it's usually something along the lines of "does this person exhibit characteristics of a trustworthy and effective employee?". Using AI when you've been asked not to is an automatic failure of trust.
This isn't new or unique to AI, either. Before AI people would sometimes try to look up answers on Google. People will write research papers by looking up information on Wikipedia. And none of those things are wrong, as long as they're done honestly and up front.
jacobsenscott
If you are pretending to have knowledge and skills you don't have, you are cheating. And if you have the required knowledge and skill, AI is a hindrance, not a help: you can solve the problem easily without it. So "is using AI cheating"? IDK, but logically you wouldn't use AI unless you were cheating.
NotMichaelBay
Knowledge and skill are two different things. Sometimes interviewers test that you know how to do something, when in practice it's irrelevant if you A) know how to retrieve that knowledge and B) know when to retrieve it.
isbvhodnvemrwvn
For the goal of the interview - showing your knowledge and skills - you are failing miserably. People know what LLMs can do, the interview is about you.
risyachka
I guess it's more a question of whether you can solve the problem without AI.
In most interview tasks you are not solving the task "with" AI.
It's the AI that solves the task while you watch it do it.
hibikir
Some can be quite good at the cheating: at least good enough to get through multiple layers. I've been in hiring meetings where I was the only one of 4 rounds that caught the cheating, and they were even cheating in behaviorals. I've also been in situations with a second interviewer, where the other interviewer was completely oblivious even when it was clear I was basically toying with the guy reading from the AI, leading the conversation in unnatural ways.
Detection of AI in remote interviews, behavioral and technical, just has to be taught today if you are ever interviewing people that don't come from in-network recommendations. Completely fake candidates are way too common.
aprilthird2021
I'm at the same company, I think. I don't get why we can't just use some software that monitors clicking or tabbing away from the window, tell candidates explicitly that we are monitoring them, and explain that looking away or tabbing away will appear suspect.
ryan-duve
My startup got acquired last year so I haven't interviewed anyone in a while, but my technical interview has always been:
- share your screen
- download/open the coding challenge
- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare
My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
shaneoh
I recently interviewed for my team and tried this same approach. I thought it made sense because I want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job.
It proved to be awkward and clumsy very quickly. Some candidates resisted it, since they clearly thought it would make them be judged more harshly. Some candidates were on the other extreme and basically tried asking ChatGPT the problem straight up, even though I clarified up front: "You can even use ChatGPT, as long as you're not just directly asking for the solution to the whole problem and copy/pasting, obviously."
After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.
mmh0000
Over the years, I've walked from several "live coding" interviews. Arguably though, if you're looking for "social coders" maybe the interview is working as intended?
But for me, it's just not how my brain works. If someone is watching me, I'll be so self-conscious the entire time you'll get a stream of absolute nonsense that makes me look like I learned programming from YouTube last night. So it's not worth the time.
You want some good programming done? I need headphones, loud music, a closed door and a supply of Diet Coke. I'll see you in a few hours.
AznHisoka
Yep, if I’m forced to talk through the problem, I’ll force myself to go through various things that you might want to hear, that I wouldn’t do.
Whereas my natural approach would be to take a long shower, work out, etc., and let my brain wander a bit before digging into it. But that wouldn't fly during an interview.
shaneoh
Ironically this is exactly how I am too. Even at work, if I'm talking through a problem on a presentation or with my boss, I'm much more scatterbrained, and I'll try to dodge those kinds of calls with "Just give me 30 minutes and I'll figure it out." which always goes better for me.
That said, now we're just talking about take home challenges for interviews and you always hear complaints about those too. And shorter, async timed challenges (something like "Here's a few hours to solve this problem, I'll check back in later") are now going to be way more difficult to judge since AI is now ubiquitous.
So I really don't think there's any perfect methodology out there right now. The best I can come up with is to get the candidate in front of you and talk through problems with them. The best barometer I found so far was to set up a small collection of files making up a tiny app and then have candidates debug it with me.
joquarky
I need my default mode network to produce good code, and I don't talk while it's active.
gjulianm
The interview works as intended because the main priority is to avoid hiring people who will be a negative for the company. Discarding a small number of good candidates is an acceptable tradeoff.
946789987649
What do you do if a junior asks for help and it's easiest to code through with them?
Aeolun
What are you supposed to ask chatGPT if you can’t just ask it the answer? That’d confuse me too.
no-reply
Some part of the problem statement you want help with (rather than a complete answer)?
shaneoh
One example would be looking up syntax and common functions. In a high-pressure situation it's much tougher to bumble around Google and Stack Overflow, so this would be a way of solving for "I totally know how to do this thing but it's just not coming to mind at this moment," which is fair. Usually we the interviewers can obviously just tell them ourselves, but that's what I was going for.
But yeah, the point is that once I applied it in practice it did quickly become confusing, so now I know from experience not to use it.
I think the other suggestions in this thread about how to use it are good ones, but they would present their own meta challenges for an interview too. Just about finding whatever balance works for you I guess.
datavirtue
Just another interview methodology pulled out of someone's ass. They don't know.
layer8
Did you tell them that you “want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job”? Just curious.
shaneoh
Yup, we told them exactly that.
raincole
> "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."
No, it's not "obvious" whatsoever. Actually it's obviously confusing: why you are allowing them to use ChatGPT but forbidding them from asking the questions directly? Do you want an employee who is productive at solving problems, or someone who guess your intentions better?
If AI is an issue for you then just ban it. Don't try to make the interview a game of who can outsmart whom.
shaneoh
See my answer to the other comment on this question. We figured there were some good use cases for AI in an interview that weren't just copy/pasting code, it's not about guessing intentions. It seemed most helpful to potentially unstick candidates from specific parts of the problem if they were drawing a blank under pressure, basically just an easier "You can look it up on Google" in a way that would burn less time for them. However we quickly found it was just easier for us to unstick them ourselves.
> If AI is an issue for you then just ban it.
Yes, that was the conclusion I just said we rapidly came to.
946789987649
I've had a few people chuck the entire problem into ChatGPT, and it was still very much useful in a few ways:
- You get to see how they review the generated code: do they spot potential edge cases the AI missed?
- When I ask them to make a change not in the original spec, a lot of them completely shut down, because they either didn't understand the generated code well enough, or they themselves didn't really know how to code.
And you still get to see people who _do_ know how to use AI well, which at this point is a must for its overall productivity benefits.
skinner927
Maybe come up with a problem that isn’t so simple you can just ask it to ChatGPT. Create some context that would be difficult/tedious to convey.
bagels
If you really don't penalize them for this, you should clearly state it. Some people may still think they'll be penalized as that is the norm.
bbarnett
I'd be fine with the GPT side of things, as long as I could somehow inject poor answers, and see if the interviewee notices and corrects.
cpursley
That's actually a horribly awesome idea.
htrp
The trick is to phrase the problem in a way that GPT-4 will always give an incorrect answer (due to the vagueness of your problem), so that multiple rounds of guiding/correcting are needed to solve it.
gtirloni
That's pretty good because it can exhaust the context window quickly and then it starts spiraling out of control, which would require the candidate to act.
hibikir
There's more than one possible AI on the other end, so crafting something that will not annoy a typical candidate, but will lead every AI astray seems pretty difficult.
notpushkin
Maybe you could allow using AI, but only through the interviewer-provided interface. That interface would allow using any model the candidate likes, but before sending the response it will inject errors into the code (either randomly or through another AI prompt).
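A minimal sketch of how that injection layer might work, assuming the interviewer's interface can intercept the model's response before it reaches the candidate. Everything here (function names, the operator list) is illustrative, not any real product:

```python
import random
import re

# Pairs of operators that can be swapped to create a subtle, plausible bug
# in the AI-generated code the candidate receives.
SWAPS = [("<=", "<"), (">=", ">"), ("==", "!="), ("+", "-")]

def inject_error(code: str, rng: random.Random) -> str:
    """Return the code with one operator silently flipped, if any are found."""
    candidates = []
    for old, new in SWAPS:
        for match in re.finditer(re.escape(old), code):
            candidates.append((match.start(), old, new))
    if not candidates:
        return code  # nothing to mutate; pass the code through unchanged
    pos, old, new = rng.choice(candidates)
    return code[:pos] + new + code[pos + len(old):]

rng = random.Random(0)
original = "def clamp(x, lo, hi):\n    return lo if x <= lo else (hi if x >= hi else x)"
mutated = inject_error(original, rng)
```

The interview signal then comes from whether the candidate reads the returned code closely enough to catch the flipped comparison, rather than pasting it straight into the editor.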
staticautomatic
I did this while hiring last year and the number of candidates who got stuff wrong because they were too proud to just look up the answer was shocking.
prisenco
Is it pride or is it hard to shake the (reasonable, I'd say) fear the reviewer will judge regardless of their claims?
ryandrake
Exactly. You never know. Some interviewers will penalize you for not having something memorized and having to look it up, some will penalize you for guessing, some will penalize you for simply not knowing and asking for help. Some interviewers will penalize you for coming up with something quick and dirty and then refining it, some will penalize you for jumping right to the final product. There's no consistency.
staticautomatic
I do what I can to allay that fear. The rest is up to them.
random_walker
I love these kinds of interviews. They very closely simulate real-world on-the-job performance.
dalmo3
If I had to do real world on-job coding while someone looks over my shoulder at all times (i.e. screensharing), I'd be flipping burgers.
silasdavis
I don't care how you're good at it, so long as I can watch.
yieldcrv
> I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
Too many people are the opposite, so I would literally never tell you. And that works.
What can we do to change that?
I've had interviews where AI use was encouraged as well.
But so many casual tirades against it don't make me want to ever be forthcoming. Most organizations are realistically going to be 10 years behind the curve on this.
pnathan
Screen share or in person are, I think, the best available ways, though neither is a great option.
I do not want AI. The human is the value add.
I understand that people won't feel super comfortable with this, and I try not to roast the candidate with leetcode. It should be a conversation where I surface technical reality and understanding.
explorigin
Part of my resume review process is trying to decide if I can trust the person. If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate.
Once you get to the interview process, it's very clear if someone thinks they can use AI to help with the interview process. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response to my question 30 seconds later.
AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
brianstrimp
Good interviews are a conversation, a dialog to uncover how the person thinks, how they listen, how they approach problems and discuss. Also a bit detail knowledge, but that's only a minor component in the end. Any interview where AI in its current form helps is not good anyway. Keep in mind that in our industry, the interview goes both ways. If the candidate thinks your process is bad then they are less inclined to join your company because they know that their coworkers will have been chosen by a subpar process.
That said, I'm waiting for an "interview assistant" product. It listens in to the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?
Such a product could be useful for coding too. Like watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or, you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?
kmoser
That might be good for newbie developers, but for the rest of us it'll end up being the Clippy of AI assistants. If I want to know more about an API I'm using, I'll Google (or ask ChatGPT) for details; I don't need an assistant trying to be helpful and either treating me like a child, or giving me info that may be right but which I don't need at the moment.
The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.
sien
I'm pretty sure I've been in an interview with an 'interview assistant' and that it was another person.
This was 2-3 years ago in a remote interview. The candidate would hear the question, BS us a bit and then sometimes provide a good answer.
But then if we asked follow up questions they would blow those.
They also had odd 'AV issues' which were suspicious.
ktallett
This, and tbh this has always been the best way. Someone who has projects, whether personal or professional, and has the capability to discuss those projects in depth and with passion will usually be a better employee than a leet code specialist.
remus
Doesn't even have to be a project per se, if they can discuss some sort of technical topic in depth (i.e. the sort of discussion you might have when discussing potential solutions to a problem) then that's a great sign imo.
polishdude20
My resume has a bunch of personal projects on there as well as work experience, and the project experience seems not to help at all. Just rejection after rejection.
ktallett
My suggestion was in an ideal world which sadly this isn't. Your issue suggests they aren't tailored for each application, which could potentially be a reason. It is better to show why one project makes you a great fit as opposed to how many projects you have done. Sometimes the person in charge of hiring may not fully have all the expertise in the area they are hiring for.
vunderba
Agreed. This is why - while I won't ding an applicant for not having a public Github, I'm always happy when they do because usually they'll have some passion projects on there that we can discuss.
pdimitar
I have 23 years of experience and I am almost invisible on GitHub. In all those years I've been fired from 4 contracts due to various disconnects (one culture misfit, two underperformances due to an illness I wasn't aware of at the time, and one because the company literally restructured over the weekend and fired 80% of all engineers), and I have been contracting a lot in the last 10 years (we're talking 17-19 gigs).
If you look solely at my GitHub you'd likely reject me right away.
I wish I had the time and energy for passion projects in programming. I so wish it was so. But commercial work has all but destroyed my passion for programming, though I know it can be rekindled if I can ever afford to take a properly long sabbatical (at least 2 years).
I'll more agree with your parent / sibling comments: take a look at the resume and look for bad signs like too vanilla / AI language, too grandiose claims (though when you are experienced you might come across as such so 50/50), or almost no details, general tone etc.
And the best indicator is a video call conversation, I found as a candidate. I am confident in what I can do (and have done), I am energetic and love to go for the throat of the problems on my first day (provided the onboarding process allows for it) and it shows -- people have told me that and liked it.
If we're talking passion, I am more passionate about taking a walk with my wife and discussing the current book we're reading, or getting to know new people, or going to the sauna, or wondering what's the next meetup we should be going to, stuff like that. But passion + work, I stand apart by being casual and not afraid of any tech problems, and by prioritizing being a good teammate first and foremost (several GitHub-centric items come to mind: meaningful PR comments and no minutiae, good commit messages, proper textual comment updates in the PR when f.ex. requirements change a bit, editing and re-editing a list of tasks in the PR description).
I already do too much programming. Don't hold it against me if I don't live on the computer and thus have no good GitHub open projects. Talk to me. You'll get much better info.
johnnyanmac
Ironically, I'd probably have more GitHub projects if I hadn't spent 20 months looking for a full-time job.
And tbh, at the senior level they rarely care about personal projects. I must have had 60+ interviews, and I feel a lack of a GitHub cost me maybe 2 positions. When your job is getting a job, you rarely have the time for passion. I'm doing contract work in the meantime; it prevents gaps from showing, is more appealing than a personal project, and I can talk about it to the extent of my NDA (plenty of tech to talk about without revealing the project).
aakresearch
Brilliantly put! Upvoted and "favorited".
I would also add meticulous attention to documenting requirements and decisions taken along the development process, especially where compromises were made. All the "why's", so to speak.
But yes, commercial development, capital-A "Agile" absolutely kills the drive.
dennis_jeeves2
>I am energetic and love to go for the throat of the problems on my first day
The long term goal should be that you should NOT be energetic, yet be able to pretend that you are. We'll see where you are at the 33 year mark :)
satvikpendem
Also because most people are busy with actual work and don't have the time to have passion projects. Some people do, and that's great, but most people are simply not passionate about labor, regardless of what kind of labor it is.
nyrikki
To add to this, lots of senior people in the consulting world are brought in under escalations. They often have to hide the fact that they are an external resource.
Also if you have a novel or disclosure sensitive passion project, GitHub may be avoided even as a very conservative bright line.
As stated above I think it can be good to find common points to enhance the interview process, but make sure to not use it as a filter.
AlgebraFox
I really hate those who ask for GitHub profiles. Mine is pseudonymous and I don't want to share it with my employer or anyone else. Privacy aside, I don't understand why a company would even expect the candidate to have made free contributions in the first place. Can't the candidate have other hobbies to enjoy or learn from?
yosito
> If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate
So you just subjectively say "this resume is too perfect, it must be bullshit"? How the fuck is any actual, qualified engineer supposed to get through your gauntlet of subjectivity?
colonial
You'd be surprised at how good you can get at sniffing out slop, especially when it's the type prompted by fools who think it'll get them an easy win. Often the actual content doesn't even factor in - what triggers my mental heuristics is usually meta stuff like tone and structure.
I'm sure some small % of people get away with it by using LLaMA-FooQux-2552-Finetune-v3-Final-v1.5.6 or whatever, but realistically, the majority is going to be obvious to anyone that's been force-fed slop as part of their job.
alok-g
The strong language used aside, indeed, we should be cautious of our own potential biases when screening or otherwise.
I am imagining an AI saying my CV is AI-generated, when in reality, I do not even use Auto-correct or Auto-suggest when I (type)write! :-)
RajT88
> AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
Generally, this is how to figure out if a candidate is full of crap or not. When they say they did a thing, ask them questions about that thing.
If they can describe their process, the challenges, how they solved the challenges, and all of it passes the sniff test: If they are bullshitting, they did crazy research and that's worth something too.
satvikpendem
There are much more sophisticated methods than that now with AI, like speech to text to LLM. It's getting increasingly harder to detect interviewees cheating.
yowlingcat
I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.
If you do a rote interview that's easy to game with AI, it will certainly be harder to detect them cheating.
If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
satvikpendem
> If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
I understood their point but my point is a direct opposition to theirs, that at some point with AI advances this will essentially become impossible. You can make it as open ended as you want but if AI continues to improve, the human interviewee can simply act as a ventriloquist dummy for the AI and get the job. Stated another way, what kind of "effective and well designed open ended interview" can you make that would not succumb to this problem?
hibikir
There's candidates running speech-to-text that avoid the noticeable delays, but it's still possible to do the right kind of digging the AI will almost always refuse to do, because it's way too polite.
It's as if we were testing for replicants in Blade Runner: The AI response will rarely figure out you are aiming to look for something frustrating, that they are actually proud of, or figure out when you are looking for a hot take you can then disagree with.
lolinder
The traditional tech interview was always designed to optimize for reliably finding someone who was willing to do what they were told even if it feels like busywork. As a rule someone who has the time and the motivation to brush up on an essentially useless skill in order to pass your job interview will likely fit nicely as a cog in your machine.
AI doesn't just change the interviewing game by making it easy to cheat on these interviews, it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat—unless you're hiring for a very short term gig what you need now is someone with high creative potential and great teamwork skills.
And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.
The days of the interchangeable cog are over, and with them easy answers for interviewing.
nouveaux
Have you spent a lot of time trying to hire people? I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees. This perspective smells completely like "If I were in charge, things would be so much better." Guess what? If you were to take your idea and try to lead this change across a 100 people engineering org, there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
"talk about their past projects, their past teams, how they learn, how they collaborate"
You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
dakiol
My take is:
- “big” tech companies like Google, Amazon, Microsoft came up with these types of tech interviews. And there it seems pretty clear that for most of their positions they are looking for cogs
- The vast majority of tech companies have just copied what “big” tech is doing, including tech interviews. These companies may not be looking for cogs, but they are using an interview process that’s not suitable for them
- Very few companies have their own interview process suitable for them. These are usually small companies and therefore the number of engineers in such companies is negligible to be taken into account (most likely, less than 1% of the audience here work at such companies)
nouveaux
And what is wrong with being a cog? Not everyone is going to invent the next ai innovation and not everyone is cut out to build the next hot programming language.
Bugs need to be fixed. Features need to be implemented. If it weren't for cogs, you'd have people just throwing new projects over the fence and dropped 6 months after release. Don't want to be another cog? Join a startup. Plenty of those hiring. The reality is that when you work at a large company, you're one of 50,000 people. By definition, only 1% are in the top 1%.
Someone has to wash the dishes and clear the tables. Let's stop looking down at jobs just because it's not hot and sexy. People who show up and provide value is great and should be appreciated.
bodge5000
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This is the job of a good interviewer. I've run the gauntlet from terrible to great answers to the exact same questions depending on the interviewer. If you literally just ask that question out of the blue, you'll either get a bad or rehearsed response. If you establish some rapport, and ask it in a more natural way, you'll get a more natural answer.
It's not easy, but neither is being on the other side of the interviewer, and that's never been accepted as an excuse
dennis_jeeves2
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
The council itself is made of "busywork" worker bees. Slave hiring slaves - the vast majority of IT interviewers and candidates are idiot savants - they know very little outside of IT, or even realize that there is more to life than IT.
northern-lights
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This was the norm until perhaps for about the last 10-15 years of Software Engineering.
lolinder
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
I didn't say that. I said that this style of interview was designed to hire pluggable cogs. As others have noted, that was the correct move for Big Tech and was cargo culted into a bunch of other companies that didn't know why their interviews were shaped the way they were.
> there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
In answer to your original question: yes, I'm actively involved in hiring at a 100+ person engineering org that hires this way. And no, we're not looking to figure out how to hire compliant people, we're hiring engineers who will push back and do what works well, not just act because an executive says so.
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
Only if you suck at making people comfortable and at understanding different (potentially awkward) communication styles. You don't have to discriminate against people for being awkward, that's a choice you can make. You can instead give them enough space to find their train of thought and pursue it, and it does work—I recently sat in on an interview like that with someone who fits your description exactly, and we strongly recommended him.
dahart
> what you need now is someone with high creative potential and great teamwork skills.
That’s exactly what we always needed, long before LLMs arrived. That’s why all the interviews I’ve seen or give already were designed to have conversations.
I’m agreeing with you, but I’ve never seen these ‘interchangeable cog’ interviews you’re talking about.
lolinder
Right, I agree. The leetcode interviews are a bad fit for almost every company—they only made sense in the Googles and Microsofts that invented them and actually did want to optimize for cogs.
shihab
To get an idea of just how advanced cheating tools have become, take a look here:
I think every interviewer and hiring manager ought to know about or be trained on these tools; your intuition about a candidate's behaviour isn't enough. Otherwise, we will soon reach a tipping point where honest candidates are at a severe disadvantage.
paxys
Tbh I’m very happy these tools exist. If your company wants to ask dumb formulaic leetcode questions and doesn’t care about the candidate’s actual ability then this is what you deserve. If they can automate the interview so well then they should also be able to automate the job right? Or are your interview questions not representative of what the job actually entails?
shihab
I understand this sentiment for experienced developers. It is an imperfect signal. But what, in your opinion, is a better signal for juniors or new grads?
Every alternative I can think of is either worse, or sounds nice but is impractical to implement at scale.
I don't know about you, but most interviewers out there don't have the ability to judge the technical merit of a bullshitter's contribution to a class or internship project in half an hour, especially if it's in a domain the interviewer has no familiarity with. And by the way, not all of them are completely dumb; they do know computer science, just perhaps not as well as an honest competitor.
johnnyanmac
>But what is in your opinion a better signal for junior or new grads?
They are juniors, I don't expect them to be experts, I expect eagerness and passion. They spent 4 or more years focusing on schooling, show me the results of your projects. Let them talk and see how well they understand what they did. Side projects are even better to stand out.
And you know... apparently people can still fail fizzbuzz in 2025. If you really question their ability to code, ask the basics, not if they can write a Sudoku verifier on the spot. If you aren't a sudoku game studio I don't see the application outside of "can they work with arrays?"
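For context on just how basic "the basics" are, a minimal fizzbuzz (the canonical can-they-work-with-loops-and-conditionals check) is only a few lines:

```python
def fizzbuzz(n):
    """Return the fizzbuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```

If a candidate struggles with something at this level, no amount of follow-up questioning is going to salvage the signal.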
>I don’t know about you, but most interviewers out there don’t have the ability to judge the technical merit of a bullshitters’s contribution to a class or internship project in half an hour, specially if it’s in a domain interviewer has no familiarity with.
Everyone has a different style. Personally I care a lot less about programming proficiency and a lot more about technical communication. If they only wrote 10 lines of code for a group project but can explain every aspect of the project as if they'd written it themselves, what am I really missing? That sort of technical reasoning accompanied by poor coding is a lot rarer than the alternative: a Leetcode wizard who can't grasp architectural concepts or adjust to software tooling, in my experience.
the_snooze
>But what is in your opinion a better signal for junior or new grads?
For students specifically, the strongest signal is if they've done research with a past collaborator of mine and my collaborator vouches for them. It's a great signal because it has a very high barrier to entry and absolutely does not scale.
Realistically, being able to speak confidently about something they did/built during their education is a decent proxy. If they can handle open-ended follow-up questions like "what did you learn?" and "what trade-offs did you make?" and "how would you tweak it under X different requirements?" then that's a great signal too.
These aren't "gotcha" questions, but they insist on the candidate being reasonably competent and (most importantly) an actual human who can think on their feet.
forrestthewoods
> Or are your interview questions not representative of what the job actually entails?
100% of all job interviews are a proxy. It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
A leetcode interview either is or isn't a meaningful proxy. AI tools either do or don't invalidate that proxy.
Personally, I think leetcode interviews are an imperfect but relatively decent proxy, and that AI tools render that proxy invalid.
Hopefully someday someone invents a better interview scheme that can reliably and consistently scale to thousands of interviews, hundreds of hires, and dozens of interviewers. That’d be great. Alas it’s a really hard and unsolved problem!
alok-g
>> It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
>> ... leetcode interview are an imperfect but relatively decent proxy.
I think all this is just the status quo that should be challenged instead of being justified.
When I conduct interviews (environment: a FAANG company), I focus on (a) fundamental understanding and (b) practical problems. None of the coding problems I pose are more than O(N) in complexity. Yet my hiring decisions have never gone wrong.
pydry
Yeah, it's just a pity that human stupidity perpetuated leetcode as an interviewing tool to the point that AI had to kill it....
I'm really happy it's finally broken, though. Dumbest fad our industry ever had.
lubujackson
I dunno... estimating how many golf balls fit in a bus or explaining why manhole covers are round makes leetcode look almost... useful by comparison.
crooked-v
I think this is the first interview cheating tool I've seen that feels morally justified to me. I wonder if it will actually change company behavior at all.
0x20cowboy
The faster leetcode interviews can be completely broken to the point they are abandoned the better.
Der_Einzige
Glad this exists and big +1 to the creator.
"Cheating" on leetcode is a net positive for society. Unironically.
blazing234
this is a good thing.
anyone I know who actually got a job through a leetcode-style interview in the last two years cheated. they would get their friends to mirror their monitor and then type in the answers from ChatGPT LOL
low_tech_punk
We are approaching a singularity where we actually want to hire the cheating tool, not the cheater.
dinkumthinkum
I strongly disagree. This is nothing. You can sort out if someone is using something like this to cheat. You have a conversation. You can ask conceptual questions about algorithms and time complexity and figure out their level and see how their sophistication matches their solution on the LeetCode problem or whatever. Now, if you have really bad intuition or understanding of human behavior then yeah it would probably be hard but in that case being a good interviewer is probably hopeless anyway.
ktallett
The key is having interviewers that know what they are talking about, so in-depth meandering discussions can be had regarding personal and work projects, which usually makes it clear whether the applicant knows what they are talking about. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.
_puk
This, completely.
You ask a rote question and you'll get a rote answer while the interviewee is busy looking at a fixed point on the screen.
You then ask a pointed question about something they know or care about, and suddenly their face lights up, they're animated, and they are looking around.
It's a huge tell.
crooked-v
You know, this makes me wonder if a viable remote interview technique, at least until real-time deepfaking gets better, would be to have people close their eyes while talking to them. For somebody who knows their stuff it'll have zero impact; for someone relying entirely on GPT, it will completely derail them.
steve_taylor
A filter could probably do it already. There are already filters to make you appear to be looking at the camera no matter where your eyes are pointing.
ickelbawd
That’s an interesting idea. Sadly I think the next AI interviewing tool to be developed in response would make you look like your eyes are closed. But in the interim period it could be an interesting way to interview. Doesn’t really help for technical interviews where they kinda need to have their eyes open, but for pre-screens maybe…
wiether
> For somebody who knows their stuff it'll have zero impact
I just tried it, and it's hard. It feels like being asked to hold my breath while writing something.
I need to focus so much on keeping my eyes closed that I don't have enough bandwidth left to think about anything relevant.
danielbln
This is the way. We do an intro call, an engineering chat (exactly as you describe), a coding challenge, and two team chat sessions in person. At the end of that, we usually have a good feeling about how sharp the candidate is, if they like to learn and discover new things, and what their work ethic is. It's not bulletproof, but it removes a lot of noise from the signal.
The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
satvikpendem
> The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
Do you state this upfront or is it some hidden requirement? Generally I'd expect an interview coding exercise to not be done with AI, but if it's a hidden requirement that the interviewer does not disclose, it is unfair to be penalized for not reading their minds.
ktallett
I would say as long as it is stated you can complete the coding exercise using any tool available it is fine. I do agree, no task should be a trick.
I am personally of the view you should be able to use search engines, AI, anything you want, as the task should be representative of doing the task in person. The key focus has to be the programmer's knowledge and why they did what they did.
danielbln
Well, the challenge involves using a python LLM framework to build a simple RAG system for recipes.
It's not a hidden requirement per se to use LLM assistance, but the candidate should have a good answer ready why they didn't use an LLM to solve the challenge.
crooked-v
> as it's that much of a productivity boost when used right
Frankly, if an interviewer told me this, I would genuinely wonder why what they're building is such a simple toy product that an LLM can understand it well enough to be productive.
soheil
its demise
dijit
I've always just tried to hold a conversation with the candidate, what they think their strengths are weaknesses are and a little probing.
This works especially well if I don't know the area they're strongest in, because then they get to explain it to me. If I don't understand it then it's a pretty clear signal that they either don't understand it well enough or are a poor communicator. Both are dealbreakers.
Otherwise, for me, the most important thing is gauging: Aptitude, Motivation and Trustworthiness. If you have these three attributes then I could not possibly give a shit that you don't know how kubernetes operators work, or if you can't invert a binary tree.
You'll learn when you need it; it's not like the knowledge is somehow esoteric or hidden.
punk_coder
This is how I interview potential hires. I’ll admit I haven’t interviewed someone below a senior level in probably 10 years, so I interview someone that has a resume with experience that I can draw from. I read what they’ve worked on and just go from there. I hope I never have to submit someone to some stupid take home test or Leet Code interview.
vrosas
As someone currently job searching it hasn’t changed much, besides companies adding DO NOT USE AI warnings before every section. Even Anthropic forces you to write a little “why do you want to work here DO NOT USE AI” paragraph. The irony.
Pooge
They will very happily use AI to evaluate your profile, though :)
pizzalife
Applying at Anthropic was a bad experience for me. I was invited to do a timed set of leetcode exercises on some website. I didn't feel like doing that, and focused on my other applications.
Then they emailed me a month later after my "invitation" expired. It looked like it was written by a human: "Hey, we're really interested in your profile, here's a new invite link, please complete this automated pre-screen thingie".
So I swallowed my pride and went through with that humiliating exercise. Ended up spending two hours doing algorithmic leetcode problems. This was for a product security position. Maybe we could have talked about vulnerabilities that I have found instead.
I was too slow to solve them and received some canned response.
x0x0
fyi, that's because (from experience) the last job req I publicly posted generated almost 450 responses, and (quite generously) over a third were simply not relevant. It was for a full-stack Rails eng. Here, I'm not even including people whose experience was Django or even React; I mean people with no web experience at all, or who were not in the time zone requested. Another 20% or so were nowhere near the experience level (senior) requested either.
The price of people bulk applying with no thought is I have to bulk filter.
Pooge
So you allow yourself to use AI in order to save time, but we have to put up with the shit[1] companies make up? That's good, it's for the best if I don't work for a company that thinks so lowly of its potential candidates.
[1]: Including but not limited to: having to manually fill a web form because the system couldn't correctly parse a CV; take-home coding challenges; studying for LeetCode interviews; sending a perfectly worded, boot-licking cover letter.
meter
For the time being, I’ve banned LLMs in my interviews.
I want to see how the candidate reasons about code. So I try to ask practical questions and treat them like pairing sessions.
- Given a broken piece of code, can you find the bug and get it working?
- Implement a basic password generator, similar to 1Password (with optional characters and symbols)
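For reference, the password generator exercise might come out something like this minimal sketch (the option names and symbol set are my own assumptions, not the interviewer's exact spec):

```python
import secrets
import string

def generate_password(length=16, digits=True, symbols=True):
    """Build a random password from the selected character classes."""
    if length < 1:
        raise ValueError("length must be positive")
    pool = string.ascii_letters
    if digits:
        pool += string.digits
    if symbols:
        pool += "!@#$%^&*()-_=+"
    # secrets, not random, so the output is cryptographically strong
    return "".join(secrets.choice(pool) for _ in range(length))
```

The interesting part of the discussion isn't the loop; it's whether the candidate reaches for `secrets` over `random` and can explain why.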
If you can reason about code without an LLM, then you’ll do even better with an LLM. At least, that’s my theory.
I never ask trick questions. I never pull from Leetcode. I hardly care about time complexity. Just show me you can reason about code. And if you make some mistakes, I won’t judge you.
I’m trying to be as fair as possible.
I do understand that LLMs are part of our lives now. So I’m trying to explore ways to integrate them into the interview. But I need more time to ponder.
meter
Thinking out loud, here’s one idea for an LLM-assisted interview:
- Spin up a Digital Ocean droplet
- Add the candidate’s SSH key
- Have them implement a basic API. It must be publicly accessible.
- Connect the API to a database. Add more features.
- Set up a basic deployment pipeline. Could be as simple as a script that copies the code from your local machine to the server.
Anything would be fair game. The goal would be to see how the candidate converses with the LLM, how they handle unexpected changes, and how they make decisions.
Just a thought.
blazing234
for the first point you better provide a jira ticket with steps to get there (;
i would just look at stack overflow for your second point lol...
MacsHeadroom
Changed enormously. Both resumes and interviews are effectively useless now. If our AI agents can't find a portfolio of original work nearly exactly matching what we want to hire you for, then you aren't ever going to hear from us. If you are one of the 1 in 4000 applicants who gets an interview, then you're already 70% likely to get an offer and the interview is mostly a formality.
Gigachad
What worked for me is just ignoring the job listing websites, and calling recruiters directly on the phone. Don’t bother hitting “easy apply” just scroll to the bottom and call the number.
I’ve also been asked, for the first time in ages, to come to the company’s office to do interviews.
andrewflnr
What do you tell them on the phone? Are they prepared for just "Hi I want to apply for the $job position"? And do they have an answer besides "cool, use the website"?
Gigachad
They put their phone number there because they want you to call it. I say "I saw this position <position name> advertised on LinkedIn and I'm interested, is this still available?"
Last time I did this they told me it is, but that they are at the late stages of interviewing, so I shouldn't bother applying for that one. But they took down my details and had other jobs that matched what I was looking for. Recruiters are salespeople, and you just reverse cold-called them, making their job easier. The majority of applications are AI bots and people who don't live in the country the job is listed in. By making a phone call you are at the top of the list of "most likely to be a legitimate applicant".
tmpz22
If the interview is mostly a formality is it still multiple hours of leet code?
MacsHeadroom
No, there are AIs specifically for solving leetcode. It's a waste of time.
johnnyanmac
>If our AI agents can't find a portfolio of original work nearly exactly what we want to hire you for
that'd be a huge issue for most candidates (and basically all top candidates) because "exactly what you want to hire you for" is probably not open source code to begin with.
>If you are one of the 1 in 4000 applications who gets an interview then you're already 70% likely to get an offer and the interview is mostly a formality.
That has not been my experience at all in 2023/2024.
fifilura
Does that mean you will not hire anyone without a public portfolio?
Pooge
I thought that meant what you typically write in the "Experience" section. GP, am I wrong?
Is everyone writing a "Projects" section by rewording what they wrote in "Experience"?! For me, "Projects" should strictly be personal projects. If not, maybe that's what I'm missing.
sshine
Projects are personal projects, or at least projects in which you did a distinguishable effort.
They don't have to be public to the whole world, you can have links that are only in your resume.
But if they're on GitHub, they have to be public, since there aren't unlisted repositories.
MacsHeadroom
I'm saying the sections of the resume don't matter at all. The resume is basically ignored. You either have public code you can point to on Github or you aren't ever hearing from us.
MacsHeadroom
Essentially, yes. Public portfolios come in different flavors though. Most often it's code. But sometimes it's research, a blog, transcripts of talks ripped from YouTube.
crooked-v
> a portfolio of original work
I'm too busy doing actual paid work for companies for that.
lnsru
That’s the reality for most people: creating things under NDA, with tools watching for IP theft, so not a single line of code can leave the company. I know a guy who has a portfolio, but he’s a freelance web designer.
MacsHeadroom
IP secrecy isn't a moat in the age of AI. Open Source is the only way.
Have you gone back to in-person whiteboards? More focus on practical problems? I really have no idea how the traditional tech interview is supposed to work now when problems are trivially solvable by GPT.