Ask HN: What is interviewing like now with everyone using AI?
Have you gone back to in-person whiteboards? More focus on practical problems? I really have no idea how the traditional tech interview is supposed to work now when problems are trivially solvable by GPT.
155 comments · February 2, 2025
sergioisidoro
I've let people use GPT in coding interviews, provided that they show me how they use it. In the end I'm interested in how a person solves a problem and thinks about it. Do they just accept whatever crap GPT gives them, or can they take a critical approach to it?
So far, everyone who elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI. My opinion is that if you have a good interview process, you can clearly see who the good candidates are, with or without AI.
alpha_squared
We do the same thing. It's perfectly fine for candidates to use AI-assistive tooling, provided they can edit/maintain the code and not just sit in a prompt the whole time. The more heavily a candidate relies on LLMs, the worse they tend to do. It really comes down to discipline.
Keyframe
Same thing here. The interview is basically representative of what we do, though it also depends on the level of seniority. I just ask people to share their screen and use whatever they want / feel comfortable with. Google, ChatGPT, call your mom, I don't care, as long as you walk me through how you're approaching the thing at hand. We've all googled tar xvcxfgzxfzcsadc, or what permission a .pem needs, is it 400? No shame in any of it; we all use all of these things throughout the day. Let's simulate a small task at hand and see where we end up. Similarly, I've seen the same bias: people leaning more on LLMs do worse than those just googling or, gasp, opening documentation.
randall
i like this. it seems like a good and honest use of time.
twoparachute45
My company, a very, very large one, is transitioning back to in-person-only interviews due to rampant cheating during remote ones.
As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing and then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.
mr_00ff00
So depressing to hear "because of rampant cheating."
As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
But to this day I haven’t done either.
chowells
Here's the thing: 95% of cheaters still suck, even when cheating. It's hard to imagine how people can perform so badly while cheating, yet they consistently do. All you need to do to stand out is not be utterly awful. Worrying about what other people are doing is more detrimental to your performance than anything else. Just focus on yourself: being broadly competent, knowing your niche well, and being good at communicating how you learn when you hit the edges of your knowledge. Those are the skills that always stand out.
ryandvm
I don't know, I kind of feel like leetcode interviews are a situation where the employer is cheating. I mean, you're admittedly filtering out a great number of acceptable candidates, knowing that if you just find 1 in 1,000, that'll be good enough. It is patently unfair to the individuals who are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.
In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).
I say, if you can cheat at an interview with AI, do it.
kortilla
The employer sets the terms of the interview. If you don’t like them, don’t apply.
What you’re suggesting here isn’t any different than submitting a fraudulent resume because you disagree with the required qualifications.
twoparachute45
I dunno why there is always the assumption in these threads that leetcode is being used. My company has never used leetcode-style questions, and likely never will.
I work in security, and our questions are pretty basic stuff. "What is cross-site scripting, and how would you protect against it?", "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?" Stuff like that. And then a follow-up or two customized to the candidate's response.
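To be concrete about the log question: a perfectly reasonable answer is a dozen lines of Python. This is just one shape it might take (the question doesn't specify the log format, so this sketch assumes the IP is the first whitespace-separated field of each line):

    from collections import Counter

    def frequent_ips(path, threshold=10):
        # Tally the first field of each line, assuming that's where the IP lives.
        counts = Counter()
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields:
                    counts[fields[0]] += 1
        # Return only the IPs seen at least `threshold` times.
        return sorted(ip for ip, n in counts.items() if n >= threshold)

Anything in that spirit, in any language, with a sensible discussion of edge cases, would pass.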
I really don't know how we could possibly make it easier for candidates to pass these interviews. We aren't trying to trick people, or weed people out. We're trying to find people that have the foundational experience required to do the job they're being hired for. Even when people do answer them incorrectly, we try to help them out and give them guidance, because it's really about trying to evaluate how a person thinks rather than making sure they get the right answer.
I mean hell, it's not like I'm spending hours interviewing people because I get my rocks off by asking people lame questions or rejecting people; I want to hire people! I will go out of my way to advocate for hiring someone that's honest and upfront about being incorrect or not knowing an answer, but wants to think through it with me.
But cheating? That's a show stopper. If you've been asked to not use ChatGPT, but you use it anyway, you're not getting the benefit of the doubt. You're getting rejected and blacklisted.
ghaff
Note also "And the majority of the time the answer is incorrect anyway."
I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews, at least for well-designed and well-run ones. Maybe in some narrow domains for junior people.
As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness, and companies generally deserve whoever they end up with.
wccrawford
When I was interviewing entry level programmers at my last job, we gave them an assignment that should only take a few hours, but we basically didn't care about the code at all.
Instead, we were looking to see if they followed instructions, and if they left anything out.
I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT et al. would almost always fail this exam because of how bad they are at making sure everything is included.
And bad programmers also failed it. It always left us with a few candidates who paid attention, and from there we figured that if they can do that, they can learn the rest. It seemed to work quite well.
I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.
z3t4
For those tests I never follow the rules; I just make something quick and dirty, because I refuse to spend unpaid hours. In the interview, the first question is why I didn't follow the instructions, and they think my reason is fair.
Companies seem to think that we program just for fun and ask for a full-blown app... while also underestimating the time candidates actually spend making it.
MathCodeLove
If you’re spending the time applying and submitting something then you might as well spend the extra 30 minutes or so to do it right, no?
prisenco
The industry (all industries, really) might want to reconsider online applications, or at least privilege in-person resume drop-offs, because the escalating AI application/evaluation war doesn't seem to be helping anyone.
datavirtue
No, it's because AI shifted power over to the applicant.
wsintra2022
Is it cheating if I can solve the problem using the tools of AI, or is it just solving the problem?
isbvhodnvemrwvn
For the goal of the interview - showing your knowledge and skills - you are failing miserably. People know what LLMs can do, the interview is about you.
risyachka
I guess it's more a question of whether you can solve the problem without AI.
In most interview tasks you are not solving the task "with" AI.
It's the AI that solves the task while you watch it do it.
ryan-duve
My startup got acquired last year so I haven't interviewed anyone in a while, but my technical interview has always been:
- share your screen
- download/open the coding challenge
- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare
My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
shaneoh
I recently interviewed for my team and tried this same approach. I thought it made sense because I want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job.
It proved to be awkward and clumsy very quickly. Some candidates resisted it, since they clearly thought it would get them judged more harshly. Some candidates were on the other extreme and basically asked ChatGPT the problem straight up, even though I clarified up front: "You can even use ChatGPT, as long as you're not just directly asking for the solution to the whole problem and copy/pasting, obviously."
After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.
Aeolun
What are you supposed to ask ChatGPT if you can't just ask it the answer? That'd confuse me too.
layer8
Did you tell them that you “want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job”? Just curious.
bagels
If you really don't penalize them for this, you should clearly state it. Some people may still think they'll be penalized as that is the norm.
staticautomatic
I did this while hiring last year and the number of candidates who got stuff wrong because they were too proud to just look up the answer was shocking.
prisenco
Is it pride or is it hard to shake the (reasonable, I'd say) fear the reviewer will judge regardless of their claims?
ryandrake
Exactly. You never know. Some interviewers will penalize you for not having something memorized and having to look it up, some will penalize you for guessing, some will penalize you for simply not knowing and asking for help. Some interviewers will penalize you for coming up with something quick and dirty and then refining it, some will penalize you for jumping right to the final product. There's no consistency.
silasdavis
I don't care how you're good at it, as long as I can watch.
random_walker
I love these kinds of interviews. They very closely simulate real-world, on-the-job performance.
bbarnett
I'd be fine with the GPT side of things, as long as I could somehow inject poor answers, and see if the interviewee notices and corrects.
cpursley
That's actually a horribly awesome idea.
htrp
The trick is to phrase the problem in a way that GPT-4 will always give an incorrect answer (due to the vagueness of your problem), so that multiple rounds of guiding/correcting are needed to solve it.
gtirloni
That's pretty good because it can exhaust the context window quickly and then it starts spiraling out of control, which would require the candidate to act.
OutOfHere
[flagged]
evilduck
It's pretty obvious when someone's input focus changes to nothing or when their mouse leaves the screen entirely, or you could just ask to see their display settings to begin with. That doesn't solve for multiple computers, but it's pretty obvious in real time when someone's actual attention drifts or they suddenly have abilities they didn't have before.
Either way, screen sharing beats whiteboards. Even if we throw our hands up and give up, we'll be firing frauds before the probationary period ends.
OutOfHere
There is nothing fraudulent about using LLMs. If people can use them on the job, it's okay to use them on the interview. They're the calculators of tomorrow if not of today.
Interviewing just needs to adapt, such as by assessing a candidate's open-source projects and contributions. Not much more is needed. And if the candidate completely misrepresents their open-source profile, this can be handled by an initial contract-to-hire period.
dijit
I've always just tried to hold a conversation with the candidate about what they think their strengths and weaknesses are, with a little probing.
This works especially well if I don't know the area they're strongest in, because then they get to explain it to me. If I don't understand it then it's a pretty clear signal that they either don't understand it well enough or are a poor communicator. Both are dealbreakers.
Otherwise, for me, the most important thing is gauging: Aptitude, Motivation and Trustworthiness. If you have these three attributes then I could not possibly give a shit that you don't know how kubernetes operators work, or if you can't invert a binary tree.
You'll learn when you need it; it's not like the knowledge is somehow esoteric or hidden.
explorigin
Part of my resume review process is trying to decide if I can trust the person. If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate.
Once you get to the interview process, it's very clear if someone thinks they can use AI to help with it. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response 30 seconds later.
AI-proof interviewing is easy if you know what you're talking about. Look at the candidate's resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
ktallett
This, and tbh this has always been the best way. Someone who has projects, whether personal or professional, and has the capability to discuss those projects in depth and with passion will usually be a better employee than a leet code specialist.
remus
Doesn't even have to be a project per se; if they can discuss some sort of technical topic in depth (e.g. the sort of discussion you might have about potential solutions to a problem), then that's a great sign imo.
brianstrimp
Good interviews are a conversation, a dialog to uncover how the person thinks, how they listen, how they approach problems and discuss them. Also a bit of detail knowledge, but that's only a minor component in the end. Any interview where AI in its current form helps is not a good one anyway. Keep in mind that in our industry the interview goes both ways: if candidates think your process is bad, they are less inclined to join your company, because they know their coworkers will have been chosen by a subpar process.
That said, I'm waiting for an "interview assistant" product. It listens in to the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?
Such a product could be useful for coding too. Like something watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or, you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?
sien
I'm pretty sure I've been in an interview with an 'interview assistant' and that it was another person.
This was 2-3 years ago in a remote interview. The candidate would hear the question, BS us a bit and then sometimes provide a good answer.
But then if we asked follow-up questions, they would blow those.
They also had odd 'AV issues' which were suspicious.
kmoser
That might be good for newbie developers, but for the rest of us it'll end up being the Clippy of AI assistants. If I want to know more about an API I'm using, I'll Google (or ask ChatGPT) for details; I don't need an assistant trying to be helpful and either treating me like a child or giving me info that may be right but which I don't need at the moment.
The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.
vunderba
Agreed. That's why, while I won't ding an applicant for not having a public GitHub, I'm always happy when they do, because usually they'll have some passion projects on there that we can discuss.
pdimitar
I have 23 years of experience and am almost invisible on GitHub. In all those years I've been fired from 4 contracts due to various disconnects: one culture misfit, two under-performances due to an illness I wasn't aware of at the time, and one because the company literally restructured over the weekend and fired 80% of all engineers. And I have been contracting a lot in the last 10 years (we're talking 17-19 gigs).
If you look solely at my GitHub you'd likely reject me right away.
I wish I had the time and energy for passion projects in programming. I so wish it was so. But commercial work has all but destroyed my passion for programming, though I know it can be rekindled if I can ever afford to take a properly long sabbatical (at least 2 years).
I agree more with your parent / sibling comments: take a look at the resume and look for bad signs, like too-vanilla / AI language, too-grandiose claims (though when you are experienced you might come across as such, so 50/50), almost no details, the general tone, etc.
And the best indicator, I've found as a candidate, is a video-call conversation. I am confident in what I can do (and have done), I am energetic and love to go for the throat of the problem on my first day (provided the onboarding process allows for it), and it shows -- people have told me that and liked it.
If we're talking passion, I am more passionate about taking a walk with my wife and discussing the current book we're reading, or getting to know new people, or going to the sauna, or wondering what's the next meetup we should be going to, stuff like that. But passion + work, I stand apart by being casual and not afraid of any tech problems, and by prioritizing being a good teammate first and foremost (several GitHub-centric items come to mind: meaningful PR comments and no minutiae, good commit messages, proper textual comment updates in the PR when f.ex. requirements change a bit, editing and re-editing a list of tasks in the PR description).
I already do too much programming. Don't hold it against me if I don't live on the computer and thus have no good GitHub open projects. Talk to me. You'll get much better info.
nyrikki
To add to this, lots of senior people in the consulting world are brought in under escalations. They often have to hide the fact that they are an external resource.
Also, if you have a novel or disclosure-sensitive passion project, you may avoid GitHub entirely as a very conservative bright line.
As stated above, I think it can be good to find common points to enhance the interview process, but make sure not to use it as a filter.
satvikpendem
Also because most people are busy with actual work and don't have the time to have passion projects. Some people do, and that's great, but most people are simply not passionate about labor, regardless of what kind of labor it is.
satvikpendem
There are much more sophisticated methods than that now with AI, like speech-to-text into an LLM. It's getting harder and harder to detect interviewees cheating.
yowlingcat
I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.
If you do a rote interview that's easy to game with AI, it will certainly be harder to detect them cheating.
If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
satvikpendem
> If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
I understood their point, but my point stands in direct opposition to theirs: at some point, with AI advances, this will essentially become impossible. You can make the interview as open-ended as you want, but if AI continues to improve, the human interviewee can simply act as a ventriloquist's dummy for the AI and get the job. Stated another way: what kind of "effective and well designed open ended interview" can you make that would not succumb to this problem?
khazhoux
With AI making traditional coding problems trivial, tech interviews are shifting toward practical, real-world challenges, system design, and debugging exercises rather than pure algorithm puzzles. Some companies are revisiting in-person whiteboarding to assess thought processes, while others embrace AI, evaluating how candidates integrate it into their workflow. There's also a greater focus on explaining decisions, trade-offs, and collaboration. Instead of banning AI, many employers now test how effectively candidates use it while ensuring they have foundational skills. The trend favors assessing problem-solving in real work scenarios rather than just coding ability under artificial constraints.
lolinder
The traditional tech interview was always designed to optimize for reliably finding someone willing to do what they were told, even if it felt like busywork. As a rule, someone who has the time and the motivation to brush up on an essentially useless skill in order to pass your job interview will likely fit nicely as a cog in your machine.
AI doesn't just change the interviewing game by making it easy to cheat on these interviews; it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat: unless you're hiring for a very short-term gig, what you need now is someone with high creative potential and great teamwork skills.
And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.
The days of the interchangeable cog are over, and with them easy answers for interviewing.
nouveaux
Have you spent a lot of time trying to hire people? I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees. This perspective smells completely of "If I were in charge, things would be so much better." Guess what? If you were to take your idea and try to lead this change across a 100-person engineering org, there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
"talk about their past projects, their past teams, how they learn, how they collaborate"
You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
northern-lights
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This was the norm until perhaps the last 10-15 years of software engineering.
dakiol
My take is:
- “big” tech companies like Google, Amazon, Microsoft came up with these types of tech interviews. And there it seems pretty clear that for most of their positions they are looking for cogs
- The vast majority of tech companies have just copied what “big” tech is doing, including tech interviews. These companies may not be looking for cogs, but they are using an interview process that’s not suitable for them
- Very few companies have their own interview process suitable for them. These are usually small companies, and therefore the number of engineers at such companies is too small to be taken into account (most likely, less than 1% of the audience here works at such companies)
dennis_jeeves2
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
The council itself is made of "busywork" worker bees. Slaves hiring slaves. The vast majority of IT interviewers and candidates are idiot savants: they know very little outside of IT, and barely realize that there is more to life than IT.
ktallett
The key is having interviewers who know what they are talking about, so that in-depth, meandering discussions can be had about personal and work projects, which usually makes it clear whether the applicant knows what they are talking about. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.
_puk
This, completely.
You ask a rote question and you'll get a rote answer while the interviewee is busy looking at a fixed point on the screen.
You then ask a pointed question about something they know or care about, and suddenly their face lights up, they're animated, and they are looking around.
It's a huge tell.
crooked-v
You know, this makes me wonder if a viable remote interview technique, at least until real-time deepfaking gets better, would be to have people close their eyes while talking to them. For somebody who knows their stuff it'll have zero impact; for someone relying entirely on GPT, it will completely derail them.
danielbln
This is the way. We do an intro call, an engineering chat (exactly as you describe), a coding challenge, and 2 team chat sessions in person. At the end of that, we usually have a good feeling about how sharp the candidate is, whether they like to learn and discover new things, and what their work ethic is. It's not bulletproof, but it removes a lot of noise from the signal.
The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code, of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
satvikpendem
> The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code, of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
Do you state this upfront or is it some hidden requirement? Generally I'd expect an interview coding exercise to not be done with AI, but if it's a hidden requirement that the interviewer does not disclose, it is unfair to be penalized for not reading their minds.
ktallett
I would say as long as it is stated you can complete the coding exercise using any tool available it is fine. I do agree, no task should be a trick.
I am personally of the view you should be able to use search engines, AI, anything you want, as the task should be representative of doing the task in person. The key focus has to be the programmer's knowledge and why they did what they did.
danielbln
Well, the challenge involves using a Python LLM framework to build a simple RAG system for recipes.
It's not a hidden requirement per se to use LLM assistance, but the candidate should have a good answer ready for why they didn't use an LLM to solve the challenge.
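To give a feel for the shape of it, here's a stripped-down sketch of the retrieval loop at the heart of such a challenge. embed() and complete() are hypothetical stand-ins for whatever framework and model the candidate picks, not a real API:

    from math import sqrt

    def cosine(a, b):
        # Plain cosine similarity between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sqrt(sum(x * x for x in a))
        nb = sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def answer(question, recipes, embed, complete, k=3):
        # Retrieve the k recipes most similar to the question,
        # then pass them to the model as grounding context.
        q_vec = embed(question)
        ranked = sorted(recipes, key=lambda r: cosine(embed(r), q_vec),
                        reverse=True)
        context = "\n\n".join(ranked[:k])
        prompt = ("Answer the question using only these recipes:\n\n"
                  f"{context}\n\nQuestion: {question}")
        return complete(prompt)

A real submission would add chunking, a proper vector store, and embedding caching, but this is the core loop a candidate needs to be able to explain.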
crooked-v
> as it's that much of a productivity boost when used right
Frankly, if an interviewer told me this, I would genuinely wonder why what they're building is such a simple toy product that an LLM can understand it well enough to be productive.
rachofsunshine
We haven't seen major issues with AI with candidates on camera. The couple that have tried to cheat have done so rather obviously, and the problem we use is more about problem-solving than it is about reverse-a-linked-list.
This is borne out by results downstream with clients. No client we've sent more than a couple of people has ever had concerns about quality, so we're fairly confident that we are in fact detecting the cheating that is happening with reasonable consistency.
I actually just looked at our data a few days ago to see how candidates who listed LLMs or related terms on their resume did on our interview. On average, they did much worse (about half the pass rate, and double the hard-fail rate). I suspect this is a general "corporate BS factor" and not anything about LLMs specifically, but it's certainly relevant.
themanmaran
On our side we've transitioned to only in person interviews.
The biggest thing I've noticed is that take-home challenges have lost all value: GPT can plausibly solve almost anything you throw at it, and the submission doesn't give you any indication of how the candidate thinks.
And to be fair, I want a candidate who uses GPT / Cursor / whatever tools get the job done. But reading the same AI solution to a coding challenge doesn't tell me anything about how they think or approach problems.
ghaff
I'm not a fan of take-home challenges anyway (for the most part). Anything non-trivial is a big time suck, and you know some people will spend all weekend on your two-hour assignment.
Sometimes you have to, though. In my previous analyst stint, a writing sample was pretty non-negotiable unless they could point to publicly published material--which was much preferred. ChatGPT isn't much use there except to save some time; its output is very formulaic and wouldn't pass. Though, honestly, some people are worse on their own.
acwan93
I don’t know the answer, but I’d like to share that I asked a simple question about scheduling a phone interview to learn more about a candidate.
The candidate’s first response? “Memory updated”. That led to some laughs internally and then a clear rejection email.
buggy6257
My first read of this was they made a joke (not wise when scheduling for interviews sure but maybe funny) by intentionally responding that way.
This is because my brain couldn't fathom what is likely the reality here: that someone was just pumping your email through AI and pumping the response back, unedited and unsanitized, so the first thing you got back was just the first "part" of the AI response.
...Christ.
rantallion
I'm with you. Looking at the way people respond online to things now since LLMs and GenAI went mainstream is baffling. So many comments along the lines of "this is AI" when there are more ordinary explanations.
john-radio
Yeah I don't know about this specific situation, but as someone who is on the job market, is a good developer, but can come off as a little odd sometimes, I often wonder how often I roll a natural 1 on my Cha check and get perceived as an AI imposter.
acwan93
If anything, coming across as “a little odd” can be a sign I’m actually talking to a human.
acwan93
Your perception of the reality is spot on. For this round I was hiring for entry level technical support and we had limited time to properly vet candidates.
Unfortunately, what we end up doing is making some assumptions. If something seems remotely fishy, like that "Memory updated" or a typeface change (ChatGPT doesn't follow your text formatting when pasting into your email compose window), it raises a lot of eyebrows and very quickly leads to a rejection. There are other cases where the written English is flawless, but the phone interview makes it clear the candidate doesn't understand English anywhere near as well as the email/Indeed correspondence suggested.
Mind you, this is all before we even get to the technical knowledge part of any interview.
On a related hire, I am also in the unfortunate position where we may have to let a new CS grad go, because it seemed like every code change and task we gave him was fully copy/pasted through ChatGPT. When presented with a simple code performance and optimization bug, he was completely lost on general debugging practices, which led our team to question his previous work while onboarding. Using AI isn't against company policy (see: small team with limited resources), but personally I see over-reliance on ChatGPT as much, much worse than blindly following Stack Overflow.
gray_-_wolf
> typeface change
Long live plain text email.
() ascii ribbon campaign - against HTML e-mail
/\ www.asciiribbon.org - against proprietary attachments
anal_reactor
A friend of mine works with industrial machines, and was once tasked with translating a machine's user manual, even though he doesn't speak English. I do, and I had some free time, so I helped him. As an example, I was given the user manual for a different but similar machine.
1. The manual was mostly a bunch of phrases that were grammatically correct, but didn't really convey much meaning
2. The second half of the manual talked about a different machine than the first half
3. It was full of exceptionally bad mistranslations, and to this day "trained signaturee of the employee" is our inside joke
Imagine asking ChatGPT to write a manual, except it's so broken that it gives you five pages of complete bullshit. That was the real manual that shipped with a machine costing 100,000€ or so. And nobody had bothered to proofread it even once.
vrosas
As someone currently job searching it hasn’t changed much, besides companies adding DO NOT USE AI warnings before every section. Even Anthropic forces you to write a little “why do you want to work here DO NOT USE AI” paragraph. The irony.
Pooge
They will very happily use AI to evaluate your profile, though :)
pizzalife
Applying at Anthropic was a bad experience for me. I was invited to do a timed set of leetcode exercises on some website. I didn't feel like doing that, and focused on my other applications.
Then they emailed me a month later after my "invitation" expired. It looked like it was written by a human: "Hey, we're really interested in your profile, here's a new invite link, please complete this automated pre-screen thingie".
So I swallowed my pride and went through with that humiliating exercise. Ended up spending two hours doing algorithmic leetcode problems. This was for a product security position. Maybe we could have talked about vulnerabilities that I have found instead.
I was too slow to solve them and received some canned response.
x0x0
FYI, that's because (from experience) the last job req I publicly posted generated almost 450 responses, and (quite generously) over a third were simply not relevant. It was for a full-stack Rails eng. Here I'm not even including people whose experience was Django or even React; I mean people with no web experience at all, or who were not in the time zone requested. Another 20% or so were nowhere near the experience level (senior) requested either.
The price of people bulk applying with no thought is I have to bulk filter.
Pooge
So you allow yourself to use AI in order to save time, but we have to put up with the shit[1] companies make up? That's good, it's for the best if I don't work for a company that thinks so lowly of its potential candidates.
[1]: Including but not limited to: having to manually fill a web form because the system couldn't correctly parse a CV; take-home coding challenges; studying for LeetCode interviews; sending a perfectly worded, boot-licking cover letter.