Anthropic: "Applicants should not use AI assistants"
254 comments
February 3, 2025
latexr
> I don't find anything wrong with this
It’s not about being wrong, it’s about being ironic. We have LLMs shoved down our throats as this new way to communicate—we are encouraged to ask them to make our writing “friendlier” or “more professional”—and then one of the companies creating such a tool asks the very people most interested in it to not use it for the exact purpose we’ve been told it’s good at. They are asking you pretty please to not do the bad thing they allow and encourage everyone to do. They have no issue if you do it to others, but they don’t like it when it’s done to them. It is funny and hypocritical and pulls back the curtain a bit on these companies.
It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.
muzani
The LLM companies have always been against this kind of thing.
Sam Altman (2023): "something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points"
3 years ago people were poking fun at how restrictive the terms were - you could get your API key blocked if you used it to pretend to be a human. Eventually people just used other AIs for things like that, so they got rid of these restrictions that they couldn't enforce anyway.
myfonj
Interesting that this quote really contains "sender" where "recipient" was intended, but it had absolutely no impact on any reader. (I even asked Claude and ChatGPT if they noticed anything strange in the sentence, and both needed additional prompting to spot that mistake.)
https://x.com/sama/status/1631394688384270336
Thanks for this heads-up, by the way. I'd missed this particular tweet, but eventually arrived at the exact same observation.
blagie
> It reminded me of the time Roy Wood Jr visited a pro-gun rally where they argued guns make people safe, while simultaneously asking people to not carry guns because they were worried about safety. The cognitive dissonance is worth pointing out.
Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.
Most pro-gun organizations are heavily into gun safety. The message is that guns aren't unsafe if they're being used correctly. Most of the time, this means that most guns should be locked up in a safe, with ammo in a separate safe, except when being transported to a gun range, for hunting, or similar. When being used there, one should follow a specific set of procedures for keeping those activities safe as well.
It's a perfect analogy for the LLM here too. Anthropic encourages it for many uses, but not for the one textbox. Irony? Yes. Wrong? Probably not.
wepple
Huge miss on the gun analogy. The likes of the NRA are pushing for 50-state constitutional carry. Everyone has a gun on their person with no licensing requirements. Yet at the NRA conference they ban guns.
There’s probably actually some other hidden factor though, like the venue not allowing it.
Edit: FWIW those late night TV shows are nothing but rage bait low brow “comedy” that divides the country. But the above remains true.
numbsafari
These are the same people that insist we arm elementary school teachers and expect those teachers to someday pull the trigger on a child instead of having proper gun laws.
There is no irony.
7bit
> Well, no. It's irony, but it's only cognitive dissonance in a comedy show which misses the nuance.
Like the nuance between sending out your love and doing the Nazi salute? Or different?
amelius
Yes, and they should state that they also don't use AI in the selection process.
xylifyx
They don't, because they do. However, maybe the Anthropic AI isn't performing well on AI-generated applications.
I think they will get better results by having applicants talk to an AI during the application process.
jack_pp
Making your comms friendlier or whatever is one of the myriad ways to use LLMs. Maybe you personally have "LLMs shoved down your throat" by your corporate overlords. No one in their right mind can say that LLMs were created for such a purpose; it just so happens you can use them this way.
fmbb
LLMs are sold by corporate overlords to corporate overlords. They all know this is what it will be used for.
The writing was on the wall that the main use would be spam and phishing.
You can say the creators did not intend this purpose, but the technology was created with the knowledge that this would be the main use case.
YurgenJurgensen
LLMs aren’t making your comms friendlier; they’re just making them more vapid. When I see the language that ChatGPT spits out when you tell it to be friendly, I immediately think ‘okay, what is this person trying to sell me?’
xnorswap
I'm with you. I'm very surprised by the number of arguments which boil down to, "Well I can cheat and get away with it, so therefore I should cheat".
I have read that people are getting more selfish[1], but it still shocks me how much people are willing to push individualism and selfishness under the guise of either, "Well it's not illegal" or "Well, it's not detectable".
I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.
I guess that puts me at a serious disadvantage in the job market, but I am okay with that, I've always been okay with that. 20 years ago my cohort were doing what I thought were selfish things to get ahead, and I'm fine with not doing those things and ending up on a different lesser trajectory.
But that doesn't mean I won't also air my dissatisfaction with just how much people seem to justify selfishness, or don't even regard it as selfish to ignore this request.
[1] https://fortune.com/2024/03/12/age-of-selfishness-sick-singl...
latexr
> I think I'm just very much out of tune with the zeitgeist, because I can't imagine not going along with what's a polite request not to use AI.
No, what you are is ignoring the context.
This request comes from a company building, promoting, and selling the very thing they are asking you not to use.
Yes, asking you not to use AI is indeed a polite request. It is one you should respect. “The zeitgeist” has as many people in favour of AI as against it, and picking either camp doesn’t make anyone special. Either stance is bound to be detrimental in some companies and positive in others.
But none of that matters, what makes this relevant is the context of who’s asking.
xnorswap
I didn't miss that context, I understand who Anthropic are.
anal_reactor
Honesty is not something our modern societies optimize for, although I do wish things were different
Muromec
It's not just society, it's this particular company that optimizes for it.
mattigames
The ones paying are, in their vast majority, the most selfish of them all. For example, it would be reasonable to say that Jeff Bezos is one of the most selfish people on the planet. So in the end it doesn't boil down to "Well I can cheat and get away with it, so therefore I should cheat" but more like "Well I can cheat, get away with it, and the victim is just another cheater, so therefore I should cheat".
exe34
> "Well it's not illegal"
What's good for the gander.... I promise you they will use AI to vet your application.
dhruvrajvanshi
> I promise you they will use AI to vet your application.
So?
implmntatio
> If you're looking to score, sure, it's somewhat unethical but it works.
Observation/Implication/Opinion:
Think reciprocal forces and trash-TV ethics, in both closed and open systems. The consequences are continuously diminished AND unvarying returns, professionally as well as personally, for all parties involved.
Nothing and nobody is resilient "enough" to the force/'counter'-force mechanism, so you had better pick the right strategy. Waiting/processing for a couple of days, lessons, and honest attempts yield exponentially better results than cheating.
Companies should beware of this if they expect results that are qualitatively AND "honestly" safe & sound. This has been ignored in the past decades, which is why we are "here". Too much work, too many jobs, and way too many enabling outs have been lost irreversibly, at the individual level as well as in nano-, micro-, and macro-economics.
Applicants using AI is fine but applicants not being able to make that output usefully THEIRS is a problem.
passwordoops
I agree with your sentiment. But coming from a generative AI company that says "career development" and "communication" are two of their most popular use cases... That's like a tobacco company telling employees they are not permitted to smoke tobacco.
radu_floricica
Well, they probably aren't permitted to smoke tobacco indoors.
I honestly fail to see even the irony. "Company that makes hammers doesn't want you to use hammers all the time". It's a tool.
But if I squint, I _can_ see a mean-spirited "haha, look at those hypocrites" coming from people who enjoy tearing others down for no particular reason.
passwordoops
But it's ok for Anthropic's marketing, sales and development teams to push the use case (AI for writing, communication and career development)?
Even when squinting I can't see a genuine argument for why Anthropic shouldn't be raked over the coals for their sheer hypocrisy.
jusssi
A brewery telling their employees to not drink the product while at work?
lr4444lr
It's like an un-inked tattoo artist or a teetotaling sommelier.
The optics are just bad. Stand behind your product, or accept that you will be fighting ridicule and suspicion endlessly.
Muromec
It is a very sensible position, and I think the quote is a bit out of context, but the important part here is who it is coming from -- the company that makes money on both cheating in the job application process (which harms employers) and replacing said jobs with AI, or at least creating another excuse for layoffs (which harms the employees).
In a sense, they poisoned the well and don't want to drink from it now. Looking at it from this perspective justifies (in the eyes of some people at least) said cheating. Something something catalytic converter from the company truck.
bayindirh
> if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.
Sorry, the thought process of considering it acceptable to use an LLM for a job application, esp. for a field which requests candid input about one's motivation, is beyond me.
sshine
> please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills.
There are two backwards things with this:
1) You can't ask people to not use AI when careful, responsible use is undetectable.
It just isn't a realistic request. You'll have great replies without AI use and great replies with AI use, and you won't be able to tell whether a great reply used AI or not. You will just be able to filter sludge and dyslexia.
2) This is still the "AI is cheating" approach, and I had hoped Anthropic would be thought leaders on responsible AI use:
In life there is no cheating. You're just optimizing for the wrong thing. AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
If AI is making your final product and you're none the wiser, it didn't really help you, it just made you addicted to it.
Teach a man to fish...
mirkodrummer
Can't disagree more. Talent is built and perfected upon thousands of hours of practice; LLMs just make you lazy. One thing people with seniority in the field don't realize, as I guess you are, is that LLMs don't help develop "muscle memory" in young practitioners; they just make them miserable, often caged in an infinite feedback loop of bug fixing or trying to untangle a code mess. They may extract some value by using LLMs for studying, but I doubt it, and it only goes so far. When I started, I remember being able to extract so much knowledge just by reading a book about algorithms, trying to reimplement things, breaking them, and so on. Today I can use an LLM because I'm wise enough to spot wrong answers, but I still feel I'm becoming a bit lazy.
sho_hn
I strongly agree with this comment. Anecdotal evidence time!
I'm an experienced dev (20 years of C++ and plenty of other stuff), and I frequently work with younger students in a mentor role, e.g. I've done Google Summer of Code three times as a mentor, and am also in KDE's own mentorship program.
In 2023/24, when ChatGPT was looming large, I took on a student who was of course attempting to use AI to learn and who was enjoying many of the obvious benefits - availability, tailoring information to his inquiry, etc. So we cut a deal: We'd use the same ChatGPT account and I could keep an eye on his interactions with the system, so I could help him when the AI went off the rails and was steering him in the wrong direction.
He initially made fast progress on the project I was helping him with, and was able to put more working code in place than others in the same phase. But then he hit a plateau really hard soon after, because he was running into bugs and issues he couldn't get solutions from the AI for and he just wasn't able to connect the dots himself.
He'd almost get there, but would sometimes forget to remove random single lines doing the wrong thing, etc. His mental map of the code was poor, because he hadn't written it himself in that oldschool "every line a hard-fought battle" style that really makes you understand why and how something works and how it connects to problems you're solving.
As a result he'd get frustrated and had bouts of absenteeism next, because there wasn't any string of rewards and little victories there but just listless poking in the mud.
To his credit, he eventually realized leaning on ChatGPT was holding him back mentally and he tried to take things slower and go back to API docs and slowly building up his codebase by himself.
apprentice7
It's like when you play World of Warcraft for the first time and you have this character boost to max level and you use it. You didn't go through the leveling phase and you do not understand the mechanics of your character, the behaviour of the mobs, or even how to get to another continent.
You are directly loaded with all the shiny tools and, while it does make it interesting and fun at first, the magic wears off rather quickly.
On the other hand, when you had to fight and learn your way up to level 80, you have this deeper and well-earned understanding of the game that makes for a fantastic experience.
DrNosferatu
> "every line a hard-fought battle" style that really makes you understand why and how something works
Absolutely true. However:
The real value of AI will be to *be aware* when at that local optimum, and then - if unable to find a way forward - at least reliably notify the user that that is indeed the case.
Bottom line, the number of engineering “hard-fought battles” you can actually sustain should be chosen very wisely.
The performance multiplier that LLM agents brought changed the world, at least as much as the consumer web did in the 90s, and there will be no turning back.
This is like a computer company around 1980 hiring engineers but forbidding them access to computers for some numerical task.
This reminds me of the reason Konami MSX1 games look the way they do compared to most of the competition: having access to superior development tools - their HP hardware emulator workstations.
If you are unable to come up with a filter for your applicants that is able to detect your own product, maybe you should evolve. What about asking an AI how to solve this? ;)
Shinchy
'"every line a hard-fought battle" style that really makes you understand why and how something works'
I totally agree with this and I really like that way of wording it.
mkvoid
This is fascinating. The idea of leveling off in the learning curve is one that I hadn't considered before, although with hindsight it seems obvious. Based on your recollection (and without revealing too many personal details), do you recall any specific areas that caused the struggle? For example, was it a lack of understanding of the program architecture? Was it an issue of not understanding data structures? (or whatever) Thanks for your comment, it opened up a new set of questions for me.
Muromec
I have a feeling that "almost getting there" will simply become the norm. I have seen a lot of buggy and almost but not exactly right applications, processes and even laws that people simply have to live with.
If the US can be the world's biggest economy while having an opioid epidemic and writing paper cheques, and if Germany can be Europe's manufacturing hub while using faxes, then sure, we as a society can live in the suboptimal state of everything digital being broken 10% of the time instead of half a percent.
Eisenstein
What about people who don't have access to a mentor? If not AI then what is their option? Is doing tutorials on your own a good way to learn?
rrr_oh_man
> listless poking in the mud
raincole
Use LLM. But do not let it be the sole source of your information for any particular field. I think it's one of the most important disciplines the younger generation - to be honest, all generations - will have to learn.
I have a rule for myself as a non-native English speaker: Any day I ask LLMs to fix my English, I must read 10 pages from traditionally published books (preferably pre-2023). Just to prevent LLM from dominating my language comprehension.
GlacierFox
You perfectly encapsulated my view on this. I'm utterly bewildered with people who take the opposing position that AI is essentially a complete replacement for the human mind and you'd be stupid not to fully embrace it as your thought process.
noufalibrahim
This is a straightforward position and it's the one I hold but I had to reply to thank you for stating it so succinctly.
eknkc
I drove cars before the sat nav systems and when I visited somewhere, I'd learn how to drive to there. The second drive would be from memory. However, as soon as I started relying on sat navs, I became dependent on them. I can not drive to a lot of places that I visited more than once without a sat nav these days (and I'm getting older, that's a part of it too).
I wonder if the same thing will happen with coding and LLMs.
jack_pp
Let me give you an example from yesterday. I was learning Tailwind and had a really long class attribute on a div which I didn't like. I wanted to split it and found a way to do it using my JavaScript framework (the new way to do it was suggested by DeepSeek). When I started writing the list of classes by hand in the new format, Copilot gave me an autocomplete suggestion after I wrote the first class. I pressed tab and it was done.
I showed this to my new colleague, who is a bit older than me and sort of had similar attitudes to yours. He told me he could do the same with some multi-cursor shenanigans, and I'll be honest in that I wasn't interested in his approach. It seems like he would've taken more time to solve the same problem even though he had a superior technique to mine. He said sure, it takes longer, but I need to verify by reading the whole class list and that's a pain - but I just reloaded the page and it was fine. He still wasn't comfortable with me using Copilot.
So yes, it does make me lazier, but you could say the same about using Go instead of C or any higher-level abstraction. These tools will only get better and more correct. It's our job to figure out where it is appropriate to use them and where it isn't. Going to either extreme is where the issue is.
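For what it's worth, a minimal sketch of the kind of split I mean, assuming a clsx-style helper (not necessarily the exact framework feature I used):

    import clsx from "clsx";

    // One long Tailwind class string, split into grouped, readable pieces.
    // clsx simply joins the truthy arguments back into a single string.
    const cardClass = clsx(
      "rounded-lg border border-gray-200", // box
      "p-4 shadow-sm",                     // spacing and elevation
      "hover:shadow-md transition-shadow"  // interaction
    );

    // cardClass === "rounded-lg border border-gray-200 p-4 shadow-sm hover:shadow-md transition-shadow"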
skydhash
I wouldn’t say it’s laziness. The thing is that every line of code is a burden as it’s written once, but will be read and edited many times. You should write the bare amount that makes the project work, then make it readable and then easily editable (for maintenance). There are many books written about the last part as it’s the hardest.
When you take all three into consideration, an LLM won't really matter unless you don't know much about the language or the libraries. When people go on about Vim or Emacs, it's just that it makes the whole thing go faster.
mirkodrummer
Remember though that laziness, as I learned in computing, is kinda "doing something later": you might have pushed the change/fix faster than your senior fellow programmer, but you still need to review and test that change, right? Maybe the change you're talking about was really trivial and you just needed to refresh your browser to see it, but when it's not, being lazy about a change will only make you suffer more when reviewing a PR and testing a non-trivial change that has to work for thousands of customers with different devices.
Narciss
I hear this argument all the time, and I think “this is exactly how people who coded in assembly back in the day thought about those using higher level programming languages.”
It is a paradigm shift, yes. And you will know less about the implementation at times, yes. But will you care when you can deploy things twice, three times, five times as fast as the person not using AI? No. And also, when you want to learn more about a specific bit of the AI written code, you can simply delve deep into it by asking the AI questions.
The AI right now may not be perfect, so yes, you still need to know how to code. But 5 years from now? Chances are you will go into your favorite app builder, state what you want, tweak what you get, and you will get the product that you want, with maybe one dev making sure every once in a while that you're not messing things up - maybe. So will new devs need to know high-level programming languages? Possibly, but maybe not.
mirkodrummer
You seem very strongly opinionated and sure of what the future holds for us, but I must remind you that in your example, "from assembly to higher-level programming languages", the demand for programmers didn't go down, it went up, and as companies were able to develop more, more development and more investments were made, more challenges showed up, new jobs were invented, and so on... You get where I'm going... The thing I'm questioning is how lazy new technologies make you; many programmers even before LLMs had no idea how a computer works and only programmed in higher-level languages, and it was already a disaster, with many people claiming software was bad and the industry going down a road where software quality matters less and less. Well, that situation turbo-boosted by LLMs, because "it doesn't matter, I can deploy 100 times a day", disrupting user experience, imo won't lead us far.
0x1062
AI is a tool, and tool use is not lazy.
mathieuh
I think it's a lot more complicated than that. I think it can be used as a tool for people who already have knowledge and skills, but I do worry how it will affect people growing up with it.
Personally I see it more like going to someone who (claims) to know what they're doing and asking them to do it for me. I might be able to watch them at work and maybe get a very general idea of what they're doing but will I actually learn something? I don't think so.
Now, we may point to the fact that previous generations railed at the degeneration of youth through things like pocket calculators or mobile phones, but I think there is a massive difference between those things and so-called AI. Whereas those things were obligatorily just tools (if you give a calculator to someone who doesn't know any formulae it will be useless to them), I think so-called AI can just jump straight to giving you the answer.
I personally believe that there are necessary steps that must be passed through to really obtain knowledge and I don't think so-called AI takes you through those steps. I think it will result in a generation of people with markedly fewer and shallower skills than the generations that came before.
waste_monk
Tool use is fine, when you have the education and experience to use the tools properly, and to troubleshoot and recover when things go wrong.
The use of AI is not just a labour saving device, it allows the user to bypass thinking and learning. It robs the user of an opportunity to grow. If you don't have the experience to know better it may be able to masquerade as a teacher and a problem solver, but beyond a trivial level relying on it is actively harmful to one's education. At some point the user will encounter a problem that has no existing answer in the AI's training dataset, and come to realise they have no real foundation to rely on.
Code generative AI, as it currently exists, is a poisoned chalice.
eviks
It is if the way to learn is doing it without a tool. Imagine using a robot to lift weights if you want to grow your own muscle mass. "Robot is a tool"
GlacierFox
The point he's making is, we still have to learn to use tools, no? There still has to be some knowledge there, or else you're just sat sifting through all the crap the AI spits out endlessly for the rest of your life. The OP wrote his comment like it's a complete replacement rather than an enhancement.
4ndr3vv
You could similarly consider driving a car as "a tool that helps me get to X quicker". Now tell me cars don't make you lazy.
Fricken
Tools help us to put layers of abstraction between us and our goals. When things become too abstracted we lose sight of what we're really doing or why. Tools allow us to feel smart and productive while acting stupidly, and against our best interests. So we get fascism and catastrophic climate change, stuff like that. Tools create dependencies. We can't imagine life without our tools.
"We shape our tools and our tools in turn shape us" said Marshall McLuhan.
Ekaros
For learning it can very well be. And it also really depends on the tool and the task. A calculator is a fine tool, but a symbolic solver might be a few steps too far if you don't already understand the process, and possibly the start and end points.
The problem with AI is that it is often a black-box tool. And not even a deterministic one.
antihipocrat
Using the wrong tool for the job required isn't lazy but it may be dangerous, inefficient and ultimately more expensive.
jon-wood
> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
At least in theory that’s not what homework is. Homework should be exercises to allow practicing whatever technique you’re trying to learn, because most people learn best by repeatedly doing a thing rather than reading a few chapters of a book. By applying an LLM to the problem you’re just practicing how to use an LLM, which may be useful in its own right, but will turn you into a one trick pony who’s left unable to do anything they can’t use an LLM for.
shipp02
What if you use it to get unstuck from a problem? Then come back and learn more about what you got stuck on.
That seems like responsible use.
ljm
In the context of homework, how likely is someone still in school, who probably considers homework to be an annoying chore, going to do this?
I can't really see an optimistic long-term result from that, similar to giving kids an iPad at a young age to get them out of your hair: shockingly poor literacy, difficulty with problem solving or critical thinking, exacerbating the problems with poor attention span that 'content creators' who target kids capitalise on, etc.
I'm not really a fan of the concept of homework in general but I don't think that swapping brain power with an OpenAI subscription is the way to go there.
croes
But how likely is that?
otabdeveloper4
Um, get with the times, luddite. You can use an LLM for everything, including curing cancer and fixing climate change.
(I still mentally cringe as I remember the posts about Disney and Marvel going out of business because of Stable Diffusion. That certainly didn't age well.)
croes
AI did my gym workout, still no muscles.
arkey
It would be great if all technologies freed us and gave us more time to do useful or constructive stuff instead. But the truth is, and AI is a very good example of this, a lot of these technologies are just making people dumb.
I'm not saying they are essentially bad, or that they are not useful at all, far from that. But it's about the use they are given.
> You can use an LLM for everything, including curing cancer and fixing climate change.
Maybe, yes. But the danger is rather in all the things you no longer feel you have a need to do, like learning a language, or how to properly write, or read.
LLM for everything is like the fast-food of information. Cheap, unhealthy, and sometimes addicting.
danw1979
> AI made your homework? Guess what, the homework is a proxy for your talent, and it didn't make your talent.
Well, no. Homework is an aid to learning and LLM output is a shortcut for doing the thinking and typing yourself.
Copy and pasting some ChatGPT slop into your GCSE CS assignment (as I caught my 14yo doing last night…) isn’t learning (he hadn’t even read it) - it’s just chucking some text that might be passable at the examiner to see if you can get away with it.
Likewise, recruitment is a numbers game for under qualified applicants. Using the same shortcuts to increase the number of jobs you apply for will ultimately “pay off” but you’re only getting a short term advantage. You still haven’t really got the chops to do the job.
llm_trw
Homework is a proxy for your retention of information and a guide to what you should review. That somehow schools started assigning grades to it is as nonsensically barbaric as public bare ass caning was 80 years ago and driven by the same instinct.
jvvw
I agree on the grades part. And I was just thinking that the university that I went to never gave us grades during the year (the only exception I can think of was when we did practice exam papers so we had an idea how we were doing).
I think homework is more than a guide to what you should review though. It's partly so that the teacher can find out what students have learned/understood so they can adapt their teaching appropriately. It's also because using class/contact time to do work that can be done independently isn't always the best use of that time (at least once students are willing and capable of doing that work independently).
dkjaudyeqooe
> careful, responsible use is undetectable
I think that's wishful thinking. You're underestimating how much people can tell about other people with the smallest amount of information. Humans are highly attuned to social interactions, and synthetic responses are more obvious than you think.
DFHippie
I was a TA years ago, before there were LLMs one could use to cheat effectively. The professor and I still detected a lot of cheating. The problem was what to do once you've caught it? If you can't prove that it's cheating -- you can't cite the sources copied from -- is it worth the fight? The professor's solution was just to knock down their grades.
At that time just downgrading them was justifiable, because though they had copied in someone else's text, they often weren't competent to identify the text that was best to copy, and they had to write some of the text themselves to make it appear a coherent whole and they weren't competent to do that. If they had used LLMs we would have been stuck. We would be sure they had cheated but their essay would still be better than that of many/most of their honest peers who had tried to demonstrate relevant skill and knowledge.
I think there is no solution except to stop assigning essays. Writing long form text will be a boutique skill like flint knapping, harvesting wild tubers, and casting bronze swords. (Who knows, the way things are going these skills might be relevant again all too soon.)
djtango
Consider the case where there is a non-native English speaker and they use AI to misrepresent their standard of written English communication.
Assume their command of English is insufficient to get the job ultimately. They've just wasted their own time and the company's time in that situation.
I imagine Anthropic is not short of applicants...
llm_trw
>Hey Claude, translate this to Swahili from English. Ok, now translate my response from Swahili to English. Thanks.
We're close to the point where, using a human -> STT -> LLM -> TTS -> human pipeline, you can do real-time, high-quality, bidirectional spoken translation on a desktop.
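A rough sketch of that pipeline shape - the Anthropic SDK call is real, while transcribeSpeech and playSpeech are hypothetical stubs standing in for whatever local speech-to-text / text-to-speech you'd actually plug in (and nothing here is real-time yet):

    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    // Hypothetical stubs: swap in a real STT/TTS engine of your choice.
    async function transcribeSpeech(audioPath: string): Promise<string> {
      return "Habari, mradi unaendeleaje?"; // pretend we transcribed some Swahili audio
    }
    async function playSpeech(text: string): Promise<void> {
      console.log(`[tts] ${text}`); // pretend we spoke the text aloud
    }

    // One turn of the human -> STT -> LLM -> TTS -> human loop.
    async function translateTurn(audioPath: string, targetLang: string): Promise<void> {
      const heard = await transcribeSpeech(audioPath);
      const reply = await client.messages.create({
        model: "claude-3-5-sonnet-latest",
        max_tokens: 512,
        messages: [
          {
            role: "user",
            content: `Translate the following into ${targetLang}. Reply with the translation only.\n\n${heard}`,
          },
        ],
      });
      const block = reply.content[0];
      if (block.type === "text") {
        await playSpeech(block.text);
      }
    }

    translateTurn("incoming.wav", "English").catch(console.error);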
YurgenJurgensen
Why not just send the Swahili and let them MTL on the other end? At least then they have the original if there’s any ambiguity.
I’ve read multiple LLM job applications, and every single time I’d rather have just read the prompt. It’d be a quarter of the length and contain no less information.
madeofpalk
I've seen applicants use AI to answer questions during the 'behavioral' Q&A-style interviews. Those applicants are 'cheating', and it defeats the whole purpose as we want to understand the candidate and their experience, not what LLMs will regurgitate.
Thankfully it's usually pretty easy to spot this so it's basically an immediate rejection.
Draiken
If the company is doing behavioral Q&A interviews, I hope they're getting as many bad applicants as possible.
Adding a load of pseudo-science to the already horrible process of looking for a job is definitely not what we need.
I'll never submit myself to pseudo-IQ tests and word association questions for a job that will 99.9% of the time ask you to build CRUD applications.
The lengths that companies go to avoid doing a proper job at hiring people (one of the most important jobs they need to do) with automatic screening and these types of interviews is astonishing.
Good on whoever uses AI for that kind of shit. They want bullshit so why not use the best bullshit generators of our time?
You want to talk to me and get a feeling about how I'd behave? That's totally normal and expected. But if you want to get a bunch of written text to then run sentiment analysis on it and get a score on how "good" it is? Screw that.
Applejinx
You could reasonably argue that they're not cheating, indeed they're being very behaviorally revealing and you do understand everything you need to understand about them.
Too bad for them, but works for you…
I'm imagining a hiring workflow, for a role that is not 'specifically use AI for a thing', in which there is no suggestion that you shouldn't use AI systems for any part of the interview. It's just that it's an auto-fail, and if someone doesn't bother to hide it it's 'thanks for your time, bye!'.
And if they work to hide it, you know they're dishonest, also an auto-fail.
neilv
> You can't ask people to not use AI when careful, responsible use is undetectable.
You can't make a rule, if people can cheat undetectably?
Double_a_92
You can, but it would be pointless since it would just filter out some honest people.
oneeyedpigeon
But this is exactly what we already do. Most exams have a "no cheating" rule, even though it's perfectly possible to cheat. The point is to discourage people from doing so, not to make it impossible.
fancyfredbot
If I want to assess a candidate's performance when they can't use AI, then I think I'd sit in a room with them and talk to them.
If I ask people not to use AI on a task where using AI is advantageous and undetectable then I'm going to discriminate against honest people.
mrdevlar
But they don't want to do that.
They want to use AI in their hiring process. They want to be able to offload their work and biases to the machine. They just don't want other people to do it.
There's a reason that the EU AI legislation made AI that is used to hire people one of the focal points for action.
oneeyedpigeon
I think this gets to the core of the issue: interviews should be conducted by people who deeply understand the role and should involve a discussion, not a quiz.
yreg
Is it advantageous? The AI generated responses to this question are prone to be dull.
It might even give the honest people an advantage by giving them a tip to answer on their own.
_heimdall
The irony here is obvious, but what's interesting is that Anthropic is basically asking you not to give them a realistic preview of how you will work.
This feels similar to asking devs to only use vim during a coding challenge and please refrain from using VS Code or another full featured IDE.
If you know your employees use LLMs at work, and even encourage it, you should want to see how well candidates present themselves in that same situation.
ben30
This application requirement really bothered me as someone who's autistic and dyslexic. I think visually, and while I have valid ideas and unique perspectives, I sometimes struggle to convert my visual thoughts into traditional spoken/written language. AI tools are invaluable to me - they help bridge the gap between my visual thinking and the written expression that's expected in professional settings.
LLMs are essentially translation tools. I use them to translate my picture-thinking into words, just like others might use spell-checkers or dictation software. They don't change my ideas or insights - they just help me express them in a neurotypical-friendly format.
The irony here is that Anthropic is developing AI systems supposedly to benefit humanity, yet their application process explicitly excludes people who use AI as an accessibility tool. It's like telling someone they can't use their usual assistive tools during an application process.
When they say they want to evaluate "non-AI-assisted communication skills," they're essentially saying they want to evaluate my ability to communicate without my accessibility tools. For me, AI-assisted communication is actually a more authentic representation of my thoughts. It's not about gaining an unfair advantage - it's about leveling the playing field so my ideas can be understood by others.
This seems particularly short-sighted for a company developing AI systems. Shouldn't they want diverse perspectives, including from neurodivergent individuals who might have unique insights into how AI can genuinely help people think and communicate differently?
oneeyedpigeon
This is an excellent comment and it more-or-less changes my opinion on this issue. I approached it with an "AI bad" mentality which, if truth be told, I'm still going to hold. But you make a very good argument for why AI should be allowed and carefully monitored.
I think it was the spell-checker analogy that really sold me. And this ties in with the whole point that "AI" isn't one thing, it's a huge spectrum. I really don't think there's anything wrong with an interviewee using an editor that highlights their syntax, for example.
Where do you draw the line, though? Maybe you just don't. You conduct the interview and, if practical coding is a part of it, you observe the candidate using AI (or not) and assess them accordingly. If they just behave like a dumb proxy, they don't get the job. Beyond that, judge how dependent they are on AI and how well they can use it as a tool. Not easy, but probably better than just outright banning AI.
jaggederest
I feel very similarly. I'm also an extremely visual thinker who has a job as a programmer, and being able to bounce ideas back and forth between a "gifted intern" and myself is invaluable (in the past I used to use actual interns!)
I regard it as similar to using a text-to-speech tool for a blind person - who cares how they get their work done? I care about the quality of their work and my ability to interact with them, regardless of the method they use to get there.
Another example I would give: imagine there's someone who only works as a pair programmer with their associate. Apart, they are completely useless. Together, they're approximately 150% as productive as any two programmers pairing together. Would you hire them? How much would you pay them as a pair? I submit the right answer is yes, and something north of one full salary split in two. But for bureaucracy I'd love to try it.
gcanyon
Everyone arguing for LLMs as a corrupting crutch needs to explain why this time is different: why the grammar-checkers-are-crutches, don't-use-wikipedia, spell-check-is-a-crutch, etc. etc. people were all wrong, but this time the tool really is somehow unacceptable.
sigmoid10
This is quite a conundrum. These AI companies thrive on the idea that very soon people will not be replaced by AI, but by people who can effectively use AI to be 10x more productive. If AI turns a normal coder into a 10x dev, then why wouldn't you want to see that during an interview? Especially since cheating this whole interview system has become trivial in the past months. It's not the applicants that are the problem, it's the outdated way of doing interviews.
sifex
Because, as someone who's interviewing, I know you can use AI — anyone can. It likely keeps me from judging the pitfalls and the design and architecture decisions that are required in proper engineering roles. Especially for senior and above applications, I want to make an assessment of how you think about problems, which gives the candidate a chance to show their experience, their technical understanding, and their communication skills.
We don't want to work with AI; we are going to pay the person for the person's time, and we want to employ someone who isn't switching off half their cognition when a hard problem approaches.
lukan
No, not everyone can really use AI to deliver something that works.
And ultimately, this is what this is about, right? Delivering working products.
miningape
> No, not everyone can really use AI to deliver something that works
"That works" is doing a lot of heavy lifting here, and really depends more on the technical skills of the person. Because, shocker, AI doesn't magically make you good and isn't good itself.
Anyone can prompt an AI for answers, it takes skill and knowledge to use those answers in something that works. By prompting AI for simple questions you don't train your skill/knowledge to answer the question yourself. Put simply, using AI makes you worse at your job - precisely when you need to be better.
Dylan16807
> not everyone can really use AI to deliver something that works.
That's not the assumption. The assumption is that if you prove you have a firm grip on delivering things that work without using AI, then you can also do it with AI.
And that it's easier to test you when you're working by yourself.
nejsjsjsbsb
Then they shouldn't use libraries, open source code, or even existing compilers. They shouldn't search online (man pages are OK). They should use git plumbing commands and sh (not bash or zsh). They shouldn't have potable water in their house but should distill river water.
porridgeraisin
There is a balance to be struck. You obviously don't expect a SWE to begin by identifying rare earth metal mining spots on his first day.
Where the line is drawn is context-dependent; drawing the same single line for all possible situations is not possible, and it's stupid to do so.
logicchains
>We don’t want to work with AI, we are going to pay the person for the persons time
If your interview problems are representative of the work that you actually do, and an AI can do it as well as a qualified candidate, then that means that eventually you'll be out-competed by a competitor that does want to work with AI, because it's much cheaper to hire an AI. If an AI could do great at your interview problems but still suck at the job, that means your interview questions aren't very good/representative.
reshlo
Interview problems are never representative of the work that software developers do.
dailykoder
Very, very true! Give them a take-home assignment first, and if they have a good result on that, give them an easier task, without AI, in person. Then you will quickly figure out who actually understands their work.
a2128
If the interview consists of the interviewer asking "Write (xyz)", the interviewee opening Copilot and asking "Write (xyz)", and accepting the code, what was the point of the interview? Is the interviewee a genius, productive 10x programmer because by using AI he just spent 1/10 the time to write the code?
Sure, maybe you can say that the tasks should be complex enough that AI can't do it, but AI systems are constantly changing, collecting user prompts and training to improve on them. And sometimes the candidates aren't deep enough in the hiring process yet to justify spending significant time giving a complex task. It's just easier and more effective to just say no AI please
pydry
If an AI can do your test better than a human in 2025 it reflects not much better on your test than if a pocket calculator could do your test better than a human in 1970.
That did happen and the result from the test creators was the same back then: "we're not the problem, the machines are the problem. ban them!"
In the long run it turned out that if you could cheat with a calculator though, it was just a bad test....
I think there is an unwillingness to admit that there is a skill issue here with the test creators, and that if they got better at their job they wouldn't need to ban candidates from using AI.
It's surprising to hear this from anthropic though.
neilv
Kudos to Anthropic. The industry has way too many workers rationalizing cheating with AI right now.
Also, I think that the people who are saying it doesn't matter if they use AI to write their job application might not realize that:
1. Sometimes, application questions actually do have a point.
2. Some people can read a lot into what you say, and how you say it.
jusomg
I do lots of technical interviews in Big Tech, and I would be open to candidates using AI tools in the open. I don't know why most companies ban it. IMO we should embrace them, or at least try to and see how it goes (maybe as a pilot program?).
I believe it won't change the outcomes that much. For example, on coding, an AI can't teach someone to program or reason on the spot, and the purpose of the interview was never just to answer the coding puzzle anyway.
To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc). If I give you a puzzle and you paste the most optimized answer with no reasoning or comment you're not going to pass the interview, no matter if it's done with AI, from memory or with stack overflow.
So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.
gjulianm
> So what are we afraid of? That people are going to copy paste from AI outputs and we won't notice the difference with someone that really knows their stuff inside out? I don't think that's realistic.
Candidates could also have an AI listening to the questions and giving them answers. There are other ways that they could be in the process without copy/pasting blindly.
> To me it's always been about how someone reasons, how someone communicates, people understanding the foundations (data structure theory, how things scale, etc).
Exactly, that's why I feel like saying "AI is not allowed" makes it all clearer. As interviewers we want to see these abilities you have, and if candidates use an AI it's harder to know what's them and what's the AI. It's not that we don't think AI is a useful tool; it's that it reduces the amount of signal we get in an interview, and in any case there's the assumption that the better someone performs, the better they could use AI.
jusomg
You could also learn a lot from what someone is asking an AI assistant.
Someone asking: "solve this problem" vs "what is the difference between array and dict" vs "what is the time complexity of a hashmap add operation", etc.
They give you different nuances on what the candidate knows and how it is approaching the understanding of the problem and its solution.
dist-epoch
It's a new spin on the old leetcode problem - if you are good at leetcode you are not necessarily a good programmer for a company.
jonsolo
The goal of an interview is to assess talent. AI use gets in the way of that. If the goal were only to produce working code, or to write a quality essay, then sure use AI. But arguing that misunderstands the point of the interview process.
Disclaimer: I work at Anthropic but these views are my own.
Daub
Halfway through a recent interview it became very apparent that the candidate was using AI. This was only apparent in the standard 'why are you interested in working here?' questions. Once the questions became more AI-resistant the candidate floundered. There English language skills and there general reasoning declined catastrophically. These questions had originally been introduced to see how good the candidate was at thinking abstractly. Example: 'what is your creative philosophy?'
gameshot911
>There English language skills... declined catastrophically.
Let he who is without sin...
csomar
> what is your creative philosophy?
Seriously?
autonomousErwin
This probably means they are completely unable to differentiate between AI and non-AI; else they would just discard the AI piles of applications.
radu_floricica
I'll be the contrarian and say that I don't find anything wrong with this, and if I were a candidate I'd simply take this as useful information for the application process. They do encourage use of AI, but they're asking nicely to write my own texts for the application - that's a reasonable request, and I'd have nothing against complying.
sshine's reply above is coming from a very conflictual mindset: "Can I still use AI and not be caught? Is it cheating? Does it matter if it's cheating?"
I think that's a bit like lying on your first date. If you're looking to score, sure, it's somewhat unethical but it works. But if you're looking for a long term collaboration, _and_ you expect to be interviewed by several rounds of very smart people, then you're much better off just going along.