Ask HN: From the MIT study, is it smarter to resign than to use forced AI?

14 comments

June 28, 2025

Below is the link to the MIT study for reference.

https://arxiv.org/pdf/2506.08872

AnotherGoodName

Shrug. If your boss tells you to use AI or quit, and you absolutely refuse to use AI, then yes, you should quit.

The end. No anger at you or your boss. It's an incompatibility.

I understand why some companies mandate usage these days, especially for programming. The honest truth is that it does speed up development. The other honest truth is that resistance to change harms productivity at times like this, and the only way around that is for leadership to be very direct on this point.

To use a metaphor: employers don't want weavers who refuse to use the loom. https://en.wikipedia.org/wiki/Luddite

tonfa

> To use a metaphor employers don't want weavers who refuse to make use of the loom

It's funny to reference that, since the Luddite movement was about working conditions, pay, and the quality of goods produced. It wasn't ideological opposition to technology (they didn't destroy machines once acceptable conditions were agreed).

bgwalter

You are a good employee!

> To use a metaphor employers don't want weavers who refuse to make use of the loom.

The loom actually produced something, as opposed to mediocre coders who ingratiate themselves with management by pushing "AI" because they hate all productive people.

The loom also did not steal other people's IP.

AnotherGoodName

I see this attitude a lot: the idea that anyone who finds value in AI must be a mediocre programmer.

I've been staff level at Meta, Google, and many other companies over a career of more than 20 years. When I talk to peers at that level and above, the sentiment is nearly universal: "This saves a lot of time; we need engineers to learn to use this asap." Such decisions don't come from a vacuum. It's literally your most senior engineers advising management that leads to these mandates.

p0w3n3d

The loom, however, does not make shorts when you asked it for a t-shirt. The loom does not confabulate, create four-handed sleeves, and then tell you this is the correct way...

techpineapple

Thinking about how bad early machines were, I bet early looms made a whole bunch of annoying mistakes that required cleanup. I think one of the biggest jobs of the early Industrial Revolution was babysitting machines.

HPsquared

Most technical jobs today are still babysitting machines, in some form.

deepsummer

Let me ask you a related question: if there were a study showing that handwriting is better for your brain than typing, should secretaries have quit when typewriters and computers were introduced?

The thing is, there is no going back. There will be no significant demand for output created by humans when a machine can do it just as well. You can try to find a niche where AI is worse than humans, but that will become increasingly difficult.

So if you want to continue doing things without AI, that's fine. But most likely it will be a hobby, not a job.

m3047

+1 for the article, don't have an answer to your question.

BoorishBears

Past a certain level of seniority, your job increasingly becomes translating imprecise mandates from on high into practical outcomes.

If I heard someone making a blanket mandate for using AI, I'd translate it as: they've heard this "new AI thing" lets people do more work more efficiently, and they want to see that increase in their org.

I'd take that mandate as a chance to explore AI on someone else's dime, but otherwise continue doing my work, using AI only where it benefits me.

If expectations rise to unreasonable levels because of unrealistic hype around AI, that's a separate problem you'll have to deal with.

(It's also not a great look that your leadership wouldn't dig in enough to realize how silly forced AI is, but a charitable reading is that they're trying to force interactions with AI so employees can discover where it works and where it doesn't.)

gebdev

This is an interesting study. I wonder how the LLM option would compare to human-written responses in the same format (but with higher latency), or even to having a physical human in the room. Given the points in the conclusion about teachers being able to detect LLM-inspired work, I wonder whether either of these options might, at times, be a better form of learning due to improved quality.

savorypiano

This paper makes me glad I am not a researcher.

pavel_lishin

I don't think a lot of us here write essays for a living.

brudgers

Smarter is a very poor metric.

And a convenient excuse.