
AI Is Making Developers Dumb


193 comments · March 16, 2025

tracerbulletx

If you want AI to make you less dumb, instead of using it like Stack Overflow, you can go on a road trip and have a deep conversation about a topic or field you want to learn more about. You can have it quiz you, do mock interviews, ask questions, have a chat; it's incredible at that. As long as it's not something where the documentation is less than a year or two old.

dfabulich

We've seen this happen over and over again, when a new leaky layer of abstraction is developed that makes it easier to develop working code without understanding the lower layer.

It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.

Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.

Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?

Wouldn't we all be smarter if we managed memory manually?

Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?

Wouldn't we all be smarter if we were wiring our own transistors?

It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.

(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)

danielmarkbruce

LLMs don't create an abstraction. They generate code. If you are thinking about LLMs as a layer of abstraction, you are going to have all kinds of problems.

sanswork

My C compiler has been generating assembly code for me for 30 years. And people were saying the same thing even earlier about how compilers and HLLs made developers dumb because they couldn't code in asm.

elicksaur

>And people were saying

Source? A quote? Or are we just making up historical strawmen to win arguments against?

lurking_swe

Except we can guarantee (with tests) that the instructions generated by the compiler are bug-free 99% of the time. Pretty big difference there.

danielmarkbruce

Presumably you don't throw out the c code and just check in the assembly.

__loam

C compilers are deterministic. You don't have to inspect the assembly they produce to know that it did the right thing.

haydenlingle

This is a disingenuous critique of what was said.

The point is LLMs may allow developers to write code for problems they may not fully understand at the current level or under the hood.

In a similar way using a high level web framework may allow a developer to work on a problem they don’t fully understand at the current level or under the hood.

There will always be new tools to “make developers faster”, usually at the trade-off of the developer understanding less of what, specifically, they’re instructing the computer to do.

Sometimes it’s valuable to dig in and better understand, but sometimes not. And always responding to new developer tooling (whether LLMs or web frameworks or anything else) by saying it makes developers dumber can be naive.

danielmarkbruce

Nope, it's not disingenuous. It's a genuine critique: it's just a stupid way to think about things. You don't check in a bunch of prompts, make changes to the prompts, run them through a model, and then compile/build the resulting code.

It's simply not the same thing as a high level web framework.

If you have an intern, or a junior engineer - you give them work and check the work. You can give them work that you aren't an expert in, where you don't know all the required pieces in detail, and you won't get out of it the same as doing the work yourself. An intern is not a layer of abstraction. Not all divisions of labor are via layers of abstraction. If you treat them all that way it's dumb and you'll have problems.

Sparkyte

They can also generate documentation for code you've written. So it is very useful, if leveraged correctly, for understanding what the code is doing. Eventually you learn all of the behaviors of that code and are able to write it yourself or improve on it.

I would consider it a tool to teach and learn code if used appropriately. However, LLMs are bullshit if you ask one to write something whole: pieces, yes, but whole codebases... good luck having it maintain consistency and comprehension of what the end goal is. The reason it works great for reading existing code is that the input gives it a context it can refer back to, but because LLMs are just weighted values, they have no way to visualize the final output without significant input.

simonw

Leaky abstractions is a really appropriate term for LLM-assisted coding.

The original "law of leaky abstractions" talked about how the challenge with abstractions is that when they break you now have to develop a mental model of what they were hiding from you in order to fix the problem.

(Absolutely classic Joel Spolsky essay from 22 years ago which still feels relevant today: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... )

Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.

gopher_space

> Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.

I'm finding that if I don't have solid mastery of at least one aspect of generated code, I won't know that I have problems until they touch a domain I understand.


simonw

Leaky abstractions don't imply that abstractions are bad.

Using abstractions to trade power for simplicity is a perfectly fine trade-off... but you have to bear in mind that at some point you'll run into a problem that requires you to break through that abstraction.

I read that essay 22 years ago and ever since then I've always looked out for opportunities to learn little extra details about the abstractions I'm using. It pays off all the time.

skoodge

Not all of those abstractions are equally leaky though. Automatic memory management, for example, is leaky only for a very narrow set of problems; in many situations the abstraction works extremely well. It remains to be seen whether AI can be made to leak so rarely (which does not mean that it's not useful even in its current leaky state).

notTooFarGone

If we just talk in analogies: a cup is also leaky because fluid escapes from it as vapour. That's not the same as a cup with a hole in it.

LLMs currently have tiny holes and we don't know if we can fix them. Established abstractions are more like cups that may leak, but only in certain conditions (when it's hot).

101008

> (My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)

Agree, especially useful when you join a new company and have to navigate a large codebase (or a badly maintained one, which is even worse by several orders of magnitude). I had no luck asking an LLM to fix this or that, but it did mostly OK when I asked how something works and what the code is trying to do (it makes mistakes, but that's fine, I can spot them, which is different from code that I just copy and paste).

elicksaur

I see this presented as one of the major selling points.

…but I haven’t joined a new company since LLMs were a thing. How often is this use case necessary to justify $Ts in investment?

101008

No idea about that; those are amounts of money I can't even conceive of, because I couldn't tell the difference between a T and a B, or even a hundred million (as someone who has never had more than 100k).

It's something I would consider paying for during the first months at a new company (especially with a good salary), but not more than that, to be honest.

happytoexplain

This "it's the same as the past changes" analogy is lazy - everywhere it's reached for, not just AI. It's basically just "something something luddites".

Criticisms of each change are not somehow invalid just because the change is inevitable, like all the changes before it.

notarobot123

When a higher level of abstraction allows programmers to focus on the detail relevant to them they stop needing to know the low level stuff. Some programmers tend not to be a fan of these kinds of changes as we well know.

But do LLMs provide a higher level of abstraction? Is this really one of those transition points in computing history?

If they do, it's a different kind to compilers, third-party APIs or any other form of higher level abstraction we've seen so far. It allows programmers to focus on a different level of detail to some extent but they still need to be able to assemble the "right enough" pieces into a meaningful whole.

Personally, I don't see this as a higher level of abstraction. I can't offload the cognitive load of understanding, just the work of constructing and typing out the solution. I can't fully trust the output and I can't really assemble the input without some knowledge of what I'm putting together.

LLMs might speed up development and lower the bar for developing complex applications but I don't think they raise the problem-solving task to one focused solely on the problem domain. That would be the point where you no longer need to know about the lower layers.

vishalontheline

Last year I learned a new language and framework for the first time in a while. Until I became used to the new way of thinking, the discomfort I felt at each hurdle was both mental and physical! I imagine this is what many senior engineers feel when they first begin using an AI programming assistant, or an even more hands-off AI tool.

Oddly enough, using an AI assistant, despite it guessing incorrectly as often as it did, helped me learn and write code faster!

popularrecluse

"Some people might not enjoy writing their own code. If that’s the case, as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."

I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.

I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then them being what they are, I will take over and finish the job my way. I seem to be producing more product than I ever have.

I've worked with people, and am friends with a few of these types, who think their code and methodologies are sacrosanct, and that if AI moves in there is no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.

SirMaster

This is something that I struggle with for AI programming. I actually like writing the code myself. Like how someone might enjoy knitting or model building or painting or some other "tedious" activity. Using AI to generate my code just takes all the fun out of it for me.

stephantul

This so much. I love coding. I might be the person that still paints stuff by hand long after image generation has made actual paintings superfluous, but it is what it is.

simonw

One analogy that works for me is to consider mural painting. Artists who create huge building-size murals are responsible for the design of the painting itself, but usually work with a team of artists to get up on the ladders and help apply the image to the building.

The way I use LLMs feels like that to me: I'm designing the software to quite a fine level, then having the LLMs help out with some of the typing of the code: https://simonwillison.net/2025/Mar/11/using-llms-for-code/#t...

deadbabe

I don’t enjoy writing unit tests, but fortunately this is one task LLMs seem to be very good at, and it isn’t high stakes: they can exhaustively create test cases for all kinds of conditions and torture-test your code without mercy. This is the only true improvement LLMs have made to my enjoyment.
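The sort of exhaustive, parametrized cases meant here might look roughly like this (a hedged sketch in pytest; parse_price and its module are hypothetical, as is the assumption that it raises ValueError on bad input):

  import pytest

  from pricing import parse_price  # hypothetical module/function under test

  @pytest.mark.parametrize(
      "raw, expected",
      [
          ("19.99", 19.99),      # happy path
          ("  19.99 ", 19.99),   # surrounding whitespace
          ("0", 0.0),            # zero
          ("-5.00", -5.0),       # negative value
          ("1e3", 1000.0),       # scientific notation
      ],
  )
  def test_parse_price_valid(raw, expected):
      assert parse_price(raw) == pytest.approx(expected)

  @pytest.mark.parametrize("raw", ["", "abc", None, "1,000.00", "NaN"])
  def test_parse_price_invalid(raw):
      with pytest.raises(ValueError):
          parse_price(raw)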

layer8

Saying that writing unit tests isn’t high stakes is a dubious statement. The very purpose of unit tests is to catch programming errors that may very well be high stakes.

rileymichael

Except they are not good at it. The unit tests you'll end up with will be filled with (slow) mocks and tautological assertions, create no reusable test fixtures, etc.

tyre

> I like to build things, the faster the better.

What's the largest (traffic, revenue) product you've built? Quantity >>>> quality of code is a great trade-off for hacking things together, but doesn't lend itself to maintainable systems, in my experience.

Have you seen it work over the long term?

jaggirs

I suppose that's where the use case for LLMs starts to diminish rapidly.

switchbak

To be fair, this person wasn’t claiming they’re making a trade-off on quality, just that they prefer to build things quickly: if an AI let you keep quality constant and deliver faster, for example.

I don’t think that’s what LLMs offer, mind you (right now anyway), and I often find the trade offs to not be worth it in retrospect, but it’s hard to know which bucket you’re in ahead of time.

danielmarkbruce

Sure, but the vast majority of the time in greenfield application situations, it's entirely unclear whether what is being built is useful, even when people think otherwise. So the question of "maintainable" or not is frequently not the right consideration.

yubblegum

> I've tolerated writing my own code for decades.

The only reason I got sucked into this field was because I enjoyed writing code. What I "tolerated" (professionally) was having to work on other people's code. And LLM code is other people's code.

Philip-J-Fry

I've accepted this way of working too. There is some code that I enjoy writing. But what I've found is that I actually enjoy just seeing the thing in my head actually work in the real world. For me, the fun part was finding the right abstractions and putting all these building blocks together.

My general way of working now is, I'll write some of the code in the style I like. I won't trust an LLM to come up with the right design, so I still trust my knowledge and experience to come up with a design which is maintainable and scalable. But I might just stub out the detail. I'm focusing mostly on the higher-level stuff.

Once I've designed the software at a high level, I can point the LLM at this using specific files as context. Maybe some of them have the data structures describing the business logic and a few stubbed out implementations. Then Claude usually does an excellent job at just filling in the blanks.
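A minimal sketch of what "stubbed out implementations" can look like in this workflow (Python used purely for illustration; every name here is hypothetical):

  from dataclasses import dataclass

  @dataclass
  class Invoice:
      id: str
      customer_id: str
      amount_cents: int
      paid: bool

  class InvoiceRepository:
      """Hand-designed data-access interface; the bodies are left for the LLM to fill in."""

      def get(self, invoice_id: str) -> Invoice:
          raise NotImplementedError  # LLM fills in the query against the real store

      def list_unpaid(self, customer_id: str) -> list[Invoice]:
          raise NotImplementedError  # LLM fills in filtering/pagination

      def mark_paid(self, invoice_id: str) -> None:
          raise NotImplementedError  # LLM fills in the update and error handling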

I've still got to sanity check it. And I still find it doing things which look like they came right from a junior developer. But I can suggest a better way and it usually gets it right the second or third time. I find it a really productive way of programming.

I don't want to be writing the data layer of my application. It's not fun for me. LLMs handle that for me and let me focus on what makes my job interesting.

The other thing I've kinda accepted is to just use it or get left behind. You WILL get people who use this and become really productive. It's a tool which enables you to do more. So at some point you've got to suck it up. I just see it as a really impressive code generation tool. It won't replace me, but not using it might.

ryanackley

I don't think the author is saying it's a dichotomy. Like, you're either a disciple of doing things "ye olde way" or allowing the LLM to do it for you.

I find his point to be that there is still a lot of value in understanding what is actually going on.

Our business is one of details and I don't think you can code strictly having an LLM doing everything. It does weird and wrong stuff sometimes. It's still necessary to understand the code.

moogly

I like coding on private projects at home; that is fun and creative. The coding I get to do at work, in between waiting for CI, scouring logs, monitoring APM dashboards and reviewing PRs, in a style and at an abstraction level I find inappropriate, is not interesting at all. A type of change that might take 10 minutes at home might take 2 days at work.

senordevnyc

I resonate so strongly with this. I’ve been a professional software engineer for almost twenty years now. I’ve worked on everything from my own solo indie hacker startups to now getting paid a half million per year to sling code for a tech company worth tens of billions. I enjoy writing code sometimes, but mostly I just want to build things. I’m having great fun using all these AI tools to build things faster than ever. They’re not perfect, and if you consider yourself to be a software engineer first, then I can understand how they’d be frustrating.

But I’m not a software engineer first, I’m a builder first. For me, using these tools to build things is much better than not using them, and that’s enough.

MarcelOlsz

I've had a similar experience. I built out a feature using an LLM and then found the library it must have been "taking" the code from, so what I ended up with was a much worse, mangled version of what already existed and could have been found had I taken the time to properly research. I've now fully gone back to just getting it to prototype functions for me in-editor based off comments, and I do the rest. Setting up AI pipelines with rule files and stuff takes all the fun away and feels like extremely daunting work I can't bring myself to do. I would much rather just code than act as a PM for a junior that will mess up constantly.

When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.

switchbak

You’re exactly right on the rage part, and that’s not something I’ve seen discussed enough.

Maybe it’s the fact that you know you could do it better in less time that drives the frustration. For a junior dev, perhaps that frustration is worth it because there’s a perception that the AI is still more likely to be saving them time?

I’m only tolerating this because of the potential for long term improvement. If it just stayed like it is now, I wouldn’t touch it again. Or I’d find something else to do with my time, because it turns an enjoyable profession into a stressful agonizing experience.

rvense

Is it just me or has this been a year or two off for at least a year or two now?

senordevnyc

It’s exponentially better for me to use AI for coding than it was two years ago. GPT-4 launched two years and two days ago. Claude 3.5 sonnet was still fifteen months away. There were no reasoning models. Costs were an order of magnitude or two higher. Cursor and Windsurf hadn’t been released.

The last two years have brought staggering progress.

jll29

LLMs also take away the motivation from students to properly concentrate and deeply understand a technical problem (including but not limited to coding problems); instead, they copy, paste and move on without understanding. The electronic calculator analogy might be appropriate: it's a tool appropriate once you have learned how to do the calculations by hand.

In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is that they did not acquire any knowledge along the way, as exit interviews clearly demonstrated.

As a friend commented, "these language models should never have been made available to the general public", only to researchers.

simonw

> As a friend commented, "these language models should never have been made available to the general public", only to researchers.

That feels to me like a dystopian timeline that we've only very narrowly avoided.

It wouldn't just have been researchers: it would have been researchers and the wealthy.

I'm so relieved that most human beings with access to an internet-connected device have the ability to try this stuff and work to understand what it can and cannot do themselves.

tokinonagare

I'm giving a programming class and students use LLMs all the time. I see it as a big problem because:

- it puts the focus on syntax instead of the big picture: instead of finding articles or Stack Overflow posts explaining things beyond how to write them, AI gives them the "how" so they don't think about the "why"

- students almost never ask questions anymore. Why would they, when an AI gives them code?

- AI output contains notions, syntax and APIs not seen in class, adding to the confusion

Even the best students have a difficult time answering basic questions about what was covered in the last (3-hour) class.

fullstackwife

The job market will verify those students, but the outcome may be disheartening for you, because those guys may actually succeed one way or another. Think punched cards: they are gone, along with the mindset of "need to implement it correctly on the first try".

8note

Students pay for education such that at the end, they know something. If the job market filters them out because they suck, the school did a bad job teaching.

The teachers still need to figure out how to teach with LLMs around.


thierrydamiba

What do you think is the big difference between these tools and calculators?

remexre

Imagine a calculator that computes definite integrals, but gives nonsensical results on non-smooth functions for whatever reason (i.e., not an error, but an incorrect yet otherwise well-formed answer).

If there were a large number of people who didn't quite understand what it meant for a function to be continuous, let alone smooth, who were using such a calculator, I think you'd see similar issues to the ones that are identified with LLM usage: a large number of students wouldn't learn how to compute definite or indefinite integrals, and likely wouldn't have an intuitive understanding of smoothness or continuity either.
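A quick worked instance of that analogy, using a fixed-step Simpson's rule as the hypothetical "calculator" (illustrative only):

  # A "calculator" that quietly returns a well-formed but wrong definite
  # integral when the integrand is not smooth.
  def simpson(f, a, b, n=4):
      """Composite Simpson's rule with a fixed, even number of subintervals."""
      h = (b - a) / n
      total = f(a) + f(b)
      for i in range(1, n):
          total += (4 if i % 2 else 2) * f(a + i * h)
      return total * h / 3

  step = lambda x: 1.0 if x >= 0 else 0.0  # discontinuous at 0; true integral over [-1, 1] is 1

  print(simpson(lambda x: x * x, -1, 1))   # 0.666...: correct for a smooth integrand (2/3)
  print(simpson(step, -1, 1))              # ~1.167: confidently wrong, and no error is raised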

I think we don't see these problems with calculators because the "entry-level" ones don't have support for calculus-related functionality, and because people aren't taught how to arrange the problems that you need calculus to solve until after they've been given some amount of calculus-related intuition. These conditions obviously aren't the case for LLMs.

simonw

I think we don't see these problems with calculators because we have figured out how to teach people how to use them.

We are still very early in the process of figuring out how to teach people to use LLMs.

adverbly

Calculators do not accept ambiguous instructions and they work 100% of the time.

andrehacker

>> Calculators do not accept ambiguous instructions and they work 100% of the time.

That is stated with a lot of confidence :)

https://news.ycombinator.com/item?id=43066953 https://apcentral.collegeboard.org/courses/resources/example... https://matheducators.stackexchange.com/questions/27702/what...

rusk

If you divide by 0 you’ll get an “E” - an LLM will just make something up.

exceptione

I will bite. The correct question would be:

  What do you think is the big difference between these tools and *outsourcing*?

AI is far more comparable to delegating work to *people*. Calculators and compilers are deterministic. Using them doesn't change the nature of your work.

AI, depending on how you use it, gives you a different role. So take that as a clue: if you are less interested in building things and more interested into getting results, maybe a product management role would be a better fit.

snickerbockers

Fundamentally nothing, but everybody already knows that you shouldn't teach young kids to rely on calculators during the basic "four-function" stage of their mathematics education.

Calculators for the most part don't solve novel problems. They automate repetitive basic operations which are well-defined and have very few special cases. Your calculator isn't going to do your algebra for you, it's going to give you more time to focus on the algebraic principles instead of material you should have retained from elementary school. Algebra and calculus classes are primarily concerned with symbolic manipulation, once the problem is solved symbolically coming to a numerical answer is time-consuming and uninteresting.

Of course, if you have access to the calculator throughout elementary school then you're never going to learn the basics, and that's why schoolchildren don't get to use calculators until the tail-end of middle school. At least that's how it worked in the early 2000s when I was a kid; from what I understand kids today get to use their phones and even laptops in class, so maybe I'm wrong here.

Previously I stated that calculators are allowed in later stages of education because they only automate the more basic tasks; Matlab can arguably be considered a calculator which does automate complicated tasks, and even when I was growing up the higher-end TI-89 series was available, which actually could solve algebra and even simple forms of calculus problems symbolically; we weren't allowed access to these when I was in high school because we wouldn't learn the material if there was a computer to do it for us.

So anyways, my point (which is halfway an agreement with the OP and halfway an agreement with you) is that AI and calculators are fundamentally the same. It needs to be a tool to enhance productivity, not a crutch to compensate for your own inadequacies[1]. This is already well-understood in the case of calculators, and it needs to be well-understood in the case of AI.

[1] Actually, now that I think of it, there is an interesting possibility of AI being able to give mentally-impaired people an opportunity to do jobs they might never be capable of unassisted, but anybody who doesn't have a significant intellectual disability needs to be wary of over-dependence on machines.


tyre

Calculators either get you through math you won't use in the real world or can aid in calculating when you know the right formula already.

Calculators don't pretend to think or solve a class of problems. They are pure execution. The comparison in tech is probably compilers, not code.

SpicyLemonZest

There's a reason we don't let kids use calculators to learn their times tables. In order to be effective at more advanced mathematics, you need to develop a deep intuition for what 9 * 7 means, not just what buttons you need to push to get the calculator to spit out 63.


butterlettuce

I wish I had an LLM as a student because I couldn’t afford a tutor and googling for information was tedious.

It’s the college’s responsibility now to teach students how to harness the power of LLMs effectively. They can’t keep their heads in the sand forever.

tpmoney

I had this realization a couple weeks ago that AI and LLMs are the 2025 equivalent of what Wikipedia was in 2002. Everyone is worried about how all the kids are going to just use the “easy button” and get nonsense that’s unchecked and probably wrong, and a whole generation of kids are going to grow up not knowing how to research, and trusting unverified sources.

And then eventually, overall, we learned what the limits of Wikipedia are. We know that it’s generally a pretty good resource for high-level information and that it’s more accurate for some things than for others. It’s still definitely a problem that Wikipedia can confidently publish unverified information (IIRC wasn’t the Scots-language Wikipedia famously hilariously wrong and mostly written by an editor with no experience with the language?)

And yet, I think if these days people were publishing think pieces about how Wikipedia is ruining the ability of students to learn, or advocating that people shouldn’t ever use Wikipedia to learn something, we’d largely consider them crackpots, or at the very least out of touch.

I think AI tools are going to follow the same trajectory. Eventually we’ll gain enough cultural knowledge of their strengths and weaknesses to apply them properly and in the end they’ll be another valuable asset in our ever growing lists of tools.

layer8

It’s not the same because you can’t ask Wikipedia to do your homework or programming task without even reading the result.


currymj

It's particularly bad for students, who should be trying to learn.

At the same time, in my own life, there are tasks that I don't want to do, and certainly don't want to learn anything about, yet have to do.

For example, figuring out a weird edge case combination of flags for a badly designed LaTeX library that I will only ever have to use once. I could try to read the documentation and understand it, but this would take a long time. And, even if it would take no time at all, I literally would prefer not to have this knowledge wasting neurons in my brain.

maratc

A personal anecdote from my previous place:

A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was longer than any awk that I have ever written.

When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"

Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
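(For reference, the underlying task is small enough that it doesn't need awk at all. A hedged sketch of one way to do it, assuming a standard local git checkout and an arbitrary 90-day cutoff:)

  import subprocess
  import time

  STALE_DAYS = 90
  cutoff = time.time() - STALE_DAYS * 24 * 3600

  # One line per local branch: "<name> <unix timestamp of last commit>"
  out = subprocess.run(
      ["git", "for-each-ref", "--format=%(refname:short) %(committerdate:unix)", "refs/heads/"],
      capture_output=True, text=True, check=True,
  ).stdout

  for line in out.splitlines():
      branch, ts = line.rsplit(" ", 1)
      if int(ts) < cutoff:
          print(branch)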

simonw

I wonder if it would work to introduce a company policy that says you should never commit code if you aren't able to explain how it works?

I've been using that as my own personal policy for AI-assisted code and I am finding it works well for me, but would it work as a company policy thing?

maratc

I assume that would be seen as creating unnecessary burden, provided that the script works and does what's required. Is it better than the code written by people who have departed, and now no one can explain how it works?

The developer in question has been later promoted to a team lead, and (among other things) this explains why it's "my previous place" :)

patrickmay

This should be (the major) part of the code review.

luckylion

One of the advantages of working with people who are not native English speakers is that, if their English suddenly becomes perfect and they can write concise technical explanations in tasks, you know it's some LLM.

Then if you ask for some detail on a call, it's all uhm, ehm, ehhh, "I will send example later".


jtwaleson

Plato, in the Phaedrus, 370BC: "They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks."

tarkin2

And our memory has declined. He was right.

taurath

Has it? Or do we instead have vast, overfilled palaces of the sum of human knowledge, often stored as pointers, with our limited working memory kept readily available for things recently accessed?

I'd argue that our ability to recall individual moments has gone down, but the sum of what we functionally know has gone up massively.

jtwaleson

For sure. But through writing we've been able to learn more, and through AI we'll produce more.

tarkin2

With a diminished ability to store, recall and thus manipulate information, our learning is arguably more shallow.

With AI trained on increasingly generic input and used casually, our production will increase in quantity but decrease in quality.

I am not arguing to abandon the written word or LLMs.

But the disadvantages--which will be overlooked by the young and by those happy to have a time-saving tool, namely the majority--will do harm, harm that most will overlook, favouring the output and ignoring the atrophying user.

mindcrime

That's not really the question though. The question is, would we be better off if we didn't have "[the] means of external marks"?

I'm sure somebody out there would argue that the answer is yes, but personally I have my doubts.

tarkin2

I think the question is whether Plato's fears were unfounded. I don't think the question is "is writing bad", although it is framed as that to justify a carefree adoption of LLMs in daily life.

nextts

Good job he wrote that down.

igor_varga

Good one

layer8

Plato deliberately did not put some of his teachings into writing and only taught them orally, because he found written text unfit for the purpose.

https://en.wikipedia.org/wiki/Plato%27s_unwritten_doctrines


MrMcCall

I mean, he wasn't wrong, but he couldn't foresee the positives that would come of the new tech.

But sometimes the new tech is a hot x-ray foot measuring machine.

zusammen

I may be old-fashioned but I remember a time when silent failure was considered to be one of the worst things a system can do.

LLMs are silent failure machines. They are useful in their place, but when I hear about bosses replacing human labor with “AI” I am fairly confident they are going to get what they deserve: catastrophe.

mahoro

> There is a concept called “Copilot Lag”. It refers to a state where after each action, an engineer pauses, waiting for something to prompt them what to do next.

I've been experiencing this for 10-15 years. I type something and then wait for the IDE to complete function names, class methods, etc. From this perspective, LLMs won't hurt too much because I'm already dumb enough.

snickerbockers

It's really interesting how minor changes in your workflow can completely wreck productivity. When I'm at work I spend at least 90% of my time in emacs, but there are some programs I'm forced to use that are only available via Win32 GUI apps, or cursed webapps. Being forced to abandon my keybinds and move the mouse around hunting for buttons to click and then moving my hand from the mouse to the keyboard then back to the mouse really fucks me up. My coworkers all use MSVC and they don't seem to mind it all because they're used to moving the mouse around all the time; conversely a few of them actually seem to hate command-driven programs the same way I hate GUI-driven programs.

As I get older, it feels like every time I have to use a GUI I get stuck in a sort of daze, because my mind has become optimized for the specific work I usually do at the expense of the work I usually don't do. I feel like I'm smarter and faster than I've ever been at any prior point in my life, but only for a limited class of work, and anything outside of that turns me into a senile old man. This often manifests in me getting distracted by youtube, windows solitaire, etc. because it's almost painful to try to remember how to move the mouse around through all these stupid menus with a million poorly-documented buttons that all have misleading labels.

mahoro

I feel your pain. I have my own struggles with switching tasks, and what helps to some degree is understanding that that kind of switching and adapting is a skill which can be trained by doing exactly this. At least I feel less like a victim and more like a person who improves himself :)

But it appears I'm in a better position, because I don't have to work with clearly stupid GUIs and have no strong emotions about them.

SoftTalker

This is the reason I don’t use auto completing IDEs. Pretty much vanilla emacs. I do often use syntax highlighting for the language, but that’s the limit of the crutches I want to use.

wilburTheDog

An LLM is a tool. It's your choice how you use it. I think there are at least two ways to use it that are helpful but don't replace your thinking. I sometimes have a problem I don't know how to solve that's too complex to ask google. I can write a paragraph in ChatGPT and it will "understand" what I'm asking and usually give me useful suggestions. Also I sometimes use it to do tedious and repetitive work I just don't want to do.

I don't generally ask it to write my code for me because that's the fun part of the job.

Bukhmanizer

I think the issue is that a lot of orgs are systematically using the tool poorly.

I’m responsible for a couple legacy projects with medium sized codebases, and my experience with any kind of maintenance activities has been terrible. New code is great, but asking for fixes, refactoring, or understanding the code base has had an essentially 2% success rate for me.

Then you have to wonder: how the hell do orgs expect to maintain and scale even more code with fewer devs, who don’t even understand how the original code worked?

LLMs are just a tool but overreliance on them is just as much of a code smell as - say - deciding your entire backend is going to be in Matlab; or all your variables are going to be global variables - you can do it, but I guarantee that it’s going to cause issues 2-3 years down the line.

simonw

"understanding the code base has had an essentially 2% success rate for me"

How have you been using LLMs to help understand existing code?

I have been finding that to work extremely well, but mainly with the longer context models like Google Gemini.

jazzcomputer

I'm learning JavaScript as my first programming language and I'm somewhere around beginner/intermediate. I used ChatGPT for a while, but stopped after a time and just mostly use documentation now. I don't want code solutions, I want code learning, and I want certainty behind that learning.

I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.

I've heard people say I'll get left behind if I don't use AI, and that's fine; I'll just use niche applications of code alongside my regular work, as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.

moribvndvs

I am at the point of abandoning coding copilots because I spend most of my time fighting the god damned things. Surely, some of this is on me, not tweaking settings or finding the right workflow to get the most out of them. Some of it is problematic UX/implementation in VSCode or Cursor. But the remaining portion is an assortment of quirks that require me to hover over it like an overattentive parent trying to keep a toddler from constantly sticking its fingers in electrical sockets. All that, plus the comparatively sluggish and inconsistent responsiveness, is fucking exhausting, and I feel like I get _less_ done in copilot-heavy sessions. Up to a point they will improve over time, but right now it makes programming less enjoyable for me.

On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects, available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It’s mostly a decent rubber duck.

That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.

I’m fine with having it as another tool in the box, but I rather do the work myself and collaborate with actual people.