What the Hell Is Going On?
80 comments · August 22, 2025
ascendantlogic
> Here’s the thing - we want to help. We want to build good things. Things that work well, that make people’s lives easier. We want to teach people how to do software engineering!
This is not what companies want. Companies want "value" that customers will pay for as quickly and cheaply as possible. As entities they don't care about craftsmanship or anything like that. Just deliver the value quickly and cheaply. It's this fundamental mismatch between what engineers want to do (build elegant, well-functioning tools) and what businesses want to do (the bare minimum to get someone to give them as much money as possible) that is driving this sort of pulling-our-hair-out sentiment on the engineering side.
Lauris100
“The only way to go fast, is to go well.” Robert C. Martin
Maybe spaghetti code delivers value as quickly as possible in the short term, but there is a risk that it will catch up in the long term - hard to add features, slow iterations - ultimately losing customers, revenue and growth.
gjsman-1000
Or, you can be like many modern CTOs: bet that AI will likely get better and eventually be capable of mostly cleaning up the mess it makes today. In which case, YOLO - either your startup dies, or AI is sufficiently advanced by the time it succeeds.
armada651
While this is true, the push-pull between sales and engineering resulted in software that was built well enough to last without being over-engineered. However, if both sales and the engineers start chasing quick short-term gains over long-term viability, that'll result in a new wave of shitty, low-quality software being released.
AI isn't good enough yet to generate the same quality of software as human engineers. But since AI is cheaper, we'll gladly lower the quality bar so long as the user is still willing to put up with it. Soon all our digital products will be cheap AI slop that's barely fit for purpose; it's a future I dread.
gjsman-1000
Right; I discovered at the new company I joined that they want velocity more than anything. The sloppy code, the risk of mistakes - it's all priced into the risk assessment of not gaining ground first. So… I'm shooting out AI-written code left and right, and that's what they want. My performance? Excellent. Will it be a problem in the future? Well, either the startup fails, or AI might be able to rewrite it in the future.
It’s not what I want… but at the same time, how many of our jobs do what we want? I could easily end up being the garbage man. I’m doing what I’m paid to do and I’m paid well to do it.
aeon_ai
AI is a change management problem.
Using it well requires a competent team, working together with trust and transparency, to build processes that are designed to effectively balance human guidance/expertise with what LLMs are good at. Small teams are doing very big things with it.
Most organizations, especially large organizations, are so far away from a healthy culture that AI is amplifying the impact of that toxicity.
Executives who interpret "Story Points" as "how much time is that going to take" are asking why everything isn't half a point now. They're so far removed from the process of building maintainable and effective software that they're simply looking for AI to serve as a simple pass through to the bottom line.
The recent study showing that 95% of AI pilots failed to deliver ROI is a case study in the ineffectiveness of modern management to actually do their jobs.
dingdingdang
This, so many times over: using/introducing AI in an already managerially dysfunctional organisation is like giving automatic weapons to a band of Vikings - it will with utmost certitude result in a quickening of their demise.
A demise that, in the case of a modern dysfunctional organisation, would otherwise often arrive a few years later as a result of complete and utter bureaucratic failure.
My experience is that all attempts to elevate technology to a "pivotal force" for the worse always miss the underlying social and moral failure of the majority (or of a small, but important, managerial minority) to act for the common good rather than egotistic self-interest.
grey-area
Or maybe it's just not as good as it's been sold to be. I haven't seen any small teams doing very big things with it, which ones are you thinking of?
datadrivenangel
I've seen small teams of a few people write non-trivial software services with AI that are useful enough to get users and potentially viable as a business.
We'll see how well they scale.
sim7c00
You are not wrong. The only 'sane' approach I've seen with vibe coding is making a PoC to see if some concept works, then rewriting it entirely to make sure it's sound.
Besides just weird or broken code, anything exposed to user input is usually severely lacking sanity checks etc.
LLMs are not useless for coding, but IMHO letting LLMs do the coding will not yield production-grade code.
A4ET8a8uTh0_v2
The PoC approach seems to work for me lately. It still takes effort to convince my manager that it makes sense to devote time to polishing it afterwards, but some of the initial reticence is mitigated.
edit: Not a programmer. Just a guy who needs some stuff done for some of the things I need to work on.
bbarnett
Koko the gorilla understood language, but most others of her ilk simply make signs because a thing will happen.
Move hand this way and a human will give a banana.
LLMs have no understanding at all of the underlying language; they've just seen, a billion times, that a task looks like such and such and so has these tokens after it.
davedx
I saw that study, and it was indeed about pilots. When do you ever expect a pilot to immediately start driving big revenue increases? The whole thing is a strawman.
popcorncowboy
This ends in Idiocracy. The graybeards will phase out, the juniors will become staff level, except... software will just be "more difficult". No one really understands how it works - how could they? More importantly, WHY should they? The Machine does the code. If The Machine gets it wrong, it's not my fault.
The TRUE takeaway here is that as of about 12 months ago, spending time investing in becoming a god-mode dev is not the optimal path for the next phase of whatever we're moving into.
fidotron
About 15 years ago I was introduced to an environment where approximately a hundred developers spent their lives coaxing a classic style expert system ( https://en.wikipedia.org/wiki/Expert_system ) into controlling a build process to adjust the output for thousands of different output targets. I famously described the whole process as "brain damaging", demonstrated why [1], and got promoted for it.
People that spend their lives trying to get the LLMs to actually write the code will find it initially exhilarating, but in the long run they will hate it, learn nothing, and end up doing something stupid like outputting thousands of different output targets when you only need about 30.
If you use them wisely though they really can act as multipliers. People persist in the madness because of the management dream of making all the humans replaceable.
[1] All that had happened was the devs had learned how to recognize very simple patterns in the esoteric error messages and how to correct them. It was nearly trivial to write a program that outperformed them at this.
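To make that concrete, something along these lines is about all it takes - a toy sketch, with invented error patterns and canned fixes standing in for the build-system-specific ones the devs had memorized:

```python
import re

# Invented examples of "esoteric error -> known correction" pairs;
# the real ones were specific to that build system.
KNOWN_FIXES = [
    (re.compile(r"undefined symbol '(\w+)' in target (\S+)"),
     lambda m: f"add '{m.group(1)}' to the export list of {m.group(2)}"),
    (re.compile(r"rule conflict: (\S+) overrides (\S+)"),
     lambda m: f"remove the redundant rule {m.group(2)}"),
]

def suggest_fix(error_line: str):
    """Return the canned correction for a recognised error line, else None."""
    for pattern, fix in KNOWN_FIXES:
        match = pattern.search(error_line)
        if match:
            return fix(match)
    return None

if __name__ == "__main__":
    print(suggest_fix("undefined symbol 'init_gfx' in target libgfx.so"))
```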
RyanOD
For me, AI is just a tool. I'm not a high-level developer, but when I'm coding a personal project and I'm stuck, I present my ideas to AI and ask it for feedback. Then, I take that feedback and move forward. What I do NOT do is ask AI to write code for me. Again, these are my own projects, so I can develop them any way I like.
Having AI write code for me (other than maybe simple boilerplate stuff) goes entirely against why I write code in the first place, which is the joy of problem solving, building things, and learning.
Edit: Typo
grey-area
The heart of the article is this conclusion, which I think is correct from first-hand experience with these tools and teams trying to use them:
> So what good are these tools? Do they have any value whatsoever?
> Objectively, it would seem the answer is no.
mexicocitinluez
I need you to tell me how, when I just fed Claude a 40-page Medicare form and asked it to translate it into a print-friendly CSS version that uses Cottle for templating, that was "objectively" of no value to me?
What about 20 minutes ago, when I threw a 20-line TypeScript error in and it explained it in English to me? What definition of "objective" would that fall under?
Or get this, I'm building off of an existing state machine library and asked it to find any potential performance issues and guess what? It actually did. What universe do you live in where that doesn't have objective value?
Am I going to need to just start sharing my Claude chat history to prove to people who live under a rock that a super-advanced pattern matcher that can compose results can be useful???
Go ahead, ask it to write some regex and then tell me how "objectively" useless it is?
dlachausse
AI tools absolutely can deliver value for certain users and use cases. The problem is that they’re not magic, they’re a tool and they have certain capabilities and limitations. A screwdriver isn’t a bad tool just because it sucks at opening beer bottles.
ptx
So what use cases are those?
It seems to me that the limitations of this particular tool make it suitable only in cases where it doesn't matter if the result is wrong and dangerous as long as it's convincing. This seems to be exclusively various forms of forgery and fraud, e.g. spam, phishing, cheating on homework, falsifying research data, lying about current events, etc.
barbazoo
Extracting structured data from unstructured text at runtime. Some models are really good at that and it’s immensely useful for many businesses.
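A minimal sketch of that pattern, assuming the official OpenAI Python SDK; the model name and the invoice fields here are illustrative, not a recommendation:

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_invoice(text: str) -> dict:
    """Ask the model to pull a few named fields out of free text as strict JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract vendor, invoice_number, total and currency "
                        "from the user's text. Reply with a JSON object only."},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(extract_invoice("Invoice #4821 from Acme GmbH, total due EUR 1,240.00"))
```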
dlachausse
I personally use it as a starting point for research and for summarizing very long articles.
I'm a mostly self-taught hobbyist programmer, so take this with a grain of salt, but it's also been great for giving me a small snippet of code to use as a starting point for my projects. I wouldn't just check whatever it generates directly into version control without testing it and figuring out how it works first. It's not a replacement for my coding skills, but an augmentation of them.
potsandpans
I don't think you understand what the word "objectively" means.
stanrivers
I'm scared of what happens ten years from now when none of the junior folks ever learned to write code themselves and now think they are senior engineers…
bmurphy1976
This trend started long before AI. Everybody needs 10+ years experience to get a job anywhere. As an industry we've been terrible at up-leveling the younger generations.
I've been fighting this battle for years in my org and every time we start to make progress we go through yet another crisis and have to let some of our junior staff go. Then when we need to hire again it's an emergency and we can only hire more senior staff because we need to get things done and nobody is there to fill the gaps.
It's been a vicious cycle to break.
popcorncowboy
I can second this cycle. Agentic coding AI is an accelerant to a fire that sure looks like it's burning the bottom rungs of the ladder. Game theory suggests anyone already on the ladder needs to chop off as much of the bottom of the ladder as fast as possible. The cycle appears to only be getting... vicious-er.
jihadjihad
Ten years? They'll be staff, obviously. Three years of experience is senior now, did you get that memo?
kykat
That's of course because nobody wants to hire juniors and every job posting wants a senior, so now everyone is "senior".
pydry
Hopefully they'll all become plumbers or schoolteachers or something.
There's a glut of junior dev talent and not enough real problems out there which a junior can apply themselves to.
This means that most of them simply aren't able to get the kind of experience which will push them into the next skill bracket.
It used to be that you could use them to build cheap proofs of concept or self-contained scripting, but those are things AI doesn't actually suck too badly at. Even back then there were too many juniors and not enough roles, though.
6LLvveMx2koXfwn
Dude, you're basically describing my career - no LLMs necessary!
boesboes
What is going on? We are blaming a tool for poor management and coaching/training again. It doesn't work, just as tools are never the answer to cultural problems. But blaming (or fixing) the tech is easy; that's why DevOps never really became more than increasingly complex shell scripting, instead of a real discussion on collaboration, shared goals and culture.
But it's a natural part of the cycle, I think. Assembly language, compilers, scripting languages, application development frameworks... all led to a new generation of programmers that "don't understand anything!" and tools that are "just useful for the lazy!".
I call BS. This is 100% a culture and management problem. I'd even go so far as to say it is our responsibility as seniors to coach this new generation into producing quality and value with the tools they have. Don't get me wrong, I love shouting at clouds; I even mumble angrily at people in the streets sometimes, and managers are mostly idiots; but we are the only ones that can guide them to the light, so to speak.
Don't blame the tool, fix the people.
SeasonalEnnui
Good blog post, I recognise much of that.
The positions of both evangelists and luddites seem mad to me; there's too much emotion involved in those positions for what amounts to another tool in the toolbox that should only be used in appropriate situations.
anymouse123456
AI has been great for UX prototypes.
Get something stood up quickly to react to.
It's not complete, it's not correct, it's not maintainable. But it's literal minutes to go from a blank page to seeing something clickable-ish.
We do that for a few rounds, set a direction and then throw it in the trash and start building.
In that sense, AI can be incredibly powerful, useful and has saved tons of time developing the wrong thing.
I can't see the future, but it's definitely not generating useful applications out of whole cloth at this point in time.
SeasonalEnnui
Yes, totally agree. The second thing I found it great for was explaining errors: it either finds the exact solution or sparks a thought that leads to the answer.
criddell
For me it's useful in those areas I don't venture into very often. For example I needed a powershell script recently that would create a little report of some registry settings. Claude banged out something that worked perfectly for me and saved me an hour of messing around.
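That one was a PowerShell script, but the same sort of throwaway report looks roughly like this sketched in Python's standard winreg module instead (the keys below are placeholders, not the actual settings involved):

```python
import winreg  # standard library, Windows only

# Placeholder keys; the real script queried environment-specific settings.
KEYS_TO_REPORT = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"),
]

def dump_key(root, path):
    """Print every value under a registry key as 'name = data'."""
    with winreg.OpenKey(root, path) as key:
        print(f"[{path}]")
        index = 0
        while True:
            try:
                name, data, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values to enumerate
                break
            print(f"  {name} = {data}")
            index += 1

if __name__ == "__main__":
    for root, path in KEYS_TO_REPORT:
        dump_key(root, path)
```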
cck9672
Can you elaborate on your process and tools here? This use case may actually be valuable for me and my team.
waterproof
Tools that can build you a quick clickable prototype are everywhere. Replit, Claude Code, Cursor, ChatGPT Pro, v0.app - they're all totally capable.
From there it's the important part: discussing, documenting, and making sure you're on the same page about what to actually build. Ideally, get input from your actual customers on the mockup (or multiple mockups) so you know what resonates and what doesn't.
shusson
The majority of software engineers today (mostly in big tech) are not interested in software engineering. They studied it to make money. This happened before LLMs. Add the fact that software development isn't deterministic, and you have a perfect storm of chaos.
But our discipline has been through similar disruptions in the past. I think give it a few years then maybe we’ll settle on something sane again.
I do think the shift is permanent though. Either you adapt to use these LLMs well or you will struggle to be competitive (in general)
suddenlybananas
>I do think the shift is permanent though. Either you adapt to use these LLMs well or you will struggle to be competitive (in general)
That certainly isn't true if what this article suggests is true.
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.
It didn't help that the LLM was confidently incorrect.
The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.
In the human world, with legacy stuff you can get into a situation where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.
I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.
With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).