The Myth of Developer Obsolescence

358 comments · May 27, 2025

whstl

> For agency work building disposable marketing sites

Funny, because I did some freelancing work fixing disposable vibe-coded landing pages recently. And if there's one thing we can count on, it's that the biggest control freaks will always have that one extra stupid requirement that completely befuddles the AI and pushes it into making an even bigger mess, and then I'll have to come fix it.

It doesn't matter how smart the AI becomes; the problems we face with software are rarely technical. The problem is always the people creating accidental complexity and pushing it to the next person as if it were "essential".

The biggest asset of a developer is saying "no" to people. Perhaps AIs will learn that, but with competing AIs I'm pretty sure we'll always get one or the other to say yes, just like we have with people.

brookst

Excellent reformulation of the classic “requirement bug”: software can be implemented perfectly, but if the requirements don’t make sense, including failing to account for the realities of the technical systems, mayhem ensues.

I think AI will get there when it comes to “you asked for a gif but they don’t support transparency”, but I am 100% sure people will continue to write “make the logo a square where every point is equidistant from the center” requirements.

EDIT: yes jpg, not gif, naughty typo + autocorrect

fmbb

Mid-1800s computing classic.

> On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

From Passages from the Life of a Philosopher (1864), ch. 5 "Difference Engine No. 1"

immibis

They were just asking if he cheated.

You put in 2+2, and 4 comes out. That's the right answer.

If you put in 1+1, which are the wrong figures for the question of 2+2, will 4 still come out? It's easy to make a machine that always says 4.

mjburgess

Babbage was one of those smug oblivious types. The confusion was his alone, and is exactly the same as that sort of confusion which arises when an engineer claims to have built a "thinking machine" but has no notion of what thought is, has never made any study of the topic, and nevertheless claims to have produced it.

They are either asking: is the machine capable of genuine thought, and therefore capable of proactively spotting an error in the input and fixing it? Or they were asking: how sensitive is the output to incorrect perturbations in the input (i.e., how reliable is it)?

I sometimes take them to be asking the former question, as when someone asks, "Is the capital of France 'Paree'?" and one responds, "Yes, it's pronounced by the French like 'Paree', but written Paris."

But they could equally mean, "is the output merely a probable consequence of the input, or is the machine deductively reliable?"

Babbage, understanding the machine as a pure mechanism, is oblivious to either possibility, yet very much inclined to sell it as a kind of thinking engine, which would require, at least, both capacities.

anonymars

I believe the example you're looking for is, "seven perpendicular red lines": https://www.youtube.com/watch?v=BKorP55Aqvg

The task has been set; the soul weeps

oniony

Just draw them in seven dimensional space.

ghssds

> a square where every point is equidistant from the center

Given those requirements, I would draw a square on the surface of a sphere, making each point of the square equidistant from the sphere's center.
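
(An aside of my own, not from the comment above, just to make the joke precise: in the plane the requirement is contradictory, while on a sphere it holds trivially.)

    % Planar square of side s centered at the origin: an edge midpoint and
    % a corner sit at different distances from the center.
    d_{\text{midpoint}} = \frac{s}{2} \neq \frac{s\sqrt{2}}{2} = d_{\text{corner}}
    % Sphere of radius R with center c: every point p on the surface is at
    % the same distance from the center, so the requirement holds for free.
    \lVert p - c \rVert = R \quad \text{for all surface points } p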

brookst

“We’re going to need a new type of curved monitor…”

sceptic123

Is this trolling or a suggestion that AI doesn't understand that transparent gif is 100% a thing?

Izkata

I think they just got gif mixed up with jpg. The letters have the same pattern in qwerty and one could have autocorrected to the other.

MrDarcy

The original GIF format did not support transparency. 89A added support for fully transparent pixels. There is still no support for alpha channels, so a partially opaque drop shadow is not supported for example.

Depends on what “transparent” means.
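
A minimal sketch of that distinction with Pillow (my own illustration, not from the thread; the file name is invented):

    from PIL import Image

    # GIF89a transparency is a single palette index flagged "transparent",
    # not an alpha channel: each pixel is either fully visible or fully absent.
    img = Image.new("P", (64, 64), color=1)  # palette-mode canvas filled with index 1
    img.putpalette([0, 0, 0, 255, 0, 0] + [0, 0, 0] * 254)  # index 0 reserved, index 1 red
    img.save("logo.gif", transparency=0)  # flag palette index 0 as the transparent one

    # There is no way to declare "index 2 is 30% opaque": the format has no
    # alpha channel, which is why a soft drop shadow cannot survive in a GIF.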

staunton

> people will continue to write “make the logo a square where every point is equidistant from the center” requirments.

Why wouldn't the AI deal with that the same way human developers do? Follow up with questions, or iterate requirements?

AlotOfReading

Go ahead and try it with your favorite LLMs. They're too deferential to push back consistently or set up a dialectic and they struggle to hold onto lists of requirements reliably.

Lutger

Between "no" and "yes sure" also lie 50 shades of "is this what you meant?". For example, this older guy asked me to create a webpage where people could "download the database". He meant a very limited csv export of course. I an wondering if chatgpt would have understood his prompts, and this was one of the more obvious ones to me.

pempem

OMG this reminds me of a client (enterprise) I had who had been pushed into the role of product and he requested we build a website that "lets you bookmark every page"

rypskar

>> lets you bookmark every page

In today's world with all the SPAs that don't push to history or don't manage to open the correct page based on history, this seems like a valid requirement

whstl

Definitely. Saying no is not really denying, it's negotiating.

bombcar

I call it "nogotiating" - the problem is that inexperienced devs emphasize the "no" part and that's all the client hears.

What you have to do is dig into the REASONS they want X or Y or Z (all of which are either expensive, impossible, or both) - then show them a way to get to their destination or close to it.

rawgabbit

A pet peeve of mine is the word "negotiation" in the context of user requirements.

In the business user’s mind, negotiation means the developer can do X but is lazy. Usually, the reality is that requirement X doesn’t make any sense because a meeting was held where the business decided to pivot in a new direction and settled on a new technical solution. The product owner simply hands out the new requirement without the context. If an architect or senior developer had been involved in the meeting, they would have told the business: you just trashed six months of development and we will now start over.

blooalien

> Saying no is not really denying, it's negotiating.

Sometimes. I have often had to say "no" because the customer request is genuinely impossible. Then comes the fun bit of explaining why the thing they want simply cannot exist, because often they'll try "But what if you just ... ?" – "No! It doesn't work that way, and here's why..."

immibis

I think he actually had a clearer vision of the requirements than you do. In web dev jargon land (and many jargon lands) "the database" means "the instance of Postgres" etc.

But fundamentally it just means "the base of data", the same way "a codebase" doesn't just mean a Git repository. "Downloading the database" just means there's a way to download all the data, and CSV is a reasonable export format. Don't get confused into thinking it means a way to download the Postgres data folder.

JeremyNT

> The biggest asset of a developer is saying "no" to people. Perhaps AIs will learn that, but with competing AIs I'm pretty sure we'll always get one or the other to say yes, just like we have with people.

In my experience this is always the hardest part of the job, but it's definitely not what a lot of developers enjoy (or even consider to be their responsibility).

I think it's true that there will always be room for developers-who-are-also-basically-product-managers, because success for a lot of projects will boil down to really understanding the stakeholders on a personal level.

bicx

I think the biggest skill I've developed is the "Yes, but..." Japanese style of saying "no" without directly saying "no." Essentially you're saying anything is possible, but you may need to relax constraints (budget, time, complexity). If your company culture expects the engineering team to evaluate and have an equal say in making feature decisions, then a flat "no" is more acceptable. If you're in a non-tech-first company like I am, simply saying "no" makes _you_ look like the roadblock unless you give more context and allow others to weigh in on what they're willing to pay.

suzzer99

We merged with a group from Israel and I had to explain to them that our engineers had given them the "Hollywood no" on something they'd asked for. Basically "Yes, that sounds like a great idea" which actually means "Hell no" unless it's immediately followed up with an actionable path forward. The Israeli engineers found this very amusing and started asking if it was really a Hollywood no anytime they'd get a yes answer on something.

suzzer99

Saying no is the hardest part of the job, and it's only possible after you've been around a few years and already delivered a lot of value.

There's also an art to it in how you frame the response, figuring out what the clients really want, and coming up with something that gets them 90% there w/o mushrooming app complexity. Good luck with AI on that.

jvanderbot

There's a whole book about this called Peopleware. It's why I'm fond of saying "may all your problems be technical".

It's just never been that hard to solve technical problems with code except for an infinitesimal percentage of bleeding edge cases.

whstl

Yep. With a good enough team, even the technical problems are almost 100% caused by people issues.

osigurdson

I've heard this many times. It isn't clear what it means however. If nearly 100% of problems are "people problems", what are some examples of "people solutions"? That may help clarify.

bcrosby95

Technical problems are what cause the people problems though. You can't completely blame one or the other.

xnickb

This is actually a very good point, although it's indeed not hard to imagine AI becoming far better at estimating the complexity of a potential solution and warning the user about it.

For example, in chess, AI is already far better than humans, including on tasks like evaluating positions.

Admittedly, I use "AI" in a broad sense here, despite the article being mostly focused on LLMs.

evantbyrne

Agency work seems to be a blind spot for individuals within the startup world, with many not realizing that it goes way beyond theme chop shops. The biggest companies on the planet contract with agencies all the time, and external contractors do some of their best work; e.g., Huge has won 3 Webby awards for Google products.

whstl

Oh I agree. I don't really have a problem with agencies, the topic of them is not really related to my reply. My focus was more on the "disposable" part.

mjlangiii

I agree; a stark difference between AI and me is knowing when to say "no", and when to dig deeper for the unspoken story/need/value.

robocat

> accidental complexity

Haven't we found a better term for that yet?

It is intentional, designed complexity...

There's no accident about it: engineers or management chose it.

Recent discussion on accidental versus essential (kicked off by a flagged article): https://news.ycombinator.com/item?id=44090302 (choosing good dichotomies is difficult, since there's always exceptions to both categories)

jstummbillig

I think the article is mostly wrong about why it is right.

> It's architecting systems. And that's the one thing AI can't do.

Why do people insist on this? AI absolutely will be able to do that, because it increasingly can do that already, and we are now goalposting around what "architecting systems" means.

What it cannot do, even in theory, is decide for you to want to do something and decide for you what that should be. (It can certainly provide ideas, but the context space is so large that I don't see how it would realistically be better at seeing an issue that exists in your world, including what you can do, who you know, and what interests you.)

For the foreseeable future, we will need people who want to make something happen. Being a developer will mean something else, but that does not mean that you are not the person most equipped to handle that task and deal with the complexities involved.

GuB-42

> Being a developer will mean something else

Not really. Programming means telling the machine what to do. How you do it has changed over the years, from writing machine language and punching cards to gluing frameworks and drawing boxes. But the core is always the same: take approximate and ambiguous requirements from someone who doesn't really know what they want and turn them into something precise the machine can execute reliably, without supervision.

Over the years, programmers have figured out that the best way to do it is with code. GUIs are usually not expressive enough, and English is too ambiguous and/or too verbose, that's why we have programming languages. There are fields that had specialized languages before electronic computers existed, like maths, and for the same reason.

LLMs are just the current step in the evolution of programming, but the role of the programmer is still the same: getting the machine to do what people want, be it by prompting, drawing, or writing code, and I suspect code will still prevail. LLMs are quite good at repeating what has been done before, but having them write something original using natural language descriptions is quite a frustrating experience, and if you are programming, there is a good chance there is at least something original to it, otherwise, why not use an off-the-shelf product?

We are at the peak of the hype cycle now, but things will settle down. Some things will change for sure, as always when some new technology emerges.

yesco

I like to joke with people that us programmers automated our jobs away decades ago, we just tell our fancy compilers what we want and they magically generate all the code for us!

I don't see LLMs as much different, really: our jobs becoming easier just means there are more things we can do now, and with more capabilities comes more demand. Not right away, of course.

dehrmann

What's different is compilers do deterministic, repetitive work that's correct practically every time. AI takes the hard part, the ambiguity, and gets it sorta ok some of the time.

ethbr1

100% agree with this thread, because it's the discussion about why NoCode (and cloud/SaaS to a lesser degree) failed to deliver on their utopian promises.

Largely, because there were still upstream blockers that constrained throughput.

Typically imprecise business requirements (because someone hadn't thought sufficiently about the problem) or operation at scale issues (poorly generalizing architecture).

> our jobs becoming easier just means there's more things we can do now and with more capabilities comes more demand

This is the repeatedly forgotten lesson from the computing / digitization revolution!

The reason they changed the world wasn't because they were more capable (versus their manual precursors) but because they were economically cheaper.

Consequently, they enabled an entire class of problems to be worked on that were previously uneconomical.

E.g. there's no company on the planet that wouldn't be interested in more realtime detail of its financial operations... but that wasn't worth enough to pay bodies to continually tabulate it.

>> The NoCode movement didn't eliminate developers; it created NoCode specialists and backend integrators. The cloud didn't eliminate system administrators; it transformed them into DevOps engineers at double the salary.

Similarly, the article feels around the issue here but loses two important takeaways:

1) Technologies that revolutionize the world decrease total cost to deliver preexisting value.

2) Salary ~= value, for as many positions as demand supports.

Whether there are more or fewer backend integrators, DevOps engineers, etc. post-transformation isn't foretold.

In recent history, those who upskill their productivity reap larger salaries, while others' positions disappear; e.g., the cloud engineer supporting millions of users, instead of the many bodies it used to take to deliver less efficiently.

It remains to be seen whether AI coding will stimulate more demand or simply increase the value of the same / fewer positions.

PS: If I were career plotting today, there's no way in hell I'd be aiming for anything that didn't have a customer-interactive component. Those business solution formulation skills are going to be a key differentiator any way it goes. The "locked in a closet" coder, no matter how good, is going to be a valuable addition for fewer and fewer positions.

surgical_fire

Those are very good points.

I am finding LLMs useful for coding in that it can do a lot of heavy lifting for me, and then I jump in and do some finishing touches.

It is also sort of decent at reviewing my code and suggesting improvements, writing unit tests etc.

Hidden in all that is that I have to describe all of those things, in detail, for the LLM to do a decent job. I can of course just say "write unit tests for me", but I notice it does a much better job if I describe what the test cases are, and even how I want things tested.

lubujackson

I agree, I see AI as just a level of abstraction. Make a function to do X, Y, Z? Works great. Even architect a DAG, pretty good. Integrate everything smoothly? Call in the devs.

On the bright side, the element of development that is LEAST represented in teaching and interviewing (how to structure large codebases) will be the new frontier and differentiator. But much as scripting languages removed the focus on pointers and memory management, AI will abstract away discrete blocks of code.

It is kind of the dream of open source software, but advanced - don't rebuild standard functions. But also, don't bother searching for them or work out how to integrate them. Just request what you need and keep going.

bluefirebrand

> I agree, I see AI as just a level of abstraction. Make a function to do X, Y, Z? Works great. Even architect a DAG, pretty good. Integrate everything smoothly? Call in the devs.

"Your job is now to integrate all of this AI generated slop together smoothly" is a thought that is going to keep me up at night and probably remove years from my life from stress

I don't mean to sound flippant. What you are describing sounds like a nightmare. Plumbing libraries together is just such a boring, miserable chore. Have AI solve all the fun challenging parts and then personally do the gruntwork of wiring it all together?

I wish I were closer to retirement. Or death

zamalek

My way of thinking about this has been: code is to a developer as bricks are to a builder. Writing a line of code is merely the final 10% of the work; there's a whole bunch of cognitive effort that precedes it. Just like a builder has already established a blueprint, set up straight lines, mixed cement, and what-have-you prior to laying a brick.

catigula

The problem with this idea is that the current systems have gone from being completely incapable of taking the developer role in this equation to somewhat capable of taking the developer role (i.e. newer agents).

At this clip it isn't very hard to imagine the developer layer becoming obsolete or reduced down to one architect directing many agents.

In fact, this is probably already somewhat possible. I don't really write code anymore; I direct Claude Code to make the edits. This is a much faster workflow than the old one.

aaronblohowiak

+100.

I feel like a lot of people need to go re-read The Moon Is a Harsh Mistress.

johnnyanmac

>Why do people insist on this?

Because it isn't even close to entry level architecture. And the structure of LLM's makes it hard to iterate the way architecture requires. It basically gets a little bit of context only to essentially tear down and rebuild at blistering speeds. You can't really architect that way.

>What it cannot do, even in theory, is decide for you to want to do something and decide for you what that should be.

Something sadly common among clients, even large ones.

suyash

The article is mostly wrong, companies are already not recruiting as many junior/fresh college graduates as before. If AI is doing everything but architecting (which is a false argument but let's roll with it), naturally companies will need fewer engineers to architect and supervise AI systems.

VBprogrammer

I suspect that any reduction in hiring is more a function of market sentiment than jobs being replaced by AI. Many companies are cutting costs rather than expanding as rapidly as possible during the capture the flag years.

ghaff

And correcting for a hiring bubble.

Depends on where, too. I was just talking to a friend yesterday who works for a military subcontractor (so not just software), and they said their projects are basically bottlenecked by hiring engineers.

bradlys

People keep forgetting that the hiring cuts were happening before AI was hyped up. AI is merely the justification right now because it helps stock price.

We’ve been seeing layoffs for over 3 years…

JohnMakin

They're not hiring juniors, and now my roles consist of 10x as much busywork as they used to. People are expanding to fill these gaps; I'm not seeing much evidence that AI is "replacing" these people so much as businesses think they now don't need to hire junior developers. The thing is, though, in 5 years there are not going to be as many seniors, and if AI doesn't close that gap, businesses are going to feel it a lot more than whatever they think they're gaining by not hiring now.

burningChrome

>> The thing is, though, in 5 years there are not going to be as many seniors.

This is already happening. Over the past 4-5 years I've known more than 30 senior devs who either transitioned into areas other than development or, in many cases, completely left development altogether. Most have left because they're getting stuck in situations like you describe: having to pick up more managerial stuff, while AI isn't capable of even doing junior-level work, so many just gave up and left.

Yes, AI is helping in a lot of different ways to reduce development times, but the offloading of specific knowledge to these tools is hampering actual skill development.

We're in for a real bumpy ride over the next decade as the industry comes to grips with how to deal with a lot of bad things all happening at the same time.

uludag

There's the software factory hypothesis, though, which states that LLMs will bring down the level of skill required to produce the same software (i.e. automation makes SWE like working on a factory line). In this scenario, unskilled cheap labor would be desired, making juniors preferable.

My guess though is that the lack of hiring is simply a result of the over saturation of the market. Just looking at the growth of CS degrees awarded you have to conclude that we'd be in such a situation eventually.

roenxi

The equilibriums wouldn't quite work out that way. The companies would still hire the most capable software engineers (why not?), but the threat of being replaced by cheap juniors means that they don't have much leverage and their wages drop. It'll still be grizzled veterans and competitive hiring processes looking for people with lots of experience.

These things don't happen overnight though, it'll probably take a few years yet for the shock of whatever is going on right now to really play out.

worldsayshi

The number and sophistication of sailing ships increased considerably as steamships entered the market. Only once steamships were considerably better in almost every regard that mattered to the market did sailing ships truly get phased out to become a mere curiosity.

I think the demand for developers will similarly fluctuate wildly while LLMs are still being improved towards the point of being better programmers than most programmers. Then programmers will go and do other stuff.

Being able to make important decisions about what to build should be one of those things that increase in demand as the price of building stuff goes down. Then again, making important technical decisions and understanding their consequences has always been part of what developers do. So we should be good at that.

skydhash

The advantages of steam over sail were clear to everyone. The only issues left were engineering ones: solving each mini-problem as they went and making the engine more efficient. Since the advent of ChatGPT, hallucinations have been pointed out as a problem. Today we're nowhere close to even a hint of how to correct them.

ncruces

People apparently can't decide if AI is killing juniors, or if it's lowering the bar of what laymen can achieve.

whstl

Anecdotal but:

Fintech unicorn that has AI in its name, but still forbids usage of LLMs for coding (my previous job) --> no hiring of juniors since 2023.

YC startup funded in 2024 heavily invested in AI (my current job) --> half the staff is junior.

ghaff

There's definitely this broader argument and you can even find it in academic papers. Is AI best at complementing expertise or just replacing base-level skills? Probably a bit of both but an open question.

jghn

This does not explain why so many software professionals are finding themselves out of a job as well as finding it hard to acquire a new one.

cheschire

Maybe I misunderstood your phrasing, but I think with enough context an AI could determine what you want to do with reasonable accuracy.

In fact, I think this is the scary thing that people are ringing alarm bells about. With enough surveillance, organizations will be able to identify you reliably enough out in the world to build a significant amount of context, even if you aren't wearing a pair of AI glasses.

And with all that context, it will become a reasonable task for AI to guess what you want. Perhaps even to guess a string of events, actions, or activities that would lead you towards an end state desirable to that organization.

This is primarily responding to that one assertion though and is perhaps tangential to your actual overall point.

jstummbillig

Take a look at your life and the signals you use to operate. If you are anything like me, summarizing them in a somewhat reasonable fashion feels basically impossible.

For example, my mother calls and asks if I want to come over.

How is an AI ever going to have the context to decide that for me? Given the right amount and quality of sensors starting from birth or soon after – sure, it's not theoretically impossible.

But I am a grown-up person with knowledge of the things we share and don't share, the conflicts in our present and past, the things I never talked about to anyone and that I would find hard to verbalize if I wanted to, or to admit to myself that I don't.

It can check my calendar. But it can't understand that I have been thinking about doing something for a while, and I just heard someone randomly talking about something else, that resurfaced that idea and now I would really rather do that. How would the AI know? (Again, not theoretically impossible given the right sensors, but it seems fairly far away.)

I could try and explain, of course. But where to start? And how would I explain how to explain this to mum? It's really fucking complicated. I am not saying that LLMs would not be helpful here; they are generalization monsters. Actually, it's both insane and sobering how helpful they can be given the amount of context that they do not have about us.

prmph

Exactly, even AGI would not be able to answer that question on my behalf.

Which means it cannot architect a software solution just by itself, unless it could read people's minds and know what they might want.

dakiol

> but I think with enough context [...]

I think that's the key. The only ones who can provide enough accurate context are software developers. No PO or manager can handle the level of detail (or abstraction) needed to hand it over via prompts to a chatbot; engineers do this on a daily basis.

I laugh at the image of a non-technical person like my PO or the manager of my manager giving "orders" to an LLM to design a highly scalable tiny component for handling payments. There are dozens of details that can go wrong if not enough detail is provided: from security, to versioning, to resilience, to deployment, to maintainability...

malnourish

Until LLMs are hooked directly into business and market data and making decisions without, or with nominal, human intervention.

mnky9800n

I think this is already what happens in social media advertising. It’s not hard to develop a pattern of behaviours for a subset of people that lead to conversion and then build a model that delivers information to people that leads them on those paths. And conversion doesn’t mean they need to buy a product it could also be accept an idea, vote for a candidate, etc. The scary thing, as you point out, is that this could happen in the real world given the massive amount of data that is passively collected about everything and everybody.

psychoslave

I want peace and thriving for all members of humanity¹ to the largest extent, starting where it enables reciprocal flourishing, and staying free of excluding anyone by favoring someone else.

See, "AI" don't even have to guess it, I make full public disclosure of it. If anything can help with such a goal, including automated inference (AI) devices, there is no major concern with such a tool per se.

The leviathan monopolizing the tool for its own benefit in a detrimental way for human beings is an orthogonal issue.

¹ this is a bit of an anthropocentric statement, but it's a good way to favor human agreement, and I believe it still implicitly requires living in harmony with the rest of our fellow earth inhabitants

austin-cheney

No, AI is not yet able to architect. The confusion here is the inability to discern architecture from planning.

Planning is the ability to map concerns to solutions and project solution delivery according to resources available. I am not convinced AI is anywhere near getting that right. It’s not straightforward even when your human assets are commodities.

Acting on plans is called task execution.

Architecture is the design and art of interrelated systems. This involves layers of competing and/or cooperative plans. AI absolutely cannot do this. A gross hallucination at one layer potentially destroys or displaces other layers and that is catastrophically expensive. That is why real people do this work and why they are constantly audited.

Dumblydorr

It can’t actively become a coding agent and make the changes itself, but it doesn’t do that for individual scripts now either, and yet we say it can code.

And yet I can ask it how to architect my database in a logical way, and it clearly has solid ideas, which, again, it doesn’t script itself.

So really it teaches us or instructs us one way to do things; it’s not executing in any realm…yet

austin-cheney

Nothing in my parent comment had anything to do with writing code. Architects in physical engineering don’t hammer nails or pour concrete. Architects in software, likewise, aren’t concerned with islands of code.

elzbardico

AI can't architect. AI can simulate architecting. A lot of times AI can't even code.

conartist6

But that's what it's sold as. The decide-for-you bot.

If people still had to think for themselves, what would be the point?

bonoboTP

What's the point of a lever if I still have to move it?

tonyhart7

Yeah, we just need Amazon to release their AWS SDK MCP, then wait a few years until the rough parts get smoothed out, and then it would be possible.

I mean, we literally have an industry that does just that (Vercel, Netlify, etc.).

vinceguidry

This article makes a fundamental mistake: the author thinks that business values quality. Business has never valued quality. Customers can value quality, but business only values profit margins. If customers will only buy quality, then that's what business will deliver. But customers don't value quality either, most of the time. They value bang-for-buck. They'll buy the cheapest tools on Amazon and happily vibe code their way into a hole, then throw the broken code out and vibe code some more.

The only people that value quality are engineers. Any predictions of the future by engineers that rely on other people suddenly valuing quality can safely be ignored.

sanderjd

This doesn't resonate with me at all.

First of all, all the most successful software products have had very high quality. Google search won because it was good and fast. All the successful web browsers work incredibly well. Ditto the big operating systems. The iPhone is an amazing product. Facebook, Instagram, TikTok: whatever else you think of them, these are not buggy or sluggish products (especially in their prime). Stripe grew by making a great product. The successful B2B products are also very high quality. I don't have much love for Databricks, but it works well. I have found Okta to be extremely impressive. Notion works really well. (There are some counterexamples: I'm not too impressed by Rippling, for instance.)

Where are all these examples of products that have succeeded despite not valuing quality?

zharknado

> Where are all these examples of products that have succeeded despite not valuing quality?

Windows products since the 2000s. They may have won on quality early on but today succeed mainly via compliance controls and switching costs IMO.

Also big legacy B2B digital systems of record. Pretty much any ERP. Can’t say firsthand but this is my impression of SAP products and Oracle products. Also Encompass, the system of record for a large majority of the U.S. mortgage market. Most medical software.

There are a lot of recordkeeping systems that have a massive moat from handling decades worth of nuance. Their “quality” by modern UX and performance standards is very poor but they handle all the nooks and crannies of their industry.

johnnyanmac

You're correct, so maybe there's a caveat. You need to have quality in the beginning during market capture mode. Once the customer is entrenched, you can then slack or even enshittify your product to some point. You're playing with friction that may lose the business, but comfortable customers can tolerate quite a bit for the familiar.

oldandboring

Thanks for calling out Rippling. Pretty poor experience for me as well.

enraged_camel

>> Where are all these examples of products that have succeeded despite not valuing quality?

Salesforce. Quickbooks. Any Oracle product.

simoncion

Blackboard's software and systems.

Fucking Windows.

jajko

Sorry, but Facebook a "high quality product"? It was a bug-infested shitshow from the beginning to this day, across multiple computers, spanning more than a decade and a half. Not just for me. Literally their only value is the social graph, which they have by the luck of being first, nothing more.

These days when the site crashes I welcome it as a gentle reminder to not spend there even the 1 minute I sometimes do. Anyway, it's now mostly fake AI-generated ads for obscure groups I have 0 interest in. I keep reporting them to FB, but even for outright fraud or scams FB comes back to me with a resolution in maybe 2% of the cases. EU on you, you cheap scammers.

But in the past I used it for ie photo sharing with family and friends, since I was super active in adventuring and travelling around the world. Up to 10k photos over a decade.

Photo album uploads randomly failed, or uploaded some subset, or some photos twice, on stable fiber optic, while Flickr or Google Photos never ever had such issues. Cannot comment: some internal gibberish error. Comment posted twice. Page reloads to an error. Links to profiles or photos go to an empty page. Sometimes even the main page is just an empty feed or some internal error. I saw the sentence "Something went wrong" hundreds or maybe even thousands of times; it became such a classic 500 variant. And so on and on; I don't keep a list around. Always on Firefox with uBlock Origin.

I would be properly ashamed to ever be professionally linked with what is, by a huge margin, the worst technical product I ever came across. That is, if I could somehow ignore what a cancer to society I would be helping to build, but that would require advanced sociopathic mental tricks on myself I am simply neither capable of nor willing to do.

Nah, FB doesn't deserve to be mentioned in the same category as the rest, on any reasonable basis.

whatnow37373

People will start caring when their devices start bricking, loading websites takes 12sec and registering for medicaid is only possible between 9 and 11AM and then only if lucky.

We are in this weird twilight zone where everything is still relatively high quality and stuff sort of works, but in a few decades shit will start degrading faster than you can say “OpenAI”.

Weird things will start happening, like tax systems for the government that can't be upgraded while consuming billions, infrastructure failing for unknown reasons, and simple non- or low-power devices that are now ubiquitous becoming rare. Everything will require subscriptions and internet access, and nothing will work right. You will have to talk to LLMs all day.

wiseowise

> People will start caring when their devices start bricking, loading websites takes 12sec and registering for medicaid is only possible between 9 and 11AM and then only if lucky.

I don’t know about Medicaid, but the other two are already true right now.

yoyohello13

I'm convinced the Microsoft Teams team has gone all in on vibe coding. I have never seen so many broken features released in such a short time frame as the last couple months. This is the future as more companies go all in on AI coding.

JB_Dev

Nah, this is just Microsoft's quality bar in general. AI will only accelerate the decline.

herpdyderp

Nothing really new then, just faster enshittification timelines.

const_cast

> registering for medicaid is only possible between 9 and 11AM and then only if lucky.

When we got healthcare.gov it was pretty much this, maybe worse actually. Website was unusable and delivered like 5% of the requirements. It was pretty bad and people were pissed.

Of course, in typical American government fashion, the task was outsourced to some companies in the private sector, which then took 2 years, went way over budget, and still delivered nothing.

whatnow37373

While this issue is complicated and caused by a variety of factors I believe it is indicative of the quality we are going to be seeing in the coming decades. Well, that plus ads. The ads will always work.

chairhairair

If the current tech plateaus (but continues to come down in price, as expected) then this is a good prediction.

But, then there will be a demand for "all-in-one" reliable mega apps to replace everything else. These apps will usher in the megacorp reality William Gibson described.

johnnyanmac

>but continues to come down in price, as expected

Is it? AI very much seems to be in market capture mode. And IIRC, very few businesses actually report profits.

I can only predict AI models ramping up the cost like crazy once the victor captures the market. Same as every other tech trend in the last 20 years.

prmph

I don't know where you get the impression that customers don't value quality. They value quality, a lot.

If customers didn't value quality, then every startup would have succeeded, just by providing the most barely functioning product at the cheapest prices, and making enough revenue by volume.

vinceguidry

> startup would have succeeded, just by providing the most barely functioning product at the cheapest prices, and making enough revenue by volume.

You've just described hustle culture. And yes, it does lead to business success. Engineers don't like hustle.

prmph

Yep, but most hustles fail, the number of startups that succeed is like, what, 5 or 10%?

ndiddy

I think quality can be a differentiator in some cases. When the iPhone came out, there were other phones running Windows Mobile and Symbian that had more features and cost less. However, the iPhone was successful anyway because it ran smoother and had a more polished UI than its competitors.

mattgreenrocks

A few facts about my favorite quality-centric company: https://www.fractalaudio.com/

They build hardware-based amp/pedal modelers (e.g. virtual pedalboards + amps) for guitars that get a very steady stream of updates. From a feel and accuracy perspective, they outcompete pretty much everyone else, even much bigger companies such as Line 6 (part of Yamaha). Pretty small company AFAIK, maybe fewer than 20 people. Most of the improvements stem from the CEO's ever-improving understanding of how to accurately model what are very analog systems.

They do almost everything you shouldn't do as a startup:

* mostly a hardware company

* direct sales instead of going through somewhere like Sweetwater

* they don't pay artists to endorse them

* no subscriptions

* lots of free, sometimes substantial updates to the modeling algorithms

* didn't use AI to build their product quickly

Quality is how they differentiate themselves in a crowded market.

This isn't an edge case, either. This is how parts of the market function. Not every part of every market is trapped in a race to the bottom.

vinceguidry

You love to see it. Nothing beats a labor of love.

throw234234234

Depends solely on the domain IMO. There are domains where stakeholders absolutely value quality (e.g. loss of revenue, fines and other consequences). This isn't universally true.

caseysoftware

> Business has never valued quality. Customers can value quality, but business only values profit margins.

I think you're really close, with one nuance.

Business does not value CODE quality. Their primary goal is to ship product quickly enough that they can close customers. If you're in a fast moving or competitive space, quality matters more because you need to ship differentiating features. If the space is slow moving, not prone to migration, etc, then the shipping schedule can be slower and quality is less important.

That said, customers care about "quality", but they likely define it very differently: primarily as "usability".

They don't care about the code behind the scenes, what framework you used, etc as long as the software a) does what they want and b) does it "quick enough" in their opinion.

kerkeslager

> They don't care about the code behind the scenes, what framework you used, etc as long as the software a) does what they want and b) does it "quick enough" in their opinion.

Business folks love to say this, but a lot of this time this is glossing over a pretty inherent coupling between code quality and doing what users want quick enough. I've worked on a lot of projects with messy code, and that mess always translated into problems which users cared about. There isn't some magical case where the code is bad and the software is great for the users--that's not a thing that exists, at least not for very long.

ayrtondesozzla

I'd change that line near the end to:

The only people that often value quality are engineers.

I might even add that the overwhelming majority of engineers are happy to sacrifice quality - and ethics generally - when the price is right. Not all, maybe.

It's a strange culture we have, one which readily produces engineer types capable of complex logic in their work, yet for whom "the overarching concern of business is always profit" seems to sometimes cause difficulty.

fhd2

I think a few revolutions are missing from the list, ones that weren't technical but organisational:

1. The push for "software architects" to create plans and specifications for those pesky developers to simply follow. I remember around 2005 there was some hype around generating code from UML and having developers "just" fill in the blanks. The result I observed was insanely over-engineered systems where even just adding a new field to be stored required touching like 8 files across four different layers.

2. The "agile transformation" era that followed shortly after, where a (possibly deliberate) misunderstanding of agile principles lead to lots of off-the-shelf processes, roles, and some degree of acceptance for micro managing developers. From what I've seen, this mostly eroded trust, motivation and creativity. Best case scenario, it would create a functioning feature factory that efficiently builds the wrong thing. More often than not, it just made entire teams unproductive real fast.

What I've always liked to see is non-developers showing genuine interest in the work of developers: trying to participate or at least support, embracing the complexity, and clarifying problems to solve. No matter what tools teams use and what processes they follow, I've always seen this result in success. Any effort around reducing the complexity inherent in software development did not.

dinfinity

> The most valuable skill in software isn't writing code, it's architecting systems.

> And as we'll see, that's the one skill AI isn't close to replacing.

Yet we never 'see' this in the article. It just restates it a few times without providing any proof.

I'd argue the opposite: specifically asking AI to design an architecture already yields better results than what a good 30% of 'architects' I've encountered could ever come up with. It's just that a lot of people using AI don't explicitly ask for these things.

jandrewrogers

I’d frame it a bit differently. LLMs are pretty good at generating the midwit solution to problems because that is the bulk of the available training corpus. It is a generic “best practices” generator. You would expect it to be better than a third of human architects almost by definition.

On the other hand, they are pretty poor at reasoning from first principles to solve problems that are far outside their training corpus. In some domains, like performance-sensitive platforms, the midwit solution is usually the wrong one and you need highly skilled people to do the design work using context and knowledge that isn’t always available to LLMs. You could probably use an LLM to design a database kernel but it will be a relatively naive one because the training data isn’t available to do anything close to the state-of-the-art.

nekochanwork

> Yet we never 'see' this in the article. It just restates it a few times without providing any proof.

I'm honestly shocked by the number of upvotes this article has on Hacker News. It's extremely low quality. It's obviously written with ChatGPT. The tells are:

(1) Incorrect technology "hype cycle". It shows "Trigger, Disillusionment, Enlightenment, Productivity". It's missing the very important "Inflated Expectations".

(2) Too many pauses that disrupt the flow of ideas:

- Lots of em-dashes. ChatGPT loves to break up sentences with em-dashes.

- Lots of short sentences to sound pithy and profound. Example: "The executives get excited. The consultants circle like sharks. PowerPoint decks multiply. Budgets shift."

(3) "It isn't just X, it's X+1", where X is a normal descriptor, where X+1 is a more emphatic rephrasing of X. ChatGPT uses this construct a lot. Here are some from the article:

- "What actually happens isn't replacement, it's transformation"

- "For [...] disposable marketing sites, this doesn't matter. For systems that need to evolve over years, it's catastrophic."

Similarly, "It's not X, it's inverse-X", resulting in the same repetitive phrasing:

- "The NoCode movement didn't eliminate developers; it created NoCode specialists and backend integrators."

- "The cloud didn't eliminate system administrators; it transformed them into DevOps engineers"

- "The most valuable skill in software isn't writing code, it's architecting systems."

- "The result wasn't fewer developers—it was the birth of "NoCode specialists""

- "The sysadmins weren't eliminated; they were reborn as DevOps engineers"

- "the work didn't disappear; it evolved into infrastructure-as-code,"

- "the technology doesn't replace the skill, it elevates it to a higher level of abstraction."

- "code is not an asset—it's a liability."

---------

I wish people would stop using ChatGPT. Every article is written in the same wordy, try-too-hard-to-sound-profound ChatGPT mannerisms.

Nobody writes in their own voice anymore.

raincole

This is what wishful thinking looks like. The author is probably proud of their architecting skill so they think it's irreplaceable. If they were good at, say, optimization, they would think optimization is irreplaceable.

dgb23

I think that's just in the nature of these tools. They are better at doing things you can't do (most of the things), but worse at the things you can do (very few things).

Ex: If you're a lazy typist like most, then a code assistant can speed you up significantly, when you use it as an autocomplete plus. But if you're a very practiced vim user and your fingers fly over the keyboard, or a wizard lisp hacker who uses structural editing, then a code assistant slows you down or distracts you even.

dist-epoch

Or as Marc Andreessen said, being a VC is the last job AIs will be able to replace :)))

> Andreessen said that venture capital might be one of the few jobs that will survive the rise of AI automation. He said this was partly because the job required several “intangible” skills and was more of an art than a science.

https://fortune.com/article/mark-andreessen-venture-capitali...

verbify

> the job required several “intangible” skills and was more of an art than a science.

I've seen a lot more AI-generated art than AI-generated science.

chasing

No, AI has proven quite adept at generating self-aggrandizing bullshit.

ta1243

90% of the problem an architect has is being able to understand, and thus express, the requirements and limitations of a system, and understanding how it interacts with everything else.

I.e., writing the prompt, understanding the answers, pushing back, etc.

theyinwhy

90% of the problem an architect has is people.

majkinetor

This!

Unless AI is introduced as a regular coworker and stakeholders want to communicate with it regularly, I don't see this changing anytime soon.

Reverse engineering stuff when non-cooperative stakeholders dominate the project has its limits too, and requires "god mode" access to internal infrastructure, which is not something anybody gets.

mrweasel

That's because a large percentage of "architects" aren't really all that great. We interviewed a candidate who didn't know much of anything in terms of actually operating IT infrastructure, but in their mind that didn't really matter because they were looking for more of an architect role and didn't want to touch things like terminals, YAML, databases and all that stuff. Completely serious, they just sat there and told us that they really just wanted to work in diagram tools and maybe Excel....

Architects are like managers: it's way harder than people imagine, and very few people can actually do the work.

whstl

Yep. It's indeed like "management", where people expect to just slide into it as a reward for staying at a company for a few extra years.

Also I hate that "architect" is used as a synonym of "cloud architect". There is much more to software architecture than cloud.

mrweasel

Precisely. I noticed that a previous intern of mine is now an "Enterprise Architect". He's a smart dude, no doubt about it, but from zero to architect in 3.5 years? There's no way this person has the experience to be an architect. That promotion happened either because the company needed someone with that title, or because he was "paid off" to stay onboard.

exceptione

> We interviewed a candidate ... didn't want to touch things like terminals ... they were looking for more of an architect role

I don't know whether you were actually looking for an architect? There are different types of architects. For example, you have got enterprise architects that indeed will never touch yaml, you have got solution architects who have a more narrow focus, and you have got engineers with a plate of overbearing work and team responsibilities. The latter are better called lead engineer. In my experience, being a good (lead) engineer doesn't make one a good architect, but companies try to make their job posting more sexy by titling it with "architect". One would imho do better by taking lead engineers seriously just in their own right.

Architects in general need to be very skilled in abstract reasoning, information processing & conceptual thinking. However, the people hiring for an "architect" often look for a 2x/nx engineer, who is able to code x widgets per hour. That is a stupid mismatch.

I would agree however that someone without previous practical experience would be rather unsuitable, especially "below" enterprise architect level.

hcfman

Architects, what are they?

Ohhh, you mean PowerPoint writers. Sorry, lost you for a minute there.

crakhamster01

I'm increasingly certain that companies leaning too far into the AI hype are opening themselves up to disruption.

The author of this post is right: code is a liability. But AI leaders have somehow convinced the market that code generation on demand is a massive win. They're selling the industry on a future where companies can maintain "productivity" with a fraction of the headcount.

Surprisingly, no one seems to ask (or care) about how product quality fares in the vibe-code era. Last month Satya Nadella famously claimed that 30% of Microsoft's code was written by AI. Is it a coincidence that GitHub has been averaging 20 incidents a month this year? [1] That's basically one every work day...

Nothing comes for free. My prediction is that companies over-prioritizing efficiency through LLMs will pay for it with quality. I'm not going to bet that this will bring down any giants, but not every company buying this snake oil is Microsoft. There are plenty of hungry entrepreneurs out there that will swarm if businesses fumble their core value prop.

[1] https://www.githubstatus.com/history

cheema33

> I'm increasingly certain that companies leaning too far into the AI hype are opening themselves up to disruption.

I am in the other camp. Companies ignoring AI are in for a bad time.

crakhamster01

Haha, I tried to couch this by adding "too far", but I agree. Companies should let their teams try out relevant tools in their workflows.

My point was more of a response to the inflated expectations that people have about AI. The current generation of AI tech is rife with gotchas and pitfalls. Many companies seem to be making decisions with the hope that they will out-innovate any consequences.

DaSHacka

How so? Not enough art-slop logos, so they still have to pay an artist? Other than in maximizing shareholder return, I fail to see how forgoing AI is putting them "behind".

AI, especially for programming, is essentially no better than your typical foreign offshore programming firm, with nonsensical comments and sprawling, conflicting code styles.

If it eventually becomes everything the proponents say it will, they could always just start using it more.

sltr

I agree with this. "Companies which overuse AI now will inherit a long tail of costs" [1]

[1] AI: Accelerated Incompetence. https://www.slater.dev/accelerated-incompetence/

nhumrich

> code is not an asset—it's a liability

Yes, this. 100% this. The goal is for a program to serve a goal/purpose with the least amount of code possible. AI does the exact opposite. Now that code generation is easy, there is no more natural constraint preventing too much liability.

artrockalter

An answer to the productivity paradox (https://en.m.wikipedia.org/wiki/Productivity_paradox) could be that increased technology causes increased complexity of systems, offsetting efficiency gains from the technology itself.

RankingMember

Reminds me a lot of the old days when people were using MS FrontPage to create websites and the HTML was like 90% cruft.

1shooner

>html was like 90% cruft.

Have you looked at much top-tier website code lately?

DaSHacka

I visited stallman.org just the other day, yes.

coliveira

But if code can be easily replaced, why does it need to be a liability? If something goes wrong, the next generation of "programmers" will ask the AI to generate the code again.

dakiol

Code can't easily be replaced. It's "soft"ware, sure, but why do you think banks are still using COBOL? Why do you think my old company is still running a deprecated version of Zend (a PHP framework)?

skydhash

That's also why the term "technical debt" stuck. Every decision you enforce with the code you write is something you may need to revert later, and the cost of doing so can be really high. So high that, as in the parent comment's examples, you just don't bother.

coliveira

Because, until recently, it was very costly to replace the code. AI "programmers" will create completely new code in a few minutes so there's no need to maintain it. If there are new problems tomorrow, they'll generate the code again.

jollyllama

>can be easily replaced

I guess the question is "replaced with what?" How can you be sure it's a 1:1 replacement?

Bostonian

If you have for example a sorting routine, coded slightly differently, in 50 different files, which one should you use? It's better to have a single file with a sorting routine that you trust.
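To make that concrete, here is a minimal sketch of the "single trusted routine" (Python; Invoice and the ordering rule are hypothetical, invented for illustration):

    from dataclasses import dataclass
    from datetime import date
    from operator import attrgetter

    @dataclass
    class Invoice:          # hypothetical record type for the example
        due_date: date
        amount: float

    def sort_invoices(invoices):
        """The one trusted sort: by due date, then amount."""
        return sorted(invoices, key=attrgetter("due_date", "amount"))

    # Every caller imports this helper, so a bug fix or a change to the
    # ordering rule happens once, not in 50 slightly different copies.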

westoque

Such a great quote, and mostly true, especially from a business standpoint. I for one also see code as creative expression, a form of art. I like coding because I can express a solution in a way that is elegant and nice to read, for myself and others. A bit shallow, but if you've read code that is written elegantly, you'll know it immediately.

a_imho

My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD103...

mpweiher

“Since FORTRAN should virtually eliminate coding and debugging…” -- FORTRAN Preliminary report, 1954

http://www.softwarepreservation.org/projects/FORTRAN/BackusE...

cauliflower99

This is brilliant.

ogogmad

You can be wrong a lot and then suddenly be right. Look up "being a Turkey": https://en.wikipedia.org/wiki/Turkey_illusion

mpweiher

I think you are missing the fact that it did, in fact, eliminate “coding”.

And that’s where it starts to get interesting.

janalsncm

I think the layoffs in software are, for the most part, layoffs because of uncertainty, not because of technology. They are being justified after the fact with technobabble. If there weren't economic uncertainty, companies would gladly accept the extra productivity.

Think about it this way: five years ago plenty of companies hired more SWEs to increase productivity, gladly accepting additional cost. So it’s not about cost imo.

I might be wrong, but perhaps a useful way to look at all of this is to ignore stated reasons for layoffs and look at the companies themselves.

BugheadTorpeda6

What extra productivity? Why doesn't it show up in the economic figures?

overflow897

I think articles like this rest on the big assumption that progress is going to plateau. If that assumption is true, then sure.

But if it's false, there's no saying you can't eventually have an AI model that can read your entire AWS/infra account, look at logs, financials, and docs, and form a coherent picture of the entire business. At that point, the idea that it might be able to handle architecture and long-term planning seems plausible.

Usually when I read about developer replacement, it's with the underlying assumption that the agents/models will just keep getting bigger, better and cheaper, not that today's models will do it.

layer8

There is a high risk that the systems AIs build, and their reasoning, will become inscrutable over time, as if built by aliens. There is a huge social aspect to software development: the shared tech stack and practices ensure that (despite all disagreements) we as developers are roughly on the same page about how to do contemporary software development (which, for example, is different now from what it was, say, 20 or 40 years ago).

When AIs are largely on their own, their practices will evolve as well, but without a population of software developers participating in and following those changes in concepts and practices. There will still have to be a smaller number of specialists who follow and steer how AI does software development, so that the inevitable failure cases can be analyzed and fixed, and so that the AI way of doing things stays on a track that is still intelligible to humans.

Assuming that AI will become that capable, this will be a long and complex transition.

IshKebab

These kinds of articles are arguing against nothing. Anybody can see that AI can't really replace developers today (though it can certainly save you huge chunks of time in some situations). But what about in 5 years? 10 years? Things are changing rapidly and nobody knows what's going to happen.

It's entirely possible that in 5 or 10 years at least some developers will be fully replaced.

(And probably a lot of people in HR, finance, marketing, etc. too.)

SketchySeaBeast

Well, we're on year 3 of developers being out of jobs in 6 months.

Maybe in 5 or 10 years things will change, but at this point I can't see myself being replaced without some sort of paradigm shift, and that's not what the current brand of AI improvements is offering. They seem to be iterations of the same thing over and over, each generation slightly more refined, or slightly better at generating output based on its own output. So I see no reason to assume my job is in jeopardy now just because it might be at some later date.

Someone needs to tell me what exactly is going to change to cause this sudden shift in what AI can do, because right now I don't see it. It seems to have given people a licence to suggest science fiction be treated like a business plan.

IshKebab

Nothing fundamental needs to change. It just needs to get smarter and more reliable.

And don't think that because the crazy "everything will be AI in 6 months" predictions predictably haven't come to pass, it won't ever happen.

I'm old enough to remember the failure of online clothes shopping in the dot-com era. Sometimes things just take a while.

ripe

> Nothing fundamental needs to change. It just needs to get smarter and more reliable.

But these are the precise improvements that require a fundamental change to how these systems work.

So far, no one has figured out how to make AI systems achieve this. And yet, we're supposed to believe that tinkering with LLMs will get us there Real Soon Now.

SketchySeaBeast

If you're old enough to remember dot-com, you're old enough to remember when low-code and WYSIWYG were each supposedly the death knell for developers.

Sure, it not having happened yet doesn't mean it won't ever happen, but it's also no evidence that it will. When the latest apocalypse cult fails to predict the end of the world, does that make you more or less convinced the next time someone yells that it's ending? The longer this future developer apocalypse is delayed, the less credible it seems.

monknomo

I figure if it can replace devs, any job that types is pretty much at risk, and we will all be in such trouble that there is no point in planning for that scenario.

IshKebab

Coding jobs are maybe one of the easier ones to replace since there's so much public training material and it's fundamentally language based.

But yeah I think I agree. By the time my job is actually fully redundant, society is fucked anyway.

georgemcbay

I totally agree as a developer (who sometimes uses LLMs).

They can be a useful tool, but their current capabilities and (I personally believe) their ability to improve indefinitely are wildly overhyped. And the industry as a whole has blinders on, IMO, about how lumpy progress with them is, and how it cuts in both directions: every time someone introduces their grand new model and I play around with it, I find some things it is better at than the previous version and some things it is worse at. But number go up, so progress... I guess?

On one hand I can laugh this all off as yet another management fad (and to be clear, I don't think LLM usage is a fad, just the idea that this is world-changing technology rather than another tool). But what scares me most about the current AI hype isn't whether LLMs will take all of our jobs; it's the very real damage likely to be caused by the cadre of rich and now politically powerful people pushing for massive amounts of energy production to power all of this "AI".

Some of them are practically a religious cult: they believe in human-caused climate change, yet still want to drastically ramp up power production to absurd levels by any means necessary, handwaving away the obvious impact by claiming that whatever damage the ramp-up causes will be solved by the benevolent, godlike AI that comes out the other side.

Yeah, I uh don't see it working out that way. At all.

AnimalMuppet

Seems to me that, if "make the decisions that will save us" is handed over to AI-in-the-present-form, it would be somewhere between damaging and catastrophic, with or without climate damage from the power generation.

owebmaster

> It's entirely possible that in 5 or 10 years at least some developers will be fully replaced.

Some were entirely replaced already, like landing-page developers. But the number of AI/no-code developers is much bigger and growing fast, so on net no dev roles were eliminated. That's just more of the same in tech: keeping up with it.

Izkata

Landing page developers were still a thing? I thought they were replaced decades ago with FrontPage and Dreamweaver.

(only a bit /s)

bilbo0s

If you're gonna give AI a decade of improvements, then I'll go ahead and bet on a whole lot of developers being replaced, not just some.

I think you hit on something with finance as well. Give Microsoft a decade of improving AI's understanding of Excel and I'm thinking a whole lot of business-analyst types would be unnecessary. Today, in an organization of 25 or 50 thousand employees, you may have dozens to hundreds of them, depending on the industry. Ten years from now? Well, let's just say no one is gonna willingly carry hundreds of business analysts' salaries on their books while also paying for the Microsoft 365 AI license. Only the best of those analysts will remain, and not many of them.

owebmaster

> Well, let's just say no one is gonna willingly carry hundreds of business analysts salaries on their books while paying the Microsoft 365AI license anyway.

But also thousands of companies are going to be able to implement with a team of 1-10 people what before was only available to organizations of 25 or 50 thousand employees.

gherkinnn

> The most valuable skill in software isn't writing code, it's architecting systems.

I don't quite agree. I see the skill in translating the real world, with all its inconsistencies, into something a computer understands.

And this is where all the no/lo-code platforms fall apart. At some point that translation step needs to happen and most people absolutely hate it. And now you hire a dev anyway. As helpful as they may be, I haven't seen LLMs do this translation step any better.

Maybe there is a possibility that LLMs/AI can take the "moron" out of the "extremely fast moron" that computers are, in ways I haven't yet seen.
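To illustrate that translation step: even a one-sentence rule like "orders placed on a business day ship the same day" forces decisions the English never made. A sketch in Python, where every constant is an assumption invented for the example:

    from datetime import datetime, time
    from zoneinfo import ZoneInfo

    WAREHOUSE_TZ = ZoneInfo("America/New_York")  # whose "day"? (assumption)
    CUTOFF = time(15, 0)       # same-day until when? (assumption)
    HOLIDAYS = set()           # which holiday calendar? (assumption)

    def ships_same_day(placed_at: datetime) -> bool:
        # Coding the fuzzy rule pins down timezone, cutoff, and calendar.
        local = placed_at.astimezone(WAREHOUSE_TZ)
        is_business_day = local.weekday() < 5 and local.date() not in HOLIDAYS
        return is_business_day and local.time() <= CUTOFF

None of those choices appear in the original sentence; someone has to make them, and that is the job.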

cheema33

If you look at the history of programming languages, we have been moving in the direction of "natural language" over time. We started with 1s and 0s, then moved up to assembly language, which I imagine was considered a higher-level language back then.

I suspect that if current trends continue, today's higher-level languages will eventually become the lower-level languages of the not-so-distant future. It will be less important to know them, just as it is not critical to know assembly language to write a useful application today.

System architecture will remain critical.

skydhash

We have moved towards higher abstractions, not natural language. It looks like natural language because we name those abstractions after natural-language words, but their semantics can be quite different.

Building software was always about using those abstractions to solve a problem. But what clients give us are mostly wishes and wants. We turn those into a problem, then we solve that problem. It goes from "I want $this" (requirements) to "How can $this be done?" (analysis), then to "$this can be done that way" (design). We translate the last part into code. But there's still "Is $this done correctly?" (answered by testing) and "$this is no longer working" (maintenance).

So we're not moving to natural language, because the whole point of code is to ossify design. We're moving towards better representation of common design elements.
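A tiny illustration of that gap, assuming Python (the names come from English; the semantics don't):

    # "or" in English asks a yes/no question; in Python it's an expression
    # that returns one of its operands:
    name = "" or "anonymous"    # evaluates to "anonymous", not True/False

    # "all of nothing" is a shrug in English; in code it is, by definition,
    # True (vacuous truth):
    print(all([]))              # prints True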