
What could have been

98 comments · August 18, 2025

pixl97

>There isn’t a single day where I don’t have to deal with software that’s broken but no one cares to fix

Since when does this have anything to do with AI? Commercial/enterprise software has always been this way. If it's not going to cost the company in some measurable way, issues can get ignored for years. This kind of stuff was occurring before the internet existed. It boomed with the massive growth of personal computers. It continues today.

GenAI has almost nothing to do with it.

hoytie

I think the point the author is trying to make is that there are many problems in plain sight we could be spending our efforts on, and instead we are chasing illusory profits by putting so many resources into developing AI features. AI is not the source of the issues, but rather a distraction of great magnitude.

socalgal2

> Commercial/enterprise software has always been this way

All software is this way. The only way something gets fixed is if someone decides it's a priority to fix it over all the other things they could be doing. Plenty of open source projects have tons of issues. In both commercial and open source software, they don't get fixed because the stack of things to do is larger than the amount of time there is to do them.

IcyWindows

It's worth pointing out that the "priority" in both open source and closed source isn't just "business priority".

Things that are easy, fun, or "cool" are done before other things no matter what kind of software it is.

acdha

Thought exercise: has any of the money Apple has spent integrating AI features produced as much customer goodwill as fixing iOS text entry would? One reason for paying attention to quality is that if you don't, it tarnishes your brand over time and makes it easier for competitors to start cutting into your core business.

dougdonohoe

The point is that money that is going into GenAI or adding GenAI-related features to software should be going to fix existing broken software.

pixl97

Then you missed the point of my post: that money never would have. It went back into the hands of investors, the same investors who are now putting money into GenAI.

ipsin

I wonder about the world where, instead of investing in AI, everyone invested in API.

Like, surfacing APIs, fostering interoperability... I don't want an AI agent, but I might be interested in an agent operating with fixed rules, and with a limited set of capabilities.

Instead we're trying to train systems to move a mouse in a browser and praying they don't accidentally send 60 pairs of shoes to a random address in Topeka.
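
To sketch what I mean by fixed rules (all names hypothetical, Python just for illustration): an agent that can only invoke an explicit allowlist of capabilities and rejects everything else, rather than improvising.

    # Hypothetical sketch: an "agent" restricted to an explicit allowlist
    # of capabilities. All names here are made up for illustration.
    ALLOWED_ACTIONS = {
        "get_order_status": lambda order_id: f"status of {order_id}: shipped",  # stub
        "cancel_order": lambda order_id: f"{order_id} cancelled",               # stub
    }

    def run_agent(action: str, **kwargs):
        # Fixed rules: anything outside the allowlist is rejected outright,
        # never improvised.
        if action not in ALLOWED_ACTIONS:
            raise PermissionError(f"action {action!r} is not permitted")
        return ALLOWED_ACTIONS[action](**kwargs)

    print(run_agent("get_order_status", order_id="A123"))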

simonw

LLMs offer the single biggest advance in interoperability I've ever seen.

We don't need to figure out the one true perfect design for standardized APIs for a given domain any more.

Instead, we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
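
A rough sketch of the workflow I mean, with invented endpoint docs (the OpenAI Python SDK call below is just one way to drive it):

    # Rough sketch: hand an LLM two services' docs and ask it for glue code.
    # The endpoint docs are invented; the SDK usage is one real option.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    weather_docs = 'GET /v1/forecast?city=<name> -> {"temp_c": float}'
    alerts_docs = 'POST /v1/alerts with JSON body {"message": str}'

    prompt = (
        "Write a Python function, using the requests library, that fetches a "
        f"forecast from this API:\n{weather_docs}\n"
        f"and posts an alert to this one when temp_c > 35:\n{alerts_docs}"
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)  # review it, then check it in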

roxolotl

The problem with LLMs as interoperability is that they work less than 100% of the time. Yes, they help, but the point of the article is: what if we spent $100 billion on APIs? We absolutely could build something far more interoperable, and 100% accurate.

I think about code generation in this space a lot because I've been writing Gleam. The LSP code actions are incredible. There's no "oh sorry, I meant to do it the other way" like you get with LLMs, because everything is strongly typed. What if we spent $100 billion on a programming language?

We've now spent many hundreds of billions on tools which are powerful, but we've also chosen to ignore many other ways to spend that money.

simonw

If you gave me $100 billion to spend on API interoperability, knowing what I know today, I would spend that money inventing LLMs.

PaulDavisThe1st

As if the challenges in writing software are how to hook APIs together.

I get that in the webdev space, that is true to a much greater degree than in the past. But it's still not really the central problem there, and it's almost peripheral when it comes to desktop/native/embedded.

eastbound

Today I compiled Javadocs for a few thousand classes in 0.978 seconds. I was impressed: with our build taking over 2 minutes, it feels like each byte of code we write takes a second to execute, yet computing is actually lightning fast; it's only slow when the software is awfully written.

Time to execute bytecode << REST API call << launching a full JVM for each file you want to compile << launching an LLM to call an API (each << is more than 10x).
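
Put assumed round numbers on that chain (rough orders of magnitude, not measurements) and every step clears the 10x bar:

    # Order-of-magnitude latencies; assumed round numbers, not measurements.
    bytecode_op = 1e-9   # ~1 ns per JIT-compiled bytecode operation
    rest_call = 20e-3    # ~20 ms per REST round trip
    jvm_launch = 0.5     # ~half a second to cold-start a JVM
    llm_call = 10.0      # ~10 s for an LLM round trip

    chain = [bytecode_op, rest_call, jvm_launch, llm_call]
    for faster, slower in zip(chain, chain[1:]):
        assert slower / faster > 10  # each "<<" step is more than 10x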

kylemaxwell

The point is that you call the LLM to generate the code that lets you talk to the API, rather than writing that glue code yourself. Not that you call the LLM to talk to that API every time.
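
So the generated artifact might look like this (made-up endpoints, but note there is no LLM anywhere in the runtime path):

    # Hypothetical example of LLM-generated glue code: once reviewed and
    # checked in, it runs deterministically with no LLM in the loop.
    # The endpoints are invented for illustration.
    import requests

    def forward_hot_weather_alert(city: str) -> None:
        forecast = requests.get(
            "https://weather.example.com/v1/forecast",
            params={"city": city},
            timeout=10,
        ).json()
        if forecast["temp_c"] > 35:
            requests.post(
                "https://alerts.example.com/v1/alerts",
                json={"message": f"Heat warning for {city}"},
                timeout=10,
            ).raise_for_status()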

Gigachad

Basically the opposite has happened. Nearly every API has either been removed or restricted, and every company is investing a lot of resources in making their platforms impossible to automate, even with browser automation tools.

It's a mix of open platforms facing immense abuse from bad actors, and companies realising their platform has more value closed. Reddit, for example, doesn't want you scraping their site to train AIs when they could sell you that data. And they certainly don't want bots spamming up the platform when they could sell you ad space.

medhir

I feel like it’s not technically difficult to achieve this outcome… but the incentives just aren’t there to make this interoperable dream a reality.

Like, we already had a perfectly reasonable decentralized protocol with the internet itself. But ultimately businesses with a profit motive made it such that the internet became a handful of giant silos, none of which play nice with each other.

tyre

We work with American health insurance companies, and their portals are the only API you're going to get. They have negative incentive to build a true API.

LLMs are 10x better than the existing state of the art (scraping with hardcoded selectors). LLMs making voice calls are at least that much better than the existing state of the art (humans sitting on hold).

The beauty of LLMs is that they can (can! not perfectly!) turn something without an API into one.

I’m 100% with you that an API would be better. But they’re not going to make one.

1oooqooq

The appeal of AI to investors is precisely that it's anti-API, anti-access.

wavemode

> What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?

This is a bit like the question "what if we spent our time developing technology to help people rather than developing weapons for war?"

The answer is that the only reason you were able to get so many people working on the same thing at once was the pressing need at hand (that "need" could be real or merely perceived). Without that, everyone would have their own ideas about which projects are the best use of their time, and would be progressing in much smaller steps in a bunch of different directions.

To put it another way: instead of building the Great Pyramids, those thousands of workers (likely slaves) could have each spent that time building homes for their families. But those homes wouldn't still be around and remembered millennia later.

bji9jhff

> instead of building the Great Pyramids, those thousands of workers (likely slaves) could have each spent that time building homes for their families. But those homes wouldn't still be around and remembered millennia later.

They would have been better off. Those pyramids are the epitome of a white elephant.

nine_k

Consider the income from tourists coming to see the pyramids. People traveled to Giza for this purpose for millennia, too.

This was not the original intent of the construction though.

stavros

> But those homes wouldn't still be around and remembered millennia later.

Yes, but they'd have homes. Who's to say if a massive monument is better than ten thousand happy families?

wavemode

> Who's to say if a massive monument is better than ten thousand happy families?

It's not. The pyramids have never been of any use to anyone (except as a tourist attraction).

I'm referring merely to the magnitude of the project, not to whether it was good for mankind.

zoeey

I've been using an app recently that added a bunch of AI features, but the basic search is still slow and often doesn't work. Every time I open it, I brace myself a little, and it still disappoints me.

It feels like more and more products are focused on looking impressive, when all I really want is for the everyday features to just work well.

Glyptodon

More or less, you could say similar things about most of the crypto space too. I think maybe it's because we're at the point where tech is more than capable of doing a lot of valuable things, but they're just not easy to do out of a dorm room and without a lot of domain knowledge.

kjkjadksj

There is still so much one can build and do in a dorm room. The hardest part is what's hardest in every business: getting enough money for enough runway for things to become self-sufficient.

nine_k

Humans are fundamentally irrational. Not devoid of rationality, but not limited by it. Many social phenomena are downstream from that fact.

Humans have fashions. If something is considered cool, many people start doing that thing, because it automatically gives them a bit of appreciation from most other people. It is often rational to follow a fashion and reap the social benefits it brings.

People are bad at estimating probabilities. They heavily discount the future, and want everything now, hence FOMO. At the same time, they tend to believe in glowing future prospects uncritically, because it helps build social cohesion and power structures.

This is why fads periodically wash over our industry, our society, and the whole civilization. And again, it becomes rational to follow the trend and ride the wave. Say the magic word (OOP, XML, Agile, Social, Mobile, Cloud, SaaS, ML, more to come), and it becomes easier to get a job, press coverage, conference invites, investments.

Then the bubble deflates, the useful parts remain (often quite a bit), and the fascination, hype, attention, and FOMO find a new worthy object.

So companies add "AI features" partly because it's cool (news coverage, promotions), partly because of FOMO (uncertainty is high, but what if we'd be missing a billion-dollar opportunity?), and partly because of social cohesion (following fashion is natural; being a contrarian may be respectable, but looking ignorant is unacceptable). It's not about carefully calculated material returns on a carefully measured investment. It may look inane, but it's not always stupidity, much like sacrificing some far-future prospects in exchange for stock growth this quarter is not about stupidity.

epistasis

While I'm somewhat sympathetic to this view, there's another angle here too: the largesse of investment in a vague idea means that lots of other ideas get funding incidentally.

Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM; in reality early traction is all about solving that annoying and stupid problem your customers hate doing but that you can do for them. The disconnect between the extraordinary pitch and the mundane shipped solution is the core of so much business.

That same disconnect also means that a lot of real and good problems will be solved with money that was meant for AGI but ends up developing other, good technology.

My biggest fear is that we are not investing in the basic, atoms-based tech we need in the US to avoid being left behind in the cheap-energy future: batteries, solar, and wind are being gutted right now due to chaotic government behavior, the actions of madmen incapable of understanding the economy today, much less where tech will take it in 5-10 years. We are also underinvesting in basics like housing and construction tech. Hopefully some of the AI money goes toward fixing those gaping holes in the country's capital allocation.

nicoburns

It would be much better if we invested in meaningful things directly. So much time and effort is being put into making things AI shaped for investors.

The elephant in the room is that capital would likely be better directed if it was less concentrated.

gorpy7

It's peculiar, because I love to use ChatGPT to fill my knowledge gaps as I work through solutions to building and energy problems I want to solve. I wonder how many people are doing something similar, and although I haven't read through all the comments, I doubt much is being said about, let alone credence given to, that simple but potentially profound idea: learning, amplified.

thaumasiotes

> Every VC pitch is about some ground-breaking tech or unassailable moat that will be built around a massive SAM

A surface-to-air missile?

As funny as that would be, maybe you should define your terms before you try to use them.

busterarm

This shouldn't really need to be a venue for explaining Business 101... This board is about startups, and these terms are the raison d'être for startups. But here you go:

TAM (Total Available Market): the total market demand for a product or service.

SAM (Serviceable Available Market): the segment of the TAM targeted by your products and services that is within your geographical reach.

SOM (Serviceable Obtainable Market): the portion of the SAM that you can capture.

Philpax

Hacker News may be hosted as part of the Y Combinator website, but as the name suggests, the primary audience is hackers, not entrepreneurs. Your answer is good, but could have done without the condescension.

MyOutfitIsVague

Been here for years (across many different accounts), and this is the first time I've heard of these terms. I am here for programming content, not business.

_carbyau_

Your definitions provided immediate clarity. Thank you!

user3939382

I've been watching this my whole life: UML, SOA, Mongo, cloud, blockchain, now LLMs, and probably 10 others in between. When tools are new, there's a collective mania among VCs, execs, and engineers that this tool, unlike literally every other one, doesn't have trade-offs that make it an appropriate choice only in some situations. Sometimes the trade-offs aren't discoverable in the nascent stage, and a lot of it is monkey-see-monkey-do, which is the case even today with React and cloud as default, IMHO. LLMs are great, but they're just a tool.

wnc3141

you forgot IoT

Gigachad

IoT wasn't exactly a waste of money. If anything, the problem was that companies didn't spend enough to do it properly or securely. People genuinely do want their security cameras online, with an app they can view away from home. It just needs to be done securely and privately.

_carbyau_

I want a WireGuard-like solution, preferably with an open source Home Assistant plugin, rather than yet another subscriber lock-in on company servers.

Investors want otherwise.

ern

I have 4 cameras, a home security system, a remotely monitored smoke detector, a smart plug, 4 leak sensors, smart bulbs, a car whose location and state of charge I can track remotely, a smart garage door opener, a smart doorbell, and 7 smart speakers.

I think IoT was more than just hype.

tokioyoyo

The big difference is that LLMs are as big as social media and Google in pop culture, but with a promise of automation and job replacement. My 70-year-old parents use it every day for work and general stuff (generally understanding the limitations), and they're not even that tech savvy.

user3939382

We haven't mapped the hard limitations of LLMs yet, but they're energy-bound like everything else. Their context capacity is a fraction of a human's. What they'll replace isn't known yet. Probabilistic answers are unacceptable in many domains. They're going to remain amazingly helpful for a certain class of tasks, but marketing is way ahead of the engineering, again.

justonceokay

Wait until the kids find out about LAMP

jh00ker

>What could have been if instead of spending so much energy and resources on developing “AI features” we focused on making our existing technology better?

I think we'd still be talking about Web 3.0 DeFi.

cadamsdotcom

Greenfield development is infinitely easier than brownfield.

It's damn hard work to dig in, uncover what's wrong, and fix something broken - especially if someone's workflow depends on the breakage.

Flashy AI features get attention, and even if they piss you off, they make you believe the thing is fresh. Sorry, but you're human.

aidenn0

TFA kind of assumes that the companies involved would have improved their software in a world in which those resources weren't spent on AI. Since much software contained long-unfixed bugs well before the GenAI boom, I'm not convinced.

simonw

This raises an interesting question.

The amount of money that's been spent on AI related investments over the past 2-5 years really has been astonishing - like single digit percentage points of GDP astonishing.

I think it's clear to me that there are productivity boosts to be had from applying this technology to fields like programming. If you completely disagree with that statement, I have a hunch that nothing could convince you otherwise at this point.

But at what point could those productivity boosts offset the overall spend? (If we assume we don't get to some weird AGI that upturns all forms of economics.)

Two points of comparison. Open source has been credibly estimated to have provided over 8 trillion dollars of value to the global economy over the past few decades: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148 - could AI-assisted programming provide a similar boost over the next decade or so?

The buildout of railways in the 1800s took resources comparable to the AI buildout of the past few years and lost a lot of investors a lot of money, but it is regarded as a huge economic boost despite those losses.

nicoburns

Green energy, including generation but also storage, transmission, EV chargers, smart grid technology, etc., would be the obvious thing to invest in that I would expect to have a much higher payoff.

busterarm

Are they really? Is this one of those "just because people say so" beliefs?

The countries adopting these the most are declining economies. It's places that are looking for something to do after there's no more oil left to drill up and export.

You know where fossil fuel use is booming? Emerging (i.e., growing) economies. Future scarcity of such resources will only make them more valuable and more profitable.

Yes, this is a dim view on the world, but until those alternatives are fundamentally more attractive than petrochemicals, these efforts will always be charity/subsidy.

If you're expecting that to be the area of strong and safe returns on investment, I've got some dodgy property to sell you.

nicoburns

Isn't it China that's adopting renewables more rapidly than anywhere else, and it also has a booming economy? Although they're also investing in non-renewable energy sources.

My understanding is that "green" investment portfolios, which were intended as "ethics over return on investment", have actually outperformed petrochemical stocks for years now, and it's more ideology than economics that's preventing further investment (hence why you see so much renewable energy in Texas, which is famously money-driven).

1oooqooq

Railways only lost investor money because everyone was investing to become the national monopoly, so when we did get the monopoly, everyone else lost everything. Sounds like a skill problem. Plenty of value was created and remained in use for decades, completely different from the slop of today.

windexh8er

Not to mention that rail only got better as more was built out. With LLMs, the more you allow them to create, to scrape, and to replace deterministic platforms that can do the same thing better and faster, the further down the rabbit hole we all go.

I look around, and the only people shilling for AI seem to be selling it. There are also those in a bubble, and that's all they hear day in and day out. We keep hearing how far the 'intelligence' of these models has come (models aren't intelligent). There are some low-hanging-fruit edge cases, but again, just today I spent an extra hour thinking I could shortcut a PoC by having LLMs bang out the framework. I leveraged the latest versions of Opus, Kimi, GLM, and Grok. For a very specific ask (it happened to be building a quick testing setup for PaddleOCR), none of them got it right. Even when asking for very specific aspects of the solution I had in mind, Opus was off the rails and "optimizing" within a turn or two.
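
For context, the kind of quick harness I was after is only a few lines by hand; a rough sketch, assuming the paddleocr 2.x Python API and a placeholder image path:

    # Rough sketch of a minimal PaddleOCR test harness, assuming the 2.x API.
    # "sample.png" is a placeholder image path.
    from paddleocr import PaddleOCR

    ocr = PaddleOCR(use_angle_cls=True, lang="en")  # downloads models on first run
    result = ocr.ocr("sample.png", cls=True)

    # result holds one list per input image; each line is (box, (text, confidence))
    for box, (text, confidence) in result[0]:
        print(f"{confidence:.2f}  {text}")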

I probably ended up using about 20% of the structure it gave me, but I could have easily gone back to another project I've done where that framework actually had more thought put into it.

I really wish the state of the art were better. I don't use LLMs much for searching, as I believe it's a waste of resources. But the polarization from the spin pieces by C-levels, on top of the poor performance of general models on very specific asks, looks nothing like the age of rail.

Do I believe there are good use cases for small, targeted models built on rich training data? I do. But that's not the look and feel of most of what we're seeing out there today. The bulk of it is prompt engineering on top of general models. And the AI slop from the frontier players is so recognizable and overused now that I can't believe anyone still isn't looking at any of this and immediately second-guessing its validity. And these aren't hallucinations we're seeing, because these LLMs are not intelligent: they lack cognition; they are not truly thinking or reasoning.

Again, if LLMs were capable of mass replacement of workers today, OpenAI wouldn't be selling anyone a $20/month subscription, or even a $200 one. They'd be selling directly to those C-levels the utopia of white-collar replacement that doesn't exist today.