America's top companies keep talking about AI – but can't explain the upsides

rebeccaskinner

Looking at my own use of AI, and at how I see other engineers use it, it often feels like two steps forward and two steps back, and overall not a lot of real progress yet.

I see people using agents to develop features, but the amount of time they spend actually making the agent do the work usually outweighs the time they'd have spent just building the feature themselves. I see people vibe coding their way to working features, but when the LLM gets stuck, it takes even a good developer long enough to notice and re-engage their critical thinking that the delay can wipe out the time savings. Having an LLM do code and documentation review seems to usually be a net positive for quality, but that's hard to sell as a benefit, and most people seem to feel that using the LLM only for reviews means they aren't using it enough.

Even for engineers there are a lot of non-engineering benefits in companies that use LLMs heavily for things like searching email, ticketing systems, documentation sources, corporate policies, etc. A lot of that could have been done with traditional search methods if different systems had provided better standardized methods of indexing and searching data, but they never did and now LLMs are the best way to plug an interoperability gap that had been a huge problem for a long time.

My guess is that, like a lot of other technology driven transformations in how work gets done, AI is going to be a big win in the long term, but the win is going to come on gradually, take ongoing investment, and ultimately be the cumulative result of a lot of small improvements in efficiency across a huge number of processes rather than a single big win.

ernst_klim

> the amount of time they spend to actually make the agent do the work usually outweighs the time they’d have spent just building the feature themselves

Exactly my experience. I feel like LLMs have potential as Expert Systems/Smart websearch, but not as a generative tool, neither for code nor for text.

You spend more time understanding stuff than writing code, and you need to understand what you commit, with or without an LLM. But writing code is easier than reviewing it, and understanding by doing is easier than understanding by reviewing (because you get one particular thing at a time and don't have to understand the whole picture at once). So I have a feeling that agents may even have a negative impact.

jraby3

As a small business owner in a non tech business (60 employees, $40M revenue), AI is definitely worth $20/month but not as I anticipated.

I thought we'd use it to reduce our graphics department but instead we've begun outsourcing designers to Colombia.

What I actually use it for is to save time and legal costs. For example a client in bankruptcy owes us $20k. Not worth hiring an attorney to walk us through bankruptcy filings. But can easily ask ChatGPT to summarize legal notices and advise us what to do next as a creditor.

flohofwoe

Which summarizes the one useful property of LLMs: a slightly better search engine which, on top of that, doesn't populate the first 5 result pages with advertisements - yet, anyway ;)

CyberMacGyver

Our new CTO was remarking that our engineering teams' AI spend is too low. I believe we have already committed a lot of money but are only using 5% of the subscription.

This is likely why there is a lot of push from the top. They have already committed the money now having to justify it.

vjvjvjvjghv

Wish my company did this. I would love to learn more about AI, but the company is too cheap to buy subscriptions.

foogazi

Can you buy a subscription and see if it benefits you ?

hn_throwaway_99

> They have already committed the money now having to justify it.

As someone who has been in senior engineering management, it's helpful to understand the real reason, and this is definitely not it.

First, these AI subscriptions are usually month-to-month, and these days with the AI landscape changing so quickly, most companies would be reluctant to lock in a longer term even if there were a discount. So it's probably not hard to quickly cancel AI spend for SaaS products.

Second, the vast majority of companies understand sunk cost fallacy. If they truly believed AI wouldn't be a net benefit, they wouldn't force people to use it just because they already paid for it. Salaries for engineers are a hell of a lot more than their AI costs.

The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

I'm not at all saying you have to buy into this "FOMO rationale", but just saying "they already paid the money so that's why they want us to use it" feels like a bad excuse and just broadcasts a lack of understanding of how the vast majority of businesses work.

empiko

Agreed. I think that many companies force people to use AI in hopes that somebody will stumble upon a killer use case. They don't want competitors to get there first.

nelox

> The main reason for the push from the top is probably because they believe companies that don't adopt AI strategies now and ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage. Note they may even believe that today's AI systems may not be much of a net benefit, but they probably see the state of the art advancing quickly so that companies who take a wait-and-see approach will be late to the game when AI is a substantial productivity enhancer.

Yes, this is the correct answer.

watwut

Companies do not necessarily understand sunk cost fallacy.

> ensure their programmers are familiar with AI toolsets will be at a competitive disadvantage

But more importantly, this is completely inconsistent with how banks approach any other programming tool or how they approach lifelong learning. They are 100% comfortable with people not learning on the job in just about any other situation.

dijit

yeah, I’ve been in so many companies where “sweetheart deals” force the use of some really shitty tech.

Both when the money has been actually committed and when it’s usage based.

I have found that companies are rarely rational and will not “leave money on the table”

ajcp

> this is definitely not it.

> is probably because

I don't mean to be contrary, but these statements stand in opposition, so I'm not sure why you are so confidently weighing in on this.

Also, while I'm sure you've "been in senior engineering management", it doesn't seem like you've been in an organization that doesn't do engineering as its product offering. I think this article is addressing the 99% of companies that have some number of engineers but do not do engineering. That is to say: "My company does shoes. My senior leadership knows how to do shoes. I don't care about my engineering prowess, we do shoes. If someone says I can spend less on the thing that isn't my business (engineering), then yes, I want to do that."

hn_throwaway_99

>> this is definitely not it.

>> is probably because

> I don't mean to be contrary, but these statements stand in opposition

No, they don't. It's perfectly consistent to say one reason is certainly wrong without saying another much more likely reason is definitely right.

sschnei8

Do you have any data to backup the claim: “vast majority of companies understand suck cost fallacy.”

I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.

viccis

>I’m assuming you meant “sunk” not “suck”. Not familiar with the suck fallacy.

There was no need to post this.

nelox

The claim that big US companies "cannot explain the upsides" of AI is misleading. Large firms are cautious in regulatory filings because they must disclose risks, not hype. SEC rules force them to emphasise legal and security issues, so those filings naturally look defensive. Earnings calls, on the other hand, are overwhelmingly positive about AI. The suggestion that companies only adopt AI out of fear of missing out ignores the concrete examples already in place. Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoRan in mineral extraction. These are significant operational changes.

It is also wrong to frame limited stock outperformance as proof that AI has no benefit. Stock prices reflect broader market conditions, not just adoption of a single technology. Early deployments rarely transform earnings instantly. The internet looked commercially underwhelming in the mid-1990s too, before business models matured.

The article confuses the immaturity of current generative AI pilots with the broader potential of applied AI. Failures of workplace pilots usually result from integration challenges, not because the technology lacks value. The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

julkali

The issue is that the examples you listed mostly rely on very specific machine learning tools (which are very much relevant and a good use of this tech), while the term "AI" in layman's terms is usually synonymous with LLMs.

Mentioning the mid-1990s internet boom is somewhat ironic IMO, given what happened next. The question is whether "business models mature" with or without a market crash, given that the vast majority of ML money is provided for LLM efforts.

comp_throw7

(You're responding to an LLM-generated comment, btw.)

Frieren

> Huntington Ingalls is using AI in battlefield decision tools, Zoetis in veterinary diagnostics, Caterpillar in energy systems, and Freeport-McMoRan in mineral extraction.

But most AI push is for LLMs, and all the companies you talk about seem to be using other types of AI.

> Failures of workplace pilots usually result from integration challenges, not because the technology lacks value.

Bold claim. Toxic positivity seems to be too common among AI evangelists.

> The fact that 374 S&P 500 companies are openly discussing it shows the opposite of “no clear upside” — it shows wide strategic interest.

If the financial crisis taught me anything, it is that if one company jumps off a bridge, the rest will follow. Assuming that there must be some real value because capitalism works misses the main proposition of capitalism: companies will make stupid decisions and pay the price for it.

discordance

This comes to mind: "MIT Media Lab/Project NANDA released a new report that found that 95% of investments in gen AI have produced zero returns" [0]

Enterprise is way too cozy with the big cloud providers, who bought into it and sold it on so heavily.

0: https://fortune.com/2025/08/18/mit-report-95-percent-generat...

matwood

I wonder if people ever read what they link.

> The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.

The 95% isn't a knock on the AI tools, but on the fact that enterprises are bad at integration. Large enterprises being bad at integration is a story as old as time. IMO, reading beyond the headline, the report highlights the value of today's AI tools, because they are leading enterprises to try to integrate faster than they normally would.

"AI tools found to be useful, but integration is hard like always" is a headline that would have gotten zero press.

bawolff

If the theory is that 1% will be unicorns that make you a trillionaire, I think investors would be OK with that.

The real question is whether those unicorns exist or whether it's all worthless.

orionblastar

Have to pay the power bill for the data centers for GenAI. It might not be profitable.

thenaturalist

Fun fact: the report was/is so controversial that the link to the NANDA paper in the Fortune article has been put behind a Google Form you now need to complete before you can access it.

losteric

Doubt the form has anything to do with how "controversial" it is. NANDA is using the paper's popularity to collect marketing data.

vjvjvjvjghv

This reminds me of the internet in 2000. Lots of companies were doing .COM stuff, but many didn't understand what they were doing or why they were doing it. In the end, though, the internet was a huge game changer. I see the same with AI. A lot of money will be wasted, but in the end AI will be a huge transformation.

01100011

AI isn't about what you are able to do with it. AI is about the fear of what your competitors can do with it.

I said a couple years ago that the big companies would have trouble monetizing it, but they'd still be forced to spend for fear of becoming obsolete.

firefoxd

For most companies AI is a subscription service you sign up for. Because of great marketing campaigns, it has become a necessary tax. If you don't pay, the penalty is you lose value because it doesn't look like you are embracing the future. If you pay, well it's just a tax that you hope your employees will somehow benefit from.

sydbarrett74

AI provides cover to lay people off, or else commit constructive dismissal.

lotsofpulp

Constructive dismissal and layoffs are mutually exclusive.

https://en.wikipedia.org/wiki/Constructive_dismissal

>In employment law, constructive dismissal occurs when an employee resigns due to the employer creating a hostile work environment.

No employee is resigning when an employer tells the employee they are terminated due to AI replacing them.

lmm

> No employee is resigning when an employer tells the employee they are terminated due to AI replacing them.

No, but some are resigning when they're told their bonus is being cut because they didn't use enough AI.

heavyset_go

AI is what you tell the board/investors is the reason for layoffs and attrition.

Layoffs and attrition happen for reasons that are not positive, AI provides a positive spin.

TheDong

Using AI makes me want to resign from life, it removes all the fun and joy from coding.

I absolutely will resign if my job becomes 100% generating and reviewing AI generated slop, having to review my coworker's AI slop has already made my job way less fun.

zippyman55

Agreed! The people who did not work hard but were kept employed doing "bullshit work" are being removed.

bravetraveler

Eh, I have plenty of "bullshit work". Only that, actually, for the foreseeable future.

Building clusters six servers at a time... that last the order of weeks, appeasing "stakeholders" that are closer to steaks.

Whole lot of empty movement and minds behind these 'investments'. FTE that amounts to contracted, disposed, labor to support The Hype.

apexalpha

Computers being able to digest vision, audio and other input into text and back has tremendous value.

You can’t convince me otherwise, we just haven’t found a ‘killer app’ yet.

wg0

Sounds like blockchain all over again. Reminds me of an essay by two product managers at AWS who talked to clients all over the US and couldn't get any business to clearly articulate why they needed blockchain.

Note: AWS has a hosted blockchain that you can use. [1]

PS: If anyone has read that essay, please do share the link. I can't locate it, but it's a wonderful read.

[1]. https://aws.amazon.com/managed-blockchain/

cmckn

wg0

That's exactly it! Thank you!