
You can’t build a moat with AI (redux)

102 comments · February 20, 2025

gnabgib

Related (40 points, 10 months ago, 45 comments) https://news.ycombinator.com/item?id=40005775

light_triad

AI enables new tools & features but in itself is not a product.

There's a good essay from Andrew Chen on this topic: Revenge of the GPT Wrappers: Defensibility in a world of commoditized AI models

"Network effects are what defended consumer products, in particular, but we will also see moats develop from the same places they came from the past decades: B2B-specific moats (workflow, compliance, security, etc), brand/UX, growth/distribution advantages, proprietary data, etc etc." [1]

Also check out the podcast with the team at Cursor/Anysphere for details into how they integrate models into workflows [2]

[1] https://andrewchen.substack.com/p/revenge-of-the-gpt-wrapper...

[2] https://www.youtube.com/watch?v=oFfVt3S51T4&t=1398s

sgt101

Yeah - these are not moats though.

Moats are the logistics network that Amazon has... OK, spend $10bn over 5 years and then come at me - and that assumes I sit still...

Moats are what Google has in advertising... ok, pull 3% of the market for more money than god and see if it works..

brand/ux is not a moat, it's table stakes.

light_triad

Agreed UX can be easily copied, but brands are a moat for a number of (granted, psychological) reasons:

1. Status symbols - my Lambo signifies that my disposable income is greater than your disposable income

2. Fan clubs - I buy Nikes because they do a better job at promoting great athleticism, and an iPhone to pay double for hardware from 3 years ago

3. Visibility bias - As a late adopter I use whatever the category leader is (e.g. ChatGPT = AI, Facebook = the Internet)

What you describe sounds more like market power resulting from a monopoly

chrisin2d

I think that UX cannot always be easily copied.

Technology enables UX. When the underlying technology is commodity—which is often the case—it's easy for competitors to copy the UX. But sometimes UX arises from the tight marriage of design and proprietary technology.

Good UX also arises from good organization design and culture, which aren't easy to copy. Think about a good customer support experience where the first agent you talk with is empowered to solve your issue on the spot, or there's perfect handoff between agents where each one has full context of your customer issue so you don't have to repeat yourself.

carlmr

>brand/ux is not a moat, it's table stakes.

Except for the technical advantage of M-series Macs, that's basically all of Apple's moat. Apple's brand and UX are what sell the hardware.

They make the UX depend on the number of Apple devices you have, so a little bit of network effect. But that's mostly still UX.

trash_cat

One could argue that the "non-moats" together can accumulate into something considerable, making a moat. A brand is definitely a moat, but one in the minds of consumers. This is not something you can overcome easily, even if your product is superior.

redeux

Kleenex is probably a good example of this. Tissues are a commodity but nevertheless people will still pay more for Kleenex branded tissues. That feels like a moat to me.

osigurdson

>> Moats are what Google has in advertising

Lots of people these days just use ChatGPT to search the web. I've actually never understood Google search ads, as I have never clicked on one, even by accident, in 10+ years. If I want to buy something I search within Amazon for it.

YouTube however, yeah, that is a stellar advertising platform.

genewitch

Unless you adblock. But I concede that if you don't block, you do get advertised to in an amount that I would describe as "astronomical", so there's even a parallel there.

mtkd

An enterprise using RAG, fine-tuning, etc. to leverage its data, and rethinking how RL, vector DBs, etc. can improve existing ops, is likely going to make some existing moats much better moats

If your visibility on current state of AI is limited to hallucinogenic LLM prompts -- it's worth digging a bit deeper, there is a lot going on right now
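The RAG pattern described above can be sketched in a few lines: retrieve the internal documents most relevant to a query, then stuff them into the prompt. This toy version uses bag-of-words cosine similarity in place of a real embedding model and vector DB, and the documents and query are made up purely for illustration:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts. A real system would
    # use a learned embedding model and store vectors in a vector DB.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # The retrieved context is prepended so the LLM answers from
    # proprietary data rather than from its training set alone.
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The warehouse ships orders every weekday.",
    "Support tickets are answered within 24 hours.",
]
print(build_prompt("what is the refund policy", docs))
```

The moat in this setup is the document collection, not the model: any competitor can call the same LLM, but not with your data in the context window.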

ptx

What specifically is going on right now in AI that's not based on "hallucinogenic LLM prompts"?

rv3392

ML is still a thing. I believe that most AI research is still non-LLM, ML-related work - things like CNNs + computer vision, RL, etc. In my opinion, the hype around LLMs has a lot to do with their accessibility to the general public, compared to existing ML techniques, which are highly specialised.

flessner

To be fair, I remember that some 5 years ago a lot of ML was quite accessible to programmers, as it was often just a couple of lines of Python using TensorFlow or, later, PyTorch.

I am almost in disbelief that LLMs are the thing that reached the "tipping point" for most companies to magically care about ML. The number of products that could have been built properly 5 years ago, but exist now in a slower form because of "reasoning" LLMs, is likely astonishing.
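To illustrate how small such models can be, here is a complete classifier (logistic regression trained by gradient descent) in plain Python with no framework at all; the AND dataset is just a toy stand-in for the kind of task that didn't need an LLM:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.5, epochs=2000, seed=0):
    # Stochastic gradient descent on the log-loss of a
    # two-input logistic regression model.
    random.seed(seed)
    w = [random.uniform(-0.1, 0.1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5

# Learn the AND function from four labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])
```

TensorFlow and PyTorch reduce exactly this loop (model, loss, gradient step) to a few library calls, which is the accessibility the comment is pointing at.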

falcor84

Figure.ai's Helix: A Vision-Language-Action Model for Generalist Humanoid Control

https://news.ycombinator.com/item?id=43115079

milesrout

Convolutional neural networks for image recognition and more generally image processing. They are much better than they were a few years ago, when they were all the rage, but the hype has disappeared. These systems improve the performance of radiologists at detecting clinically significant cancers. They can be used to detect invasive predators or endangered native wildlife using cameras in the bush, in order to monitor populations, allocate resources for trapping of pests, etc.

ML generally is for pattern recognition in data. That includes anomaly detection in financial data, for example. It is used in fraud detection.

Image ML/AI is used in your phone's facial recognition, in various image filtering and analysis algorithms in your phone's camera to improve picture quality or allow you to edit images to make them look better (to your taste, anyway).

AI image recognition is used to find missing children by analysing child pornography without requiring human reviewers to trawl through it - they can just check the much smaller set of flagged images.

AI can be used to generate captions on videos for the deaf or in text to speech for the blind.

There are tons of uses of AI/ML. Another example: video game AI. Video game upscaling. Chess and Go AI: NNUE makes Chess AI far stronger and in really cool creative ways which have changed high level chess and made it less drawish.

fijiaarone

Bad image generation

jrflowers

Chat bots that tell you to kill yourself

janalsncm

Well “AI” is a lot more than just generic text generators. ML (read: AI that makes money) is the bread and butter of all of the largest internet companies. There’s no LLM that can accurately predict user behavior.

And even if there was, the fast follower to the Bitter Lesson is the Lottery Ticket Hypothesis: if you build a huge, general model to perform my task, I can quickly distill and fine tune it to be better, cheaper, and faster.
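The distill-and-fine-tune step mentioned above can be sketched with temperature-softened distillation: the student is trained to match the teacher's softened output distribution rather than hard labels. A minimal illustration follows; the logits and temperature are made-up values, and a real setup would typically also mix in a hard-label loss term:

```python
import math

def softmax(logits, T=1.0):
    # Temperature T > 1 softens the distribution, exposing the
    # teacher's relative preferences across all classes.
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened distribution
    # and the student's: low when the student mimics the teacher.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]        # big general model's logits
close_student = [3.8, 1.1, 0.1]  # small model that mimics the teacher
far_student = [0.1, 3.9, 1.0]    # small model that disagrees

print(distill_loss(close_student, teacher) < distill_loss(far_student, teacher))
```

Minimizing this loss over a task-specific dataset is what lets a small, cheap student approach the big model's behavior on that narrow task.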

Jerrrrrry

> if you build a huge, general model to perform my task, I can quickly distill and fine tune it to be better, cheaper, and faster.

"better" = "smaller, more specialized, domain-specific"

epistasis

> In short, unless you’re building or hosting foundation models, saying that AI is a differentiating factor is sort of like saying that your choice of database is what differentiated your SaaS product: No one cares.

I still think the analogy between AI and databases is perfect. No database comes set up for the applications where it gets deployed. The same is true for LLMs, except for some very broad chatbot stuff where the big players already own everything.

And if AI is just these chat bots, the technology is going to be pretty minor in comparison to database technology.

Narciss

The choice of AI does matter, and older models can potentially do your task much better than newer ones.

In the course of building my side project (storytveller), I've found that newer models tend to do worse at storytelling. I've tested basically every model under the sun that is available for use in production, and one stands out. Now, it may be that others will come along and not do the research that I've done to choose the same model, and thus my application will be better than theirs in part because of the AI model I chose.

Having “AI” as part of your application will not matter as much, that I agree with, but having “the right AI” will.

Of course, the user experience will definitely matter as well, as will the marketing and other criteria - another point of agreement with the article. But that does not diminish the fact that, if your application does not involve testing benchmarks, there is a good chance that a model that is not the newest could still be the best for your particular use case. So you should not just blindly choose the latest, shiniest model, as this article sort of implies.

The hammer does matter.

didip

Hasn't it always been the case since the beginning of AI/ML trends?

Once an algorithm/technique is discovered, it becomes a free library to install.

Data and user-base are still the moat. The traffic that ChatGPT stole from Google is the valuable part.

goatlover

Is everything that OpenAI or these other proprietary companies do with their models known?

arminiusreturns

No, it is not - especially the filter controls and a few other "add-ons" that are not actually core parts of LLMs. As for what they actually do with their models, we don't know that either, except through leaks.

BobbyJo

I now see AI as part 2 of the CPU evolution. I think there are lots of correlations we can draw on looking at it that way:

1) Lots of players enter at the start because there are no giant walled gardens yet.

2) Being best in class will require greater and greater capex (like new process nodes) as things progress.

3) New classes of products will be enabled over time depending on how performance improves.

There is more there, but, with regard to this post, what I want to point out is that CPUs were pretty basic commodities in the beginning, and it was only as their complexity and scale exploded that margins improved and moats were possible. I think it will play out similarly with AI. The field will narrow as efficiency and performance improve, and moats will become possible at that point.

HellDunkel

When will people realize that the use of AI art in any piece of content is almost as bad as a typo or an ad? It devalues the content, adds a barrier to acceptance, and produces the feeling that the creator does not value my time and attention.

csallen

That depends entirely on the design of the context around the content. There is no hard-and-fast rule that AI is bad.

tartoran

> There is no hard-and-fast rule that AI is bad.

That is true, but the current context around it supports the OP: the rush to appropriate freely from others, without contributing or crediting anything back, not to mention the sludge that has exploded all over the internet. I'm more than sure that AI could be used in very intelligent ways by artists themselves, though. I don't mean in a lazy way to cut corners and pump out content, but in a more deliberate way where the effort is visible (and I don't just mean visual arts).

jimmaswell

Depends. I like when it's used artistically and you're intended to notice. I've been listening to a lot of AI covers and some of them lean into the artifacts to a high degree in different ways, akin to noise music. First track here is a great example:

https://youtu.be/HgfsKS-Ux_A

outworlder

But this piece is talking about AI. It seems fitting.

HellDunkel

Imagine I wrote a history piece on the Middle Ages and decided to use a medieval font. :)

serial_dev

The moat is the 500 billion dollar investments we got along the way! (Just partly joking)

coliveira

That's something these companies don't seem to understand. Any model that is smart enough to be considered a true AI is also smart enough to teach what it knows to other AI models. So the process of creating a complex AI is commoditized. It just takes another group with access to the original AI to train other models with similar knowledge.

I also believe that, just like humans, AI models will be specialized so we'll have companies creating all kinds of special purpose models that have been trained with all knowledge from particular domains and are much better in particular fields. Generic AI models cannot compete there either.

mcharawi

This article didn't really say all that much: essentially, you can't differentiate your product with prompts alone, and you need deeper integrations with workflows. OK, that's pretty clear - what else?