
Revenge of the GPT Wrappers: Defensibility in a world of commoditized AI models

CharlieDigital

Having worked with AI and LLMs for quite a while now on the "wrapper" side, I think the real key is that doing well (fast, accurate, relevant) requires a really, really good ETL process in front of the actual LLM.

A "wrapper" will always be better than the the foundation models so long as it can do the domain-specific pre-generation ETL and data aggregation better; that is the true moat for any startup delivery solutions using AI.

Your moat as a startup is really how good your domain-specific ETL is (ease of use and integration, comprehensiveness, speed, etc.)
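As a concrete illustration, here is a minimal sketch of what that pre-generation ETL can look like; the data source, scoring, and field names are purely illustrative, and the actual model call is left out:

    from dataclasses import dataclass

    @dataclass
    class Record:
        source: str
        text: str
        updated_at: str  # ISO date, e.g. "2025-01-14"

    def extract(domain_store: list[Record], query: str) -> list[Record]:
        # "E": pull candidate records from the proprietary source (here, an in-memory list).
        tokens = query.lower().split()
        return [r for r in domain_store if any(tok in r.text.lower() for tok in tokens)]

    def transform(records: list[Record], max_chars: int = 2000) -> str:
        # "T": prefer fresh records, dedupe, and trim to a budget the model handles well.
        records = sorted(records, key=lambda r: r.updated_at, reverse=True)
        seen, chunks = set(), []
        for r in records:
            if r.text not in seen:
                seen.add(r.text)
                chunks.append(f"[{r.source}, {r.updated_at}] {r.text}")
        return "\n".join(chunks)[:max_chars]

    def build_prompt(query: str, context: str) -> str:
        # "L": load the curated context into the prompt; the LLM call itself is the easy part.
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    store = [Record("crm", "Acme renewed their contract in January.", "2025-01-14")]
    print(build_prompt("When did Acme renew?", transform(extract(store, "Acme renew"))))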

imjonse

'In recent years, innovative AI products that didn’t build their own models were derided as low-tech “GPT wrappers.” '

The ones derided were those claiming to be 'open-source XY' while being a standard Tailwind template over an OpenAI call, or those claiming revolutionary XY while 90% of it was the proprietary model underneath. I am not sure how many were truly innovative and not cloneable in a very short time. Using models to empower your app is great; having the model be all of your app while you pitch it otherwise is to be derided.

muzani

I was mentoring at a hackathon this weekend. Someone asked how they could integrate a certain open source pentesting agent into their tool.

I asked them, "Well, it's open source. Instead of making a bunch of adapters, couldn't you just copy the code you want?"

Turns out the whole agent was 11 files or so, each about 200 lines. Over half were just different personas doing the same thing. They just needed to copy one of the prompts and have a mechanism to break the loop.

The funny part with open source is nobody reads the code even though it's literally open. The AI pundits don't read what they criticize. The grifters just chain it forward. It's left-pad all over again.
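The pattern they ended up copying is roughly the sketch below (not the project's actual code, just the shape of it; call_model stands in for whatever LLM client you already use):

    PERSONA = "You are a pentesting assistant. Propose the next single step, or reply DONE."

    def call_model(prompt: str) -> str:
        raise NotImplementedError("wire this to your LLM client")

    def run_agent(task: str, max_steps: int = 10) -> list[str]:
        history: list[str] = []
        for _ in range(max_steps):                  # hard cap so the loop always terminates
            reply = call_model(f"{PERSONA}\n\nTask: {task}\nSteps so far: {history}")
            if "DONE" in reply.upper():             # the "mechanism to break the loop"
                break
            history.append(reply)
        return history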

iamwil

Is this not consensus yet that people in the model layer are fighting commoditization and so-called wrappers have all the moats? I'd written something similar back in Nov of last year, and I thought I was late in writing it down.

https://interjectedfuture.com/the-moats-are-in-the-gpt-wrapp...

ramesh31

>Is this not consensus yet that people in the model layer are fighting commoditization and so-called wrappers have all the moats?

Yes. It will become a duopoly where the leading frontier model holds >90% market share and most useful products are built around it, with the remaining 10% made up largely by the other big vendors, and then everyone else for niche cases.

The idea of picking and choosing between individual models for each specific use case is going away rapidly as the top ones pull away from the pack, and inference prices are falling exponentially.

t_mann

Contrary take: "AI founders will learn the bitter lesson" (263 comments): https://news.ycombinator.com/item?id=42672790. The gist: "Better AI models will enable general purpose AI applications. At the same time, the added value of the software around the AI model will diminish."

Both essays make convincing points; I guess we'll have to see. I like the Uber analogy here: maybe the winners will be those who use the tech in innovative ways while merely leveraging the underlying models.

bearjaws

Not to mention that if you have a good idea, OAI, Anthropic, or Google will implement it.

e.g. OAI Operator, Anthropic Computer Use, and Google NotebookLM.

deepsquirrelnet

The differentiator is whether or not your company operates with domain-specific data and subject matter experts that those big companies don’t have (which is quite common).

There are plenty of applications to build that won’t easily get disrupted by big AI. But it’s important to think about what they are, rather than chase after duplicating the shiny objects the big companies are showing off.

kridsdale3

And they don't have to pay the margin on the API calls. So an equal UX on the same model API will be twice as profitable when operated by the first party.

danenania

They may implement it, but it's questionable whether they'll have the best implementation in any particular category.

glooglork

> Imagine it becomes truly trivial to copy cat another product — something as simple as, “hey AI, build me an app that does what productxyz.com does, and host it at productabc.com!” In the past, a new product might have taken a few months to copy, and enjoyed a bit of time to build its lead. But soon, perhaps it will be fast-followed nearly instantly. How will products hold onto their users?

It's actually not that easy to copy/paste AI agents: prompts take quite a lot of tweaking, and it's a rather slow and manual process because it's not easy to verify that they work for all possible inputs. This gets even more complicated when you have a number of agents in the same application that need to interact with each other.
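To make that concrete, here's a minimal sketch of the kind of regression check every prompt tweak ends up needing; the cases and the run_prompt hook are illustrative placeholders:

    # Re-run the prompt (or agent chain) against known inputs after every tweak and
    # track the pass rate per revision; doing this by hand is what makes tweaking slow.
    CASES = [
        {"input": "Refund order #123", "must_contain": "refund"},
        {"input": "Where is my order?", "must_contain": "order"},
        {"input": "asdfgh!!!", "must_contain": "clarify"},
    ]

    def run_prompt(user_input: str) -> str:
        raise NotImplementedError("call your agent / prompt chain here")

    def regression_suite() -> float:
        passed = 0
        for case in CASES:
            try:
                passed += case["must_contain"] in run_prompt(case["input"]).lower()
            except Exception:
                pass  # a crash counts as a failure
        return passed / len(CASES)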

tossandthrow

Have you tried using AI to write your prompts? It is quite efficient.

Besides that, you quote "imagine it becomes..."; it is fair to assume that these technologies will become better.
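For example, a tiny sketch of that idea: one model call drafts or tightens the prompt you'll actually ship (call_model is a placeholder for your LLM client):

    def call_model(prompt: str) -> str:
        raise NotImplementedError("wire this to your LLM client")

    META_PROMPT = (
        "Rewrite the following task description as a precise system prompt. "
        "Specify the required inputs, the output format, and two failure cases to guard against.\n\n"
        "Task: {task}"
    )

    def draft_prompt(task: str) -> str:
        return call_model(META_PROMPT.format(task=task))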

glooglork

Yeah, I'm using it and I agree it will probably become a lot better, but I don't think we're really close to a point where AI itself will be able to just write an app that has 100s of prompts that interact with one another. Even if it does, you'll probably be able to get it running better by manually optimizing a bunch of stuff (when I say manually, I'm also including iterating over a prompt in a chat with an LLM).

It's capable of creating CRUD apps from scratch more or less by itself, and I can see how in this area we soon might get to a point where you can get your own clone of a lot of apps up and running in 30 minutes.

But I imagine a lot of future value we might see created will come from:

1) specialized prompts - this looks simple but I don't think it is, especially if you have 100s of them in your application, complex logic for how they interact with each other, different models for different parts of your application based on their strengths (sketched below), etc.

2) access to structured data you can connect your agents to

3) network effects - the app that gets the most use improves just from its usage data (the article did talk about network effects)

I don't think these three factors are easy to replicate. The article also mentions some of this; I'm not really arguing with that, just pointing out that I don't think it will be that simple to copy/paste full applications.
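For point 1, a small illustrative sketch of routing different parts of the app to different models based on their strengths; the model names and task labels here are placeholders, not recommendations:

    ROUTES = {
        "extraction":  {"model": "small-cheap-model",   "temperature": 0.0},
        "drafting":    {"model": "large-general-model", "temperature": 0.7},
        "code_review": {"model": "code-tuned-model",    "temperature": 0.2},
    }

    def pick_route(task_type: str) -> dict:
        # Fall back to the general-purpose model for anything unrecognized.
        return ROUTES.get(task_type, ROUTES["drafting"])

    print(pick_route("extraction"))  # {'model': 'small-cheap-model', 'temperature': 0.0}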

satisfice

“Have you tried…”

It’s not “trying” that matters. What matters is testing. But nobody is testing LLMs… Or what they call testing is mostly shrugging and smiling and running dubious benchmarks.

deepsquirrelnet

Stanford NLP's DSPy framework really encourages a traditional ML development process. It’s about the only one I’d consider to be a true ML framework.
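A minimal sketch of what that looks like in practice: declare the step's signature, keep a small labeled dev set, and score changes with a metric instead of eyeballing prompt text (exact API details vary between DSPy versions, so treat this as illustrative):

    import dspy

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any supported provider/model string

    qa = dspy.ChainOfThought("question -> answer")    # declarative signature, no prompt string

    devset = [
        dspy.Example(question="What is 2 + 2?", answer="4").with_inputs("question"),
        dspy.Example(question="Capital of France?", answer="Paris").with_inputs("question"),
    ]

    def exact_match(example, prediction) -> bool:
        return example.answer.lower() in prediction.answer.lower()

    score = sum(exact_match(ex, qa(question=ex.question)) for ex in devset) / len(devset)
    print(f"dev accuracy: {score:.0%}")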

kridsdale3

If the imperative-code-based apps that I've been shipping my whole career had failure rates on par with the *best* LLM prompts (think 10 to 35 percent), I'd not have a career.

lacker

If everyone has incredibly good AI, then perhaps the unique asset will be training data.

Not everyone will have the training data that demonstrates precisely the behavior that your customers want. As you grow, you'll generate more training data. Others can clone your product immediately... but the clone just won't work as well. In your internal evals, you'll see why. It misses a lot of stuff. But they won't understand, because their evals don't cover this case.

(This is quite similar to why Bing had trouble surpassing Google in search quality. Bing had great engineers, but they never had the same data, because they never had the same userbase.)
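A small sketch of what that flywheel looks like in code: every real interaction (plus any correction the user gives) lands in an internal eval set that a fast-follower simply doesn't have. The file name and fields are illustrative:

    import json, time

    def log_interaction(user_input: str, model_output: str, user_feedback: str | None) -> None:
        record = {
            "ts": time.time(),
            "input": user_input,
            "output": model_output,
            "feedback": user_feedback,  # e.g. a correction or a thumbs-down reason
        }
        with open("internal_evals.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    # The hard cases in internal_evals.jsonl become the regression tests and
    # fine-tuning data that track exactly what your users actually ask for.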

EternalFury

Business as usual. While electricity is remarkable, no one gets extremely rich selling it. End-user value is the only value that can be sold at a profit.

mohsen1

And guess who has a grip on the end user? Operation System owners. Now that you might not need an app for most things, OS vendors are in an even more powerful position. Gone are the days of "this amazing app can do X"; now it's going to be "have you noticed you can ask Siri to do X?" They have all of the context about the user that app developers are going to miss.

Both Apple and Google are doing a poor job of integrating AI capabilities into their Operation Systems today. Maybe there is room for a new player to make a real AI-first Operation system.

koakuma-chan

Does anyone actually use Siri?

nicewood

I agree that the OS vendors are in a great position to add value via broad, general purpose features. But they cannot cover it all - it's breadth over depth. So I think the innovation for niches and specific business processes will still be owned by specialized 'GPT Wrappers'.

oarsinsync

> Operation System

OS is generally expanded to Operating System, not Operation System, in English

echelon

> AI-first Operation system.

An AI-first pane of glass (OS, browser, phone, etc.) with an agent that acts on my behalf to nuke ads, rage bait, click bait, rude people on the internet, spam, sales calls and emails, marketing materials, commercials, and more.

If you want to market to me, you need to pay me directly. If you want to waste my time, goodbye.

raincole

No one gets extremely rich selling food, water and electricity because these fields attract government intervention all the time.

(Not saying it's a bad or good thing, nor saying AI is comparable)

abrichr

Food:

- Ray Kroc – Turned McDonald's into a global fast-food empire.

- Howard Schultz – Scaled Starbucks into an international giant.

- Michele Ferrero – Created Nutella, Kinder, and Ferrero Rocher, making his family billionaires.

Water:

- François-Henri Pinault – Controlled Evian via Danone.

- Antoine Riboud – Expanded Danone into a bottled water empire (Evian, Volvic).

- Peter Brabeck-Letmathe – Former Nestlé CEO; Nestlé owns Perrier, Pure Life, Poland Spring, etc.

Electricity:

- Warren Buffett – Berkshire Hathaway Energy owns multiple utilities.

- Li Ka-shing – Built major energy holdings through CK Infrastructure.

- David Tepper – Invested heavily in power utilities via Appaloosa Management.

kgwgk

> While electricity is remarkable, no one gets extremely rich selling it.

Enron did!

esafak

I better buy some shares in them!

wcrossbow

I read this and of course couldn't believe it. Isn't 14.7B enough to be considered extremely rich these days[1]? In the Forbes real-time billionaires list it is quite easy to find _many_ such examples.

[1] https://www.forbes.com/profile/sarath-ratanavadi/?list=rtb/

DebtDeflation

Probably worth thinking more about what we mean by "wrapper". A year or so ago, it often meant a prompt-builder UI. There's no moat for that. But if in 2025 a "wrapper" means a proprietary data source with a pipeline to deliver it, some proprietary orchestration, and the UI (on top of the LLM API being called), then it likely warrants looking at it differently.

KaoruAoiShiho

I predict this article will be embarrassingly wrong. The moat of models is compute; wrappers are just software engineering, one of the first things to be commoditized by AI in general.

pchristensen

Software engineering followed by product research and product market fit. Those are less at risk.

KaoruAoiShiho

Idea guys are a dime a dozen.

delifue

Software gets a free ride on hardware improvements. GPT wrappers can likewise get a free ride on foundation model improvements.

kridsdale3

I always roll my eyes when someone makes a "Show HN" post claiming their wrapper app has amazing new capabilities. All they did was push a commit where they typed "gpt4o-ultra-fancy-1234" into some array.

daxfohl

The real question is how they achieve vendor lock-in. My bets are on Microsoft to figure that out.