Wrapping my head around AI wrappers

11 comments

·November 15, 2025

_fat_santa

Currently working on a SaaS app that could be called an "AI Wrapper". One thing I picked up on is that once you start using AI tools programmatically, you can do far more complex things than you can with ChatGPT or Claude.

One thing we've leaned heavily into is using Langgraph for agentic workflows, and it's really opened the door to cool ways you can use AI. These days, the way I tell AI "wrappers" apart from "tools" is the underlying paradigm. Most "wrappers" just copy the ChatGPT/Claude paradigm, where you have a conversation with an agent; the "tools" are where you take the ability to generate content and plug it into a broader workflow.

embedding-shape

> One thing we've leaned heavily into was using Langgraph for agentic workflows

Probably my single biggest mistake so far with developing LLM tooling has been trying to use Langgraph, even after inspecting the codebase, because people I thought were smarter than me hyped it up.

Do yourself a favor and just write the plumbing yourself; it's a lot easier than one might think before digging in. Tool calling is literally a loop that passes tool requests and responses back and forth until the model responds, and having your own abstractions will make it much easier to build proper workflows. Plus you get to use whatever language you want and don't have to deal with Python.
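The loop described above can be sketched in a few lines. This is a minimal illustration, not any particular library's API: `call_model` is a hypothetical stand-in for a real LLM API call, and the hard-coded replies exist only so the example is self-contained and runnable.

```python
import json

# Hypothetical stub standing in for a real LLM API call. It demands one
# tool call first, then produces a final answer once it sees the result.
def call_model(messages, tools):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"id": "1", "name": "get_time", "arguments": "{}"}]}
    return {"content": "It is 12:00."}

TOOLS = {"get_time": lambda: "12:00"}  # toy tool registry

def agent_loop(user_prompt, max_turns=10):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = call_model(messages, TOOLS)
        if "tool_calls" not in reply:
            return reply["content"]  # model produced a final answer
        # Execute each requested tool and feed the result back to the model.
        for call in reply["tool_calls"]:
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": result})
    raise RuntimeError("model never produced a final answer")

print(agent_loop("What time is it?"))  # → It is 12:00.
```

The entire "framework" is that `for` loop; everything else is message bookkeeping you can shape however your workflow needs.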

mentalgear

> But I think the insight lies between these positions. Even if a new application starts as a wrapper, it can endure if it lives where work is done, writes to proprietary systems of record, builds proprietary data and learns from usage, and/or captures distribution before incumbents bundle the feature.

Basically the same playbook MS & social media used: build a proprietary silo around data and amass enough of it that moving away from the first provider becomes too big an inconvenience.

It's good that the EU has laws now to ensure data interoperability, export & ownership.

jrvarela56

I agree with you in spirit, but this harms the potential for these new products to emerge. You're saying you don't want them to be able to accrue a data moat. That sounds good for user privacy and optionality later on, but it makes it harder for these services to get started, since they don't see that model as possible.

swyx

i recently framed this as "agent labs" vs "model labs" - https://latent.space/p/agent-labs - definitely far from proven or given that they are a lasting business model, but i think the dynamic is at least more evident now than it was a year ago and even that is notable as we are slowly figuring out what the new ai economy looks like

jgalt212

> But Cursor and other such tools depend almost entirely on accessing Anthropic, OpenAI and Gemini models, until open-source open-weight and in-house models match or exceed frontier models in quality.

I'm not sure I agree with this, because even though Cursor is paying north of 100% of its revenue to Anthropic, Anthropic is selling inference at a loss. So if Cursor builds and hosts its own models, it still has the marginal costs > marginal revenues problem.

The way out for Cursor could be a self-hosted, much smaller model that focuses on code, not the world. That could have inference costs lower than marginal revenues.
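The argument reduces to simple per-token arithmetic. Every number below is made up purely to illustrate the shape of the claim; none of these are Cursor's or Anthropic's actual prices.

```python
# Hypothetical unit economics, per million tokens served (all figures invented).
revenue_per_mtok = 3.00           # what the wrapper charges its users
frontier_cost_per_mtok = 4.50     # paying a frontier provider (at a markup)
small_model_cost_per_mtok = 0.80  # self-hosting a small code-specialized model

frontier_margin = revenue_per_mtok - frontier_cost_per_mtok
small_model_margin = revenue_per_mtok - small_model_cost_per_mtok

print(frontier_margin)     # negative: losing money on every token
print(small_model_margin)  # positive: each token served is profitable
```

The point is only that switching models changes the sign of the margin, not the specific figures.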

xnx

Can you have a useful code model that doesn't understand the world? It seems like such a model would be limited to little more than auto complete.

esafak

I imagine so, through distillation. Start with an all-knowing model, then extract the coding part.
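For readers unfamiliar with distillation: the core objective is to train the small student model against the large teacher's softened output distribution rather than hard labels. The toy sketch below shows just that loss on raw logit lists in pure Python; it is a conceptual illustration, not anyone's production training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution: the core objective in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A student that matches the teacher incurs a lower loss than one that doesn't.
teacher = [2.0, 0.5, -1.0]
matched = distill_loss(teacher, [2.0, 0.5, -1.0])
mismatched = distill_loss(teacher, [-1.0, 0.5, 2.0])
print(matched < mismatched)  # → True
```

Restricting the distillation data to code is what would let the student shed most of the teacher's world knowledge while keeping its coding behavior.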

bogzz

I suppose Supermaven is doing something to that effect.


Barry-Perkins

AI wrappers are fascinating! They simplify working with complex AI models, allowing easier integration, customization, and scaling, making AI more accessible for developers and businesses alike.