
The New Moat: Memory

24 comments · April 13, 2025

bentt

This is a great reason to learn from our mistakes of the 2010s and not give ourselves away to OpenAI and other cloud AI providers.

I would like to see a memory provider/system that allows us to own this data and put OpenAI et al on the customer end. They should be paying US for that.

sdsd

> learn from our mistakes

Oof, I wish I had the optimism to even consider this a realistic option. If we thought social media power was a threat to democracy, wait til we see what AI companies do.

bentt

You only have control over yourself.

kadushka

But I want their models to know as much as possible about me - it should improve my experience using them.

bentt

While I agree the experience of using the model will be improved with that knowledge, the memory you build should belong to YOU and not them. In fact, they shouldn't even be able to look at it for any other purpose than to serve you. It needs to be encrypted and secure and private. They will do anything they can to prevent this because there is immense value in owning someone's memory/identity.

xnx

Can't speak for anyone else, but my own AI chat history has little or no bearing on the quality of the response to my next question. This is no more a moat than search history is.

My email and work documents are obviously important if I'm querying for information about them, but that is self evident and also not a moat (I could grant another tool access to these things).

Computational efficiency is a moat. If Google can provide an AI response for $0.05 of infrastructure and electricity, but it takes OpenAI $0.57, that's bad news for OpenAI.

keiferski

Does that mean you never engage in multi-question dialogues exploring / trying to solve a particular problem? In other words, every query you have is unrelated to all others?

If that’s the case, it mostly just seems like you’re not working on problems complex enough for the AI to be useful. Or you’re keeping that complexity in your head and only bringing in the AI “as a consultant,” as it were.

If so, I recommend organizing your project with the AI from the start. I’ve gotten a lot of benefit from treating ChatGPT folders as ongoing conversations about a particular project: questions I have on it, random ideas, etc. Memory is absolutely crucial for my use case.

xnx

> Does that mean you never engage in multi-question dialogues exploring / trying to solve a particular problem?

No. Iterative interrogation is the main way these tools are used, hence "Chat" GPT. It's rare that I'm revisiting queries from a week ago.

More useful AI context comes from permanent (and portable) artifacts like a code repo. Having a 2 million token context window is much more useful than being able to continue a chat session from a week or more ago.

hobs

Also, that memory (your conversations, your interactions) is the actual moat. There's plenty of code out there, but there's not a lot of "how does a developer interact with a code base" outside of commits.

The interaction data is the actually interesting bit, but there's no guarantee it's the refinement that's most needed.

calvinmorrison

I run into this all the time with ChatGPT: we hit a roadblock on A, go down path B, and solve B, which solves A. But now it can't rewind to keep working on A, so I have to start over.

Or it keeps suggesting the same thing:

Me: "X didn't work, here's the output"

ChatGPT: OK, try "X"

ok buddy.

cadamsdotcom

Solid prediction.

You can see this in the reddit memes that say things like “open chatgpt and ask it for your 5 biggest blind spots right now. Mind. Blown.”

Those who know it’s a tool call - plus some clever algorithms governing what the tool returns - could not be rolling their eyes harder. People who know what’s up will keep pasting things into new chats, and keep using delete and “forget memories” buttons. Maybe even multiple accounts.
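
(For the curious, one way to picture "a tool call plus some clever algorithms" is the rough Python sketch below. The tool name, schema shape, and scoring are illustrative assumptions loosely modeled on common function-calling APIs, not any provider's actual implementation.)

    # Hypothetical memory-as-a-tool-call sketch. The model emits a call
    # like recall_memories("tech stack"); opaque ranking logic decides
    # what comes back, and the user never sees the selection criteria.
    recall_memories_schema = {
        "type": "function",
        "function": {
            "name": "recall_memories",
            "description": "Fetch stored facts about the user relevant to a query.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }

    MEMORY_STORE = [
        "User is a game developer",
        "User prefers concise answers",
        "User deploys to a VPS, not the cloud",
    ]

    def recall_memories(query: str, k: int = 2) -> list[str]:
        # The "clever algorithms" live here: crude keyword overlap stands
        # in for real ranking, recency weighting, deduplication, etc.
        words = set(query.lower().split())
        scored = [(len(words & set(m.lower().split())), m) for m in MEMORY_STORE]
        scored.sort(reverse=True)
        return [m for score, m in scored[:k] if score > 0]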

But increasingly that’ll be “the old slow way”. You can see it in the comments here - people are grateful not to have to explain the stack again. They don’t want a blank unprimed conversation - and rather than copy-pasting a priming prompt (or having the model write a Cursor rule) they’d rather abdicate control over the AI’s behavior to an opaque priming process and a tool with unknown recall.

But everyone else is doing it, so a great many eye-rollers will give up and be swept up too.

AI memory has already captured the type of person who obeys instructions in reddit memes. Next come normies (your parents), who will find it pleasant that the AI seems to know them well. They won't understand how creepy it is, nor how much power is in the hands of someone who can train an AI on their chats. And experts will do their best to make the AI forget, with delete buttons and the like; but even they will need to let the tools remember their patterns just to keep up with society.

Ergo, lock-in & network effects.

So yes, it’s a pretty reasonable prediction.

natrius

I haven't been able to figure out how there's a moat for AI products that, if they work as advertised, can build a bridge over any moat with near-zero user effort.

rco8786

The article is about exactly that.

The moat for AI products will be, as is so often the case, user data. In this case, your personal history of interactions with a given AI.

The author predicts a land grab where AI companies try to scoop up as much personal data on you as they can, as fast as they can, which makes their AI significantly more personalized to you than the competition's. That's the moat.

Analogous to Facebook managing to scoop up your entire social graph. Other social networks popped up, but there was no incentive to use them because you didn't have your social graph set up there, and it was really hard to rebuild.

natrius

When I want to try a new AI, it can offer to import my data by using my computer and reading the screen.

rco8786

> import my data by using my computer

You're under the mistaken impression that you will own this data. What you are describing is akin to saying you will just export your friends list, posts, messages, pictures, etc. from Facebook to some other social network.

Maybe you are technically able to run your own AIs (and I guess they would somehow be supported by these other platforms?). But most people are not.

cs702

Sorry, but the OP is all fluffy hype, zero substance. There are no explanations, no links to research, and no links to code.

When the author mentions "memory," what does that mean? Is this about RAG-style memory? I'm not sure that's a "moat."
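
For concreteness, "RAG-style memory" would mean roughly the sketch below: embed past exchanges, retrieve the nearest ones, and prepend them to the next prompt. This is a minimal sketch of one assumption about what the article means (it never says); embed() is a toy bag-of-words stand-in for a real embedding model, and every name is illustrative.

    # Minimal RAG-style memory sketch: store past exchanges, retrieve
    # the most similar ones by cosine similarity, prepend to the prompt.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy embedding: lowercase bag-of-words counts.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b.get(t, 0) for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) \
             * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    class ChatMemory:
        def __init__(self):
            self.items: list[tuple[str, Counter]] = []

        def add(self, text: str) -> None:
            self.items.append((text, embed(text)))

        def recall(self, query: str, k: int = 3) -> list[str]:
            q = embed(query)
            ranked = sorted(self.items, key=lambda it: cosine(q, it[1]),
                            reverse=True)
            return [text for text, _ in ranked[:k]]

    # Each new prompt gets the nearest memories prepended as context.
    mem = ChatMemory()
    mem.add("User prefers Postgres over MySQL")
    mem.add("User is building a Flask app")
    context = "\n".join(mem.recall("which database should I use?"))
    prompt = f"Relevant memories:\n{context}\n\nQuestion: which database should I use?"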

rco8786

Was there supposed to be? The article isn't technical in nature. It's predicting upcoming business shifts in the AI industry.

etaioinshrdlu

Does anyone really like and enjoy LLM products with memory at this point? To me this seems to be a case where the technical ability to do memory vastly exceeds its actual usefulness (for most people).

keiferski

Absolutely, to the point where I don’t know if I’d use AI tools at all without memory. It’s critical for working on long-term projects whose context I don’t want to re-explain every time I ask a question or explore an idea.

But I am using ChatGPT mostly as a way to flesh out ideas, question my assumptions, and do similar things, so YMMV.

empath75

Yeah, it keeps me from having to explain a lot of context every time I ask a question, particularly what tech stack I use.

WaltPurvis

In my experience, if I create a ChatGPT project or a Gemini Gem, it seems to remember everything I put in the instructions, including my tech stack, the specific packages I'm using, my preferences, and so on. ChatGPT remembers some of these things across chats that aren't within a project, but it's hit or miss. Gemini remembers absolutely nothing from one chat to another unless you're working in a Gem, but inside a Gem it's been very good at remembering quite a lot, particularly with Gemini 2.5. YMMV.
