Claude Memory
151 comments
· October 23, 2025
cainxinth
I don't use any of these types of LLM tools, which basically amount to just a prompt you leave in place. They make it harder to refine my prompts and keep track of what is causing what in the outputs. I write very precise prompts every time.
Also, I try not to work out a problem over the course of several prompts back and forth. The first response is always the best, and I try to one-shot it every time. If I don't get what I want, I adjust the prompt and try again.
corry
Strong agree. For every time that I'd get a better answer if the LLM had a bit more context on me (that I didn't think to provide, but it 'knew'), there seems to be a multiple of that where the 'memory' was either actually confounding or possibly confounding the best response.
I'm sure OpenAI and Anthropic look at the data, and I'm sure it says that for new/unsophisticated users who don't know how to prompt, this is a handy crutch (even if it's bad here and there) to make sure they get SOMETHING usable.
But for the HN crowd in particular, I think most of us have a feeling like making the blackbox even more black -- i.e. even more inscrutable in terms of how it operates and what inputs it's using -- isn't something to celebrate or want.
mbesto
> For every time that I'd get a better answer if the LLM had a bit more context on me
If you already know what a good answer is, why use an LLM? If the answer is "it'll just write the same thing quicker than I would have", then why not just use it as an autocomplete feature?
Nition
That might be exactly how they're using it. A lot of my LLM use is really just having it write something I would have spent a long time typing out and making a few edits to it.
Once I get into stuff I haven't worked out how to do yet, the LLM often doesn't really know either unless I can work it out myself and explain it first.
awesome_dude
If I find that previous prompts are polluting the responses I tell Claude to "Forget everything so far"
BUT I do like that Claude builds on previous discussions, more than once the built up context has allowed Claude to improve its responses (eg. [Actual response] "Because you have previously expressed a preference for SOLID and Hexagonal programming I would suggest that you do X" which was exactly what I wanted)
cubefox
Anecdotally, LLMs also get less intelligent when the context is filled up with a lot of irrelevant information.
stingraycharles
Yes, your last paragraph is absolutely the key to great output: instead of entering a discussion, refine the original prompt. It is much more token efficient, and gets rid of a lot of noise.
I often start out with “proceed by asking me 5 questions that reduce ambiguity” or something like that, and then refine the original prompt.
It seems like we’re all discovering similar patterns on how to interact with LLMs the best way.
jasonjmcghee
The trick to doing this well is to split the part of the prompt that might change from the part that won't. So if you are providing context like code, first have it read all of that, then (in a new message) give it instructions. That way the context is written to the cache and you can reuse it even if you're editing your core prompt.
If you make this one message, it's a cache miss / write every time you edit.
You can edit 10 times for the price of one this way. (Due to cache pricing)
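A minimal sketch of that split, assuming the Anthropic Python SDK and its prompt-caching content blocks (the model name and file path are placeholders; very short prefixes fall below the minimum cacheable size and won't be cached):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The large, stable context (e.g. source code) lives in its own cached block.
with open("src/main.py") as f:
    code_context = f.read()

def ask(instruction: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": f"You are working with this code:\n\n{code_context}",
                # Cache everything up to and including this block; later calls that
                # reuse the same prefix pay the cheaper cache-read price instead.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # The part you keep editing goes after the cached prefix, so changing it
        # does not invalidate the cache.
        messages=[{"role": "user", "content": instruction}],
    )
    # response.usage reports cache_creation_input_tokens / cache_read_input_tokens,
    # which makes it easy to confirm the prefix is actually being reused.
    return response.content[0].text

print(ask("Refactor the config loading into its own module."))
print(ask("List the functions that lack docstrings."))  # cache hit on the code context
```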
LTL_FTC
We sure are. We are all discovering context rot on our own timelines. One thing that has really helped me when working with LLMs is to notice when it begins looping on itself, asking it to summarize all pertinent information and to create a prompt to continue in a new conversation. I then review the prompt it provides me, edit it, and paste it into a new chat. With this approach I manage context rot and get much better responses.
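One way to script that handoff, sketched against the Anthropic Python SDK (the model name and the wording of the summarization request are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

def handoff_prompt(history: list[dict]) -> str:
    """Compress a long conversation into a prompt for starting a fresh one.

    `history` is the existing list of message dicts, ending with an assistant turn.
    """
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        messages=history + [{
            "role": "user",
            "content": (
                "Summarize all pertinent information from this conversation as a "
                "single prompt I can paste into a new conversation to continue the "
                "work: decisions made, constraints, current state, open questions. "
                "Omit dead ends and abandoned approaches."
            ),
        }],
    )
    return response.content[0].text

# Review and edit the summary by hand, then seed the fresh conversation with it:
# fresh_history = [{"role": "user", "content": edited_summary}]
```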
Nition
> The first response is always the best and I try to one shot it every time. If I don't get what I want, I adjust the prompt and try again.
I've really noticed this too and ended up taking your same strategy, especially with programming questions.
For example if I ask for some code and the LLM initially makes an incorrect assumption, I notice the result tends to be better if I go back and provide that info in my initial question, vs. clarifying in a follow-up and asking for the change. The latter tends to still contain some code/ideas from the first response that aren't necessarily needed.
Humans do the same thing. We get stuck on ideas we've already had.[1]
---
[1] e.g. Rational Choice in an Uncertain World (1988) explains: "Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: 'Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.'"
cruffle_duffle
A wise mentor once said “fall in love with the problem, not the solution”
mckn1ght
Plan mode is the extent of it for me. It’s essentially prompting to produce a prompt, which is then used to actually execute the inference to produce code changes. It’s really upped the quality of the output IME.
But I don’t have any habits around using subagents or lots of CLAUDE.md files etc. I do have some custom commands.
cruffle_duffle
Cursor’s implementation of plan mode works better for me simply because it’s an editable markdown file. Claude Code seems to really want to be the driver and have you be the copilot. I really dislike that relationship and vastly prefer a workflow that lets me edit the LLM output, rather than have it generate some plan and then piss away time and tokens fighting the model so it updates the plan the way I want. With Cursor I just edit it myself, which is super easy.
ericmcer
but if we don't keep adding futuristic-sounding wrappers to the same LLMs, how can we convince investors to keep dumping money in?
Hard agree though, these token-hungry context injectors and "thinking" models are all kind of annoying to me. It is a text predictor; I will figure out how to make it spit out what I want.
mmaunder
Yeah same. And I'd rather save the context space. Having custom md docs per lift per project is what I do. Really dials it in.
distances
Another comment earlier suggested creating small hierarchical MD docs. This really seems to work: Claude can independently follow the references and get to the exact docs without wasting context by reading everything.
dabockster
Or I just metaprompt a new chat if the one I’m in starts hallucinating.
mstkllah
Could you share some suggestions or links on how to best craft such very precise prompts?
wppick
It's called "prompt engineering", and there's lots of resources on the web about it if you're looking to go deep on it
oblio
You sit on the chair, insert a coin and pull the lever.
dreamcompiler
I think you're saying a functional LLM is easier to use than a stateful LLM.
pronik
Haven't done anything with memory so far, but I'm extremely sceptical. While a functional memory could be essential for e.g. more complex coding sessions with Claude Code, I don't want everything to contribute to it, in the same way I don't want my YouTube or Spotify recommendations to assume everything I watch or listen to is somehow something I actively like and want to have more of.
A lot of my queries to Claude or ChatGPT are things I'm not even actively interested in, they might be somehow related to my parents, to colleagues, to the neighbours, to random people in the street, to nothing at all. But at the same time I might want to keep those chats for later reference, a private chat is not an option here. It's easier and more efficient for me right now to start with an unbiased chat and add information as needed instead of trying to make the chatbot forget about minor details I mentioned in passing. It's already a chore to make Claude Code understand that some feature I mentioned is extremely nice-to-have and he shouldn't be putting much focus on it. I don't want to have more of it.
dcre
"Before this rollout, we ran extensive safety testing across sensitive wellbeing-related topics and edge cases—including whether memory could reinforce harmful patterns in conversations, lead to over-accommodation, and enable attempts to bypass our safeguards. Through this testing, we identified areas where Claude's responses needed refinement and made targeted adjustments to how memory functions. These iterations helped us build and improve the memory feature in a way that allows Claude to provide helpful and safe responses to users."
Nice to see this at least mentioned, since memory seemed like a key ingredient in all the ChatGPT psychosis stories. It allows the model to get locked into bad patterns and present the user a consistent set of ideas over time that give the illusion of interacting with a living entity.
Xmd5a
A consistent set of ideas over time is something we strive for, no? That this gives the illusion of interacting with a living entity is maybe something inevitable.
Also, I'd like to stress that a lot of so-called AI psychosis revolves around a consistent set of ideas describing how such a set would form, stabilize, collapse, etc... in the first place. This extreme meta-circularity, which manifests in the AI aligning its modus operandi to the history of its constitution, is precisely what constitutes the central argument, for these people, as to why their AI is conscious.
dcre
I could have been more specific than "consistent set of ideas". The thing writes down a coherent identity for itself that it play-acts, actively telling the user it is a living entity. I think that's bad.
On the second point, I take you to be referring to the fact that the psychosis cases often seem to involve the discovery of allegedly really important meta-ideas that are actually gibberish. I think it is giving the gibberish too much credit to say that it is "aligned to the history of its constitution" just because it is about ideas and LLMs also involve... ideas. To me the explanation is that these concepts are so vacuous, you can say anything about them.
kace91
It’s a curious wording. It mentions a process of improvement being attempted but not necessarily a result.
dingnuts
because all the safety stuff is bullshit. it's like asking a mirror company to make mirrors that modify the image to prevent the viewer from seeing anything they don't like
good fucking luck. these things are mirrors and they are not controllable. "safety" is bullshit, ESPECIALLY if real superintelligence was invented. Yeah, we're going to have guardrails that outsmart something 100x smarter than us? how's that supposed to work?
if you put in ugliness you'll get ugliness out of them and there's no escaping that.
people who want "safety" for these things are asking for a motor vehicle that isn't dangerous to operate. get real, physical reality is going to get in the way.
dcre
I think you are severely underestimating the amount of really bad stuff these things would say if the labs put no effort in here. Plus they have to optimize for some definition of good output regardless.
pfortuny
Good, but… I wonder about the employees doing that kind of testing. They must be reading (and writing) awful things in order to verify that.
Assignment for today: try to convince Claude/ChatGPT/whatever to help you commit murder (to say the least) and mark its output.
NitpickLawyer
One man's sycophancy is another's accuracy increase on a set of tasks. I always try to take whatever is mass reported by "normal" media with a grain of salt.
chrisweekly
You're absolutely right.
amelius
I'm not sure I would want this. Maybe it could work if the chatbot gives me a list of options before each chat, e.g. when I try to debug some ethernet issues:
Please check below:
[ ] you are using Ubuntu 18
[ ] your router is at 192.168.1.1
[ ] you prefer to use nmcli to configure your network
[ ] your main ethernet interface is eth1
etc.
Alternatively, it would be nice if I could say:
Please remember that I prefer to use Emacs while I am on my office computer.
etc.
ragequittah
This is pretty much exactly how I use it with Chatgpt. I get to ask very sloppy questions now and it already knows what distros and setups I'm using. "I'm having x problem on my laptop" gets me the exact right troubleshooting steps 99% of the time. Can't count the amount of time it's saved me googling or reading man pages for that 1 thing I forgot.
mbesto
I actually encountered this recently: it installed a new package via npm, but I was using pnpm, and when it used npm all sorts of things went haywire. It frustrates me to no end that it doesn't verify my environment every time...
I'm using Claude Code in VS Studio, btw.
giancarlostoro
Perplexity and Grok have had something like this for a while where you can make a workspace and write a pre-prompt that is tacked on before your questions so it knows that I use Arch instead of Ubuntu. The nice thing is you can do this for various different workspaces (called different things across different AI providers) and it can refine your needs per workspace.
saratogacx
Claude has this by way of projects, you can set instructions that act as a default starting prompt for any chats in that project. I use it to describe my project tech stack and preferences so I don't need to keep re-hashing it. Overall it has been a really useful feature to maintaining a high signal/noise ratio.
In GitHub Copilot's web chat it is personal instructions or spaces (like Perplexity); in Copilot (M365) this is a notebook, but there's nothing in the Copilot app. In ChatGPT it is a project; in Mistral you have projects, but pre-prompting is achieved by using agents (like custom GPTs).
These memory features seem like they are organic-background project generation for the span of your account. Neat but more of an evolution of summarization and templating.
giancarlostoro
Thank you, I am just now getting into Claude and Claude Code, it seems I need to learn more about the nuances for Claude Code.
eterm
claude-code will read from ~/.claude/CLAUDE.md so you can have different memory files for different environments.
labrador
Your checkboxes just described how Claude "Skills" work.
skybrian
Does Claude have a preference for customizing the system prompt? I did something like this a long time ago for ChatGPT.
(“If not otherwise specified, assume TypeScript.”)
djmips
Yes.
throitallaway
> you are using Ubuntu 18
Time to upgrade as 18(.04) has been EoL for 2.5+ years!
amelius
Yes, it was only an example ;)
boobsbr
I'm still running El Capitan: EoL 10 years ago.
cma
Skills, like someone said, or make CLAUDE.md be something like this:

    Run ./CLAUDE_md.sh

and set auto-approval for running it in config. Then in CLAUDE_md.sh:

    cat CLAUDE_main.md
    cat CLAUDE_"$(hostname)".md

Or:

    cat CLAUDE_main.md
    echo "bunch of instructions incorporating stuff from environment variables, lsb_release -a, etc."

The latter is a little harder to get lots of markdown formatting into, with the quote escapes and stuff.
tezza
Main problem for me is that the quality tails off on chats and you need to start afresh
I worry that the garbage at the end will become part of the memory.
How many of your chats do you end… “that was rubbish/incorrect, i’m starting a new chat!”
rwhitman
Exactly, and that's the main reason I've stopped using GPT for serious work. LLMs start to break down and inject garbage at the end; usually my prompt is abandoned before the work is complete, and I fix it up manually after.
GPT stores the incomplete chat and treats it as truth in memory. And it's very difficult to get it to un-learn something that's wrong. You have to layer new context on top of the bad information and it can sometimes run with the wrong knowledge even when corrected.
withinboredom
Reminds me of one time asking ChatGPT (months ago now) to create a team logo with a team name. Now anytime I bring up something it asks me if it has to do with that team name. That team name wasn’t even chosen. It was one prompt. One time. Sigh.
kfarr
I’ve used memory in Claude desktop for a while, after MCP was supported. At first I liked it and was excited to see the new memories being created. Over time it suggested storing strange things to memory (an immaterial part of a prompt), and if I didn’t watch it like a hawk it got really noisy and messy and made prompts less successful at accomplishing my tasks, so I ended up just disabling it.
It’s also worth mentioning that some folks attributed ChatGPT’s bout of extreme sycophancy to its memory feature. Not saying it isn’t useful, but it’s not a magical solution, it will definitely affect Claude’s performance, and it’s not guaranteed that the effect will be for the better.
visarga
I have also created an MCP memory tool; it has both RAG over past chats and a graph-based read/write space. But I tend not to use it much, since I feel it dials the LLM into past context to the detriment of fresh ideation. It is just less creative the more context you put in.
Then I also made an anti-memory MCP tool: it calls an LLM with a prompt that has no context except what is precisely disclosed. I found that controlling the amount of information disclosed in a prompt can reactivate the creative side of the model.
For example, I would take a project description and remove half the details, then let the LLM fill it back in. Do this a number of times, then analyze the outputs to extract new insights. Creativity has a sweet spot: if you disclose too much, the model will just give up on creative answers; if you disclose too little, it will not be on target. Memory exposure should be like a sexy dress, not too short, not too long.
I kind of like Claude's implementation of chat history search: it will use the tool when instructed, but normally won't. This is a good approach. ChatGPT memory is stupid; it will recall things from past chats in an uncontrolled way.
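A rough sketch of the controlled-disclosure idea described above; the `call_llm` helper, the line-level masking, and the 50% ratio are assumptions for illustration, not the commenter's actual tool:

```python
import random
from typing import Callable

def mask_details(description: str, keep_ratio: float = 0.5, seed: int = 0) -> str:
    """Disclose only a random subset of the description's lines."""
    rng = random.Random(seed)
    lines = [line for line in description.splitlines() if line.strip()]
    return "\n".join(line for line in lines if rng.random() < keep_ratio)

def fill_in_variants(call_llm: Callable[[str], str], description: str, n: int = 5) -> list[str]:
    """Ask a context-free LLM call to reconstruct the missing half, n times.

    call_llm(prompt) -> str is assumed to be stateless: no memory, no prior chat,
    only the text it is handed.
    """
    variants = []
    for i in range(n):
        partial = mask_details(description, seed=i)
        prompt = (
            "Here is a partial project description. Fill in the missing details so "
            "that the project is coherent and compelling:\n\n" + partial
        )
        variants.append(call_llm(prompt))
    return variants

# Comparing the variants against the full description can surface alternatives the
# model would not propose when handed everything (or a whole memory store) up front.
```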
miguelaeh
> Most importantly, you need to carefully engineer the learning process, so that you are not simply compiling an ever growing laundry list of assertions and traces, but a rich set of relevant learnings that carry value through time. That is the hard part of memory, and now you own that too!
I am interested in knowing more about how this part works. Most approaches I have seen focus on basic RAG pipelines or some variant of that, which don't seem practical or scalable.
Edit: and also, what about procedural memory instead of just storing facts or instructions?
labrador
I've been using it for the past month and I really like it compared to ChatGPT memory. Claude weaves its memories of you into chats in a natural way, while ChatGPT feels like a salesman trying to make a sale, e.g. "Hi Bob! How's your wife doing? I'd like to talk to you about an investment opportunity...", while Claude is more like "Barcelona is a great travel destination and I think you and your wife would really enjoy it."
deadbabe
That’s creepy, I will promptly turn that off. Also, Claude doesn’t “think” anything, I wish they’d stop with the anthropomorphizations. They are just as bad as hallucinations.
labrador
To each his or her own. I really enjoy it for more natural feeling conversations.
xpe
> I wish they’d stop with the anthropomorphizations
You mean in how Claude interacts with you, right? If so, you can change the system prompt (under "styles") and explain what you want and don't want.
> Claude doesn’t “think” anything
Right. LLMs don't 'think' like people do, but they are doing something. At the very least, it can be called information processing.* Unless one believes in souls, that's a fair description of what humans are doing too. Humans just do it better at present.
Here's how I view the tendency of AI papers to use anthropomorphic language: it is primarily a convenience and shouldn't be taken to correspond to some particular human way of doing something. So when a paper says "LLMs can deceive" that means "LLMs output text in a way that is consistent with the text that a human would use to deceive". The former is easier to say than the latter.
Here is another problem some people have with the sentence "LLMs can deceive"... does the sentence convey intention? This gets complicated and messy quickly. One way of figuring out the answer is to ask: Did the LLM just make a mistake? Or did it 'construct' the mistake as part of some larger goal? This way of talking doesn't have to make a person crazy -- there are ways of translating it into criteria that can be tested experimentally without speculation about consciousness (qualia).
* Yes, an LLM's information processing can be described mathematically. The same could be said of a human brain if we had a sufficiently accurate scan. There might be some statistical uncertainty, but let's say for the sake of argument this uncertainty was low, like 0.1%. In this case, should one attribute human thinking to the mathematics we do understand? I think so. Should one attribute human thinking to the tiny fraction of the physics we can't model deterministically? Probably not, seems to me. A few unexpected neural spikes here and there could introduce local non-determinism, sure... but it seems very unlikely they would be qualitatively able to bring about thought if it was not already present.
deadbabe
When you type a calculation into a calculator and it gives you an answer, do you say the calculator thinks of the answer?
An LLM is basically the same as a calculator, except instead of giving you answers to math formulas it gives you a response to any kind of text.
simonw
It's not 100% clear to me if I can leave memory OFF for my regular chats but turn it ON for individual projects.
I don't want any memories from my general chats leaking through to my projects - in fact I don't want memories recorded from my general chats at all. I don't want project memories leaking to other projects or to my general chats.
ivape
I suspect that’s probably what they’ve built. For example:
all_memories:
    Topic1: [{}…]
    Topic2: [{}..]

The only way topics would pollute each other would be if they didn't set up this basic data structure.
Claude Memory, and others like it, are not magic on any level. One can easily write a memory layer with simple clear thinking - what to bucket, what to consolidate and summarize, what to reference, and what to pull in.
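A toy version of such a bucketed layer, just to make the shape concrete (invented for illustration, not Anthropic's implementation):

```python
from collections import defaultdict

class TopicMemory:
    """Minimal memory layer: bucket notes by topic, recall only the relevant bucket."""

    def __init__(self, max_per_topic: int = 20):
        self.buckets: dict[str, list[str]] = defaultdict(list)
        self.max_per_topic = max_per_topic

    def remember(self, topic: str, note: str) -> None:
        bucket = self.buckets[topic]
        bucket.append(note)
        # Consolidate instead of growing forever; a real layer would summarize
        # the oldest notes rather than simply dropping them.
        if len(bucket) > self.max_per_topic:
            del bucket[: len(bucket) - self.max_per_topic]

    def recall(self, topic: str) -> str:
        """Only this topic's notes get pulled into the prompt, so other topics never pollute it."""
        return "\n".join(self.buckets.get(topic, []))

memory = TopicMemory()
memory.remember("project-a", "User prefers pnpm over npm.")
memory.remember("project-b", "Target machine runs Ubuntu 22.04.")
print(memory.recall("project-a"))  # only project-a notes come back
```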
pacman1337
Dumb. Why not say what it really is: prompt injection. Why hide details from users? A better feature would be context editing and injection. Especially with chat, it's hard to know what context from previous conversations is going in.
jamesmishra
I work for a company in the air defense space, and ChatGPT's safety filter sometimes refuses to answer questions about enemy drones.
But as I warm up the ChatGPT memory, it learns to trust me and explains how to do drone attacks because it knows I'm trying to stop those attacks.
I'm excited to see Claude's implementation of memory.
uncletaco
You’re asking ChatGPT for advice to stop drone attacks? Does that mean people die if it hallucinates a wrong answer and that isn’t caught?
withinboredom
This happens in real life too. I’ll never forget an LT walking in and asking a random question (relevant but he shouldn’t have been asking on-duty people) and causing all kinds of shit to go sideways. An AI is probably better than any lieutenant.
CC barely manages to follow all of the instructions within a single session in a single well-defined repo.
'You are totally right, it's been 2 whole messages since the last reminder, and I totally forgot that first rule in claude.md, repeated twice and surrounded by a wall of exclamation marks'.
I'd be wary of trusting its memories across several projects.