Stepping Back
75 comments
June 1, 2025 · alankarmisra
My thinking is threaded. I maintain lists (in a simple txt file and, more recently, in Notes on the Mac) and add tasks to them. Subtasks go into an indent. I have different notes for regular work/pet project/blog/learning/travel; priority-must-do-now/daily chores is a separate one. Every morning I open my priority/daily chores note and try to wind that up. And then I just scuttle around the other lists and do whatever my brain tells me I can.
I find that some days I do more from the blog notes and some days more from the regular work notes. The notes serve as goals for my brain, and it invents/discovers solutions in no particular order. This makes me more productive because I can switch when I'm bored (which to me is an indication that my brain needs more time to find solutions in this space). And if nothing is hitting the right note, I'll take a nap or read or watch a show for a bit or go for a long walk or hike - anything that's not on the to-do list, just to give myself the creative space. I find that giving myself problems to solve, and allowing my subconscious brain to invent solutions while I do other things, actually works quite well for me and allows me to make steady progress.
Labo333
Interesting. My process is similar, although based on the GTD method (for example with an Inbox list) and using Trello for implementation (I get syncing, task-level notes, multimedia, item drag-and-drop, etc.).
Lerc
My thinking is threaded too, but it's more like the XKCD primer plot diagram.
I have a churn of tasks and subtasks that I add to; my brain randomly picks one to be the most important thing in the world and I do that to the exclusion of all else.
Methylphenidate didn't help much but dexamphetamine seems to improve things a bit once I get past the drowsiness it causes.
evrimoztamur
Your fixation is a result of the fact that interacting with LLM coding tools is much like playing a slot machine: it grabs your gambling instincts and puts them in a chokehold. You're rolling dice for the perfect result without much thought.
rjpower9000
Good insight.
That's a better description than what I came up with, "TikTok for engineers". LLMs probably compound the issue, with that hope of a magic outcome. Though I've had many problems pre-LLM where I was plowing through it myself and couldn't let up...
amelius
I'm sure they are working hard to tell the LLM how to think and where it went wrong.
rjpower9000
It's true you're still working, but it's a different, more distracted effort: you put the LLM on something for a while and then come back.
Sometimes it does it right, sometimes not. I can see the relation to gambling if, say, it does it right 50% of the time. If I had been taking a more scientific approach to the problem and had a clear direction of what I wanted to test, I suspect I wouldn't have gotten quite as "stuck".
munificent
That mixed with a dash of simulated social interaction making you not want to thwart your AI "partner" and give up.
jimbob45
Is there any progress toward more deterministic AI? Like seeding with predictable values or a nanny AI that discards hallucinations? I know the smartest people in the world are working on this stuff so I’m sure they’ve thought of everything before me but I don’t know where to seek out layman’s news on cutting-edge AI developments that aren’t just “we threw more compute and tokens at it”.
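For what it's worth, the determinism knobs that exist today are greedy decoding (temperature 0) and, on some APIs, a best-effort seed parameter; neither one discards hallucinations, but together they make runs mostly repeatable. A minimal sketch, assuming the OpenAI Python client, with an illustrative model name:

```python
# Determinism levers available today (best-effort, not guaranteed).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the CAP theorem in one line."}],
    temperature=0,   # greedy decoding: always take the most likely token
    seed=42,         # best-effort reproducibility across identical requests
)
print(response.choices[0].message.content)
# If the backend changes, this fingerprint changes and repeatability breaks:
print(response.system_fingerprint)
```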
liamwire
I’ve found this problem to be compounded by the use of stimulant medications — no matter how aware of the phenomenon one is going into a task, it can feel nigh impossible sometimes to avoid locking into whatever path I’m on as the drugs kick in. This seems true not only of the task itself but also of the individual decisions that can constitute it. I don’t think this is surprising or novel, to be sure, but frustratingly predictable.
sureglymop
I find it interesting that no one has created a better interface for these LLMs.
Two things that should be available by now in conversations are branching and history editing. Branching is somewhat trivial so let's focus on the history.
Now when I last used an LLM API, the whole context was still managed manually. Meaning I had to send the API the whole conversation history as one long text for every new query.
This means that I could technically change a part of the history directly. Manipulating the history, though, is not really a trivial problem. The LLM would need to re-evaluate starting from that point.
But, the re-evaluation may result in something completely different... If there are branches, perhaps it would also be desirable to let it propagate into the branches.
Next, re-evaluate until where? We can assume a conversation happened until the present moment and the user may have changed their reality/state during the conversation before that point. For example, I may have changed some function based on a suggestion of the LLM. Now, for re-evaluation it would actually be nice if the LLM could also take that state change into consideration.
Here it would be nice if the LLM had the concept of certain logical facts and pieces of information and how they relate to each other but with an interface so that we could see that. If such a piece of information in the conversation is then changed, that would affect the information that is related to it. We could follow a sort of sequence of logical conclusions being made to verify what happened.
Just some thoughts with no conclusion. I think current LLM interfaces could be a lot better.
jstanley
I think you may be confused about how LLMs work. Editing the history you send to the API works perfectly fine. You don't need the LLM to specifically "re-evaluate" the history from the point you edited.
In a way it is "re-evaluating" the entire history for every token it generates.
sureglymop
What I'm saying is: if, earlier in the history, I change something, that thing may also still be used further down in the history, which I specifically don't want.
So yes, I know that I can just do a simple text replacement somewhere in the history but that's not really useful. I want the conversation to be re-evaluated from that point because it might have gone differently with that changed message.
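Since the API is stateless, that re-evaluation is just list surgery plus a regeneration call: replace the message, drop everything after it, and re-query. A rough sketch, assuming the OpenAI Python client (helper name and contents hypothetical):

```python
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Help me write a URL parser in Python."},
    {"role": "assistant", "content": "...original answer..."},
    {"role": "user", "content": "Now add query-string handling."},
    {"role": "assistant", "content": "...original answer..."},
]

def reevaluate_from(history, index, new_content, model="gpt-4o"):
    """Replace the message at `index`, drop everything after it, regenerate."""
    edited = history[:index] + [{"role": "user", "content": new_content}]
    reply = client.chat.completions.create(model=model, messages=edited)
    edited.append({"role": "assistant",
                   "content": reply.choices[0].message.content})
    return edited

# The continuation is regenerated under the edit, not textually patched:
history = reevaluate_from(history, 0, "Help me write a URL parser in Rust.")
```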
haiku2077
I can do this with my assistants in Zed and Kagi just fine? Did a quick test and it works exactly how you describe.
quantadev
Yeah, that's where the 'branching' comes in for sure. Ideally a chatbot would be 'tree-based', where you can just go back to a prior point where you wish you had said something different, and just pick up as if you were back at that point in time, while the other parallel branches are ignored.
The way this is done, technically of course, is you build the "Context" by walking back up the tree parent by parent, until you reach the root, building up 'context' as you go along, in reverse order. That will then always be the 'right' context, based on branching.
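A minimal sketch of that structure in plain Python (all names illustrative):

```python
# Tree-based chat: each message node points at its parent; the context for
# a new query is built by walking up to the root and reversing.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    role: str                       # "user" or "assistant"
    content: str
    parent: Optional["Node"] = None

def build_context(leaf: Node) -> list[dict]:
    """Walk parent-by-parent to the root, then reverse into API order."""
    path = []
    node: Optional[Node] = leaf
    while node is not None:
        path.append({"role": node.role, "content": node.content})
        node = node.parent
    return list(reversed(path))

# Branching is just attaching a new child to an earlier node; messages on
# parallel branches never appear in the rebuilt context.
root = Node("user", "Explain monads.")
answer = Node("assistant", "A monad is...", parent=root)
branch_a = Node("user", "Show me in Haskell.", parent=answer)
branch_b = Node("user", "Show me in Python.", parent=answer)
print(build_context(branch_b))  # root -> answer -> branch_b only
```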
dmd
I'm confused by what you mean by this, because most LLM interfaces other than the very very basic ones do already have branching and history editing? E.g. pretty much every third party LLM interface? Are you talking about something more than that?
sureglymop
Yes. Here is a scenario:
Let's say you ask an LLM to help you with a programming task. The LLM gives you a solution and you copy that into your code. While working toward this solution, over multiple messages, you give it more context about your code base. You ask it to help with multiple sub-problems and it answers with suitable responses containing code snippets.
Now, you change your mind and want to actually write your program in a different language. You could at this point just tell that to the LLM and it would change course from here on out. Every time it is queried, the full history including the original problem is sent to it. The change in programming language is taken into account because it eventually gets to the point where you changed your mind.
But instead what you'd probably want is the solution it gave you initially, and all the little solutions to all the sub-problems, but in the newly chosen programming language! You'd like to simulate or re-evaluate the conversation with that change applied.
I haven't seen that implemented in a usable way but as I said, this is not a trivial feature either.
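One way to sketch that "rebase" (hypothetical helper, assuming the OpenAI Python client): keep only your side of the conversation, rewrite the turn that set the premise, and replay the turns so every answer is regenerated under the new constraint:

```python
from openai import OpenAI

client = OpenAI()

user_turns = [
    "Help me build a CSV importer. Use Python.",
    "How should I handle malformed rows?",
    "Add a progress callback.",
]

def replay(user_turns, new_first_turn, model="gpt-4o"):
    """Re-run the whole conversation with the first turn rewritten."""
    messages = []
    for turn in [new_first_turn] + user_turns[1:]:
        messages.append({"role": "user", "content": turn})
        reply = client.chat.completions.create(model=model, messages=messages)
        messages.append({"role": "assistant",
                         "content": reply.choices[0].message.content})
    return messages

rebased = replay(user_turns, "Help me build a CSV importer. Use Go.")
```

Of course the replayed conversation can diverge from the original thread entirely, which is part of what makes this hard to productize.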
mattnewton
Check out Cursor. It doesn't have the old-branch "rebasing" you are describing, but going back into the history reverts changes made after that point by default. (And honestly I am not convinced automatically rebasing that way is better than rerunning the prompts with changes in most cases; it seems too likely to go off the rails to me.)
hombre_fatal
Neither Claude, OpenAI, nor Gemini let you do either, so it's not ubiquitous.
You can't delete messages, you can't branch, and you can't edit the LLM's messages. These are basic, obvious, day-1 features I implemented in my own LLM UI that I built over 3 years ago, yet I'm still waiting for flagship providers to implement them.
It's pathetic how rare good UX is in software.
Can you list some UIs with these features?
dmd
Right, but anyone who wants those features can use a 3rd party front end like LibreChat.
squidbeak
You can certainly branch with Gemini in AIStudio.
RossBencina
> Two things that should be available by now in conversations are branching and history editing.
Prapti can do editable history no problem, the entire history is just the markdown file. We experimented with branching using git to automatically switch branches based on how you edit the history.
I still use it frequently. I prefer keeping my chats in local markdown files.
amelius
One simple thing I'm missing is a way to point at a word, and let the LLM know that that's where it went wrong (so it can generate an alternative output).
sureglymop
Yup, that's exactly what I mean!
quantadev
The first chatbot I wrote back in early 2023 had exactly this (branching and history editing), and I was highly active on the OpenAI Discord showing it to them, trying to influence them to go in this direction. Tree-based editors can be hard to implement and confusing to users, so that's one reason GUIs tend not to attempt it and just go with linear (although often nowadays "editable") chats.
Related to this, I've learned that a lot of the time it's best to just use a "Context File": a big file of instructions explaining goals, architecture, and constraints, so I can just tell the AI to read it and then say "Do Step 21 Now", for example, if step 21 is laid out in the file. This way I can sort of micromanage the context, rather than trying to reuse old discussions, which are usually 90% throwaway content that is no longer applicable.
twodave
What the author is describing isn’t my experience at all. I tend to operate from a lens of asking for increased feedback as my work takes me further off the “beaten path”. There’s essentially zero chance I’m going to go off screwing around porting code with an LLM unless I already bounced the idea of doing that off of another person or two that I trust.
In my hobbies I think going off on tangents _is_ the experience, so I have no qualms about doing it. But when I’m working I’m almost always thinking in terms of stewardship. Is what I’m doing good for the business, our customers, my coworkers, my own professional development, etc.? Any sort of fixation on minutiae is just subservient to those questions.
That said, I don’t think of a 5 minute tangent like the author describes as a heavy investment. If anything it’s just a happy little side-journey taken to better understand the nature of the problem space. For me the threshold of “this may be a waste of time” is more measured in hours than minutes.
josefrichter
Reminded me tangentially of the legendary talk “Hammock Driven Development”.
camkego
I enjoyed the article, and as a longtime developer I certainly relate to being heads-down on a problem, only to step away for a walk or a breather and realize I can maybe avoid solving the immediate problem altogether.
I also don't think it's possible to focus 100% on a detailed, complex problem and concurrently question whether there is a better path or a way to avoid the current problem. Sometimes you just need to switch modes between focusing on the details in the weeds and popping back up to ask whether this even has to be completed at all.
kranner
Previous discussions on psychonetics and deconcentration of attention seem relevant, e.g. https://news.ycombinator.com/item?id=10028317
Personally I've found a continuous open awareness style of meditation has really helped me balance things out, as I went from someone with very little doggedness to e.g. being two weeks into cataloguing all my books with Delicious Library before realising it was kind of pointless. The open awareness practice (very different from focus-on-your-breath and also the visual deconcentration discussed in the link above) is about encouraging the recognition of this-is-how-things-are as it naturally and spontaneously occurs; doing this more and more also builds confidence in one's intuition about (in this context) whether to persevere at the current task or whether to step back.
I don't think there is a foolproof system that can be developed as a substitute for this kind of intuition. Speaking for myself I can and will continue to second-guess over time any external system that may feel definitive when it is first established. But I can learn to trust the (unfortunately) indeterminate part of myself that tells me I'm doing the right thing.
jjtheblunt
Great article. I think of this phenomenon by the term my pilot friends use: “Target Fixation”.
RossBencina
This reminds me of Richard Hamming's practice of spending time every Friday to consider the big questions of his field. Here's the first summary I found: https://nobaproject.com/blog/2018-11-01-tough-questions-and-...
The main Hamming materials I'm aware of are:
"You and your research" https://www.youtube.com/watch?v=a1zDuOPkMSw
and the course, "Learning to Learn" https://www.youtube.com/playlist?list=PL2FF649D0C4407B30
The original course notes are online somewhere; I can't find them right now.
And I just found that there is a book: https://gwern.net/doc/science/1997-hamming-theartofdoingscie...
A serious talk by British comedian John Cleese, "Creativity In Management", is also relevant: https://www.youtube.com/watch?v=Pb5oIIPO62g It touches on switching between "open" and "closed" modes of thought. Perhaps this is what we'd call switching between the DMN and TPN; the exclusive switching between these networks seen in neurotypical brains is theorised to be impaired in ADHD.
gwern
https://gwern.net/doc/science/1986-hamming#great-thoughts-fr...
You can think of it as a RL problem, and there are some interesting algorithms which achieve good performance by periodically 'breaking out' of exploitation, but less and less: https://arxiv.org/abs/1711.07979 (I expect that you can come up with an infinite hierarchy of 'wake ups' which converge to a fixed overhead and which aren't terribly far off an optimal schedule of replanning wake-ups, and that something like 'great thoughts Friday' is the first step; then you'd have 'great thoughts first-of-the-month' and 'great thoughts new year's day' etc: https://gwern.net/socks#fn10 )
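As a toy illustration of the first level of such a schedule (my own sketch, not from the linked paper): space the "break out and replan" wake-ups geometrically, so check-ins grow ever rarer but never stop:

```python
def wakeup_days(horizon_days: int, first_gap: int = 7) -> list[int]:
    """Days on which to break out of heads-down work and replan."""
    days, gap, day = [], first_gap, first_gap
    while day <= horizon_days:
        days.append(day)
        gap *= 2           # each interval doubles, so check-ins thin out
        day += gap
    return days

print(wakeup_days(365))  # [7, 21, 49, 105, 217]: ever-rarer replanning
```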
mattlondon
I read somewhere that the subconscious brain continues "working on problems" even when you are not actively working on it consciously. Hence the expression to "sleep on it" when faced with a difficult/big decision.
I am not sure how much I believe that or how true it is, but I have found that many times I have come up with a better solution to a problem after going for a run or having a shower. So there might be some truth in it.
But yeah, it is hard to know when you are in too deep sometimes. I find that imposter syndrome usually kicks in with thoughts of "why am I finding this so complex or hard? I bet a colleague would have solved this with a simple fix or a concise one-liner! There must be a better way?" TBH this is where I find LLMs most useful right now: to think about different approaches, or to point out all the places where code will need to change if I make a change and whether there is a less-destructive/more-concise way of doing things that I hadn't thought of.
Phreaker00
> I read somewhere that the subconscious brain continues "working on problems" even when you are not actively working on it consciously. Hence the expression to "sleep on it".
It's something I've actively used for almost two decades now when dealing with challenges I'm stuck on. I remember one of my professors explaining it as having a 'prepared mind'.
What I do is, before I go to bed, try to summarize the problem to myself as concisely as possible (like rubber ducking) and then go to sleep. Very often the next morning I wake up with a new insight or approach that solves in 10 minutes the problem that took me hours the day before.
panstromek
Richard Hickey talked about this in his "Hammock driven development" talk: https://youtu.be/f84n5oFoZBc?si=Ups64pcKCl47nNCY
hidingfearful
Well, brain science is pretty clear that the brain uses about the same amount of glucose (energy) all the time, whether resting or working.
Contrast that with, for example, your biceps, which use a lot of energy when lifting something and then scale back down to near zero.
So in terms of energy use, yes, the brain is always going at near full burn.
nixpulvis
I'm not sure how much my brain is subconsciously working, but it does make sense that popping the stacks and clearing the caches sometimes will force you to re-evaluate choices and thoughts you had before. This can be very valuable in steering a design or finding a new solution.
JSR_FDED
After taking a break I often realize I can delete all the code from the last hour and either define away the problem entirely, or fix it in a much simpler way.
But it’s so scary to depend on that flash of insight; after all, it’s not guaranteed to happen. So you keep grinding in an unenlightened state.
If there were a reliable way to task your subconscious with working on a problem in the background, I could probably do my job in a third of the time.
skydhash
The approach that always works for me is iteration and recursion. Just like drawing, where you do a loose sketch and refine it until you're happy: you're either stepping back to work on the whole, or focusing on a small part to detail it[0].
So for a software project, I break it into parts recursively. Sometimes, for a module, I can build some scaffolding even when I'm not sure about the full implementation. I can go up and down that tree of problems, and I always have something to break away from my current thinking track. So there's always a lot of placeholder code, stubs, and various TODO comments around my codebase. And I rely heavily on refactoring (which is why I learned Vim, then Emacs; they helped me with the kind of quick navigation this approach needs), plus a REPL and/or some test frameworks. A toy sketch of that scaffolding style follows below.
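As a toy illustration (all names hypothetical): stub out the tree of sub-problems first, leave TODOs at the undecided nodes, and fill them in whatever order works:

```python
# Scaffold first, detail later: every undecided node is a stub with a TODO,
# so there is always a different part of the tree to jump to.
def load_catalog(path: str) -> list[dict]:
    raise NotImplementedError  # TODO: decide CSV vs. sqlite

def dedupe(records: list[dict]) -> list[dict]:
    # Placeholder: exact-match on ISBN only; fuzzy matching comes later.
    return list({r["isbn"]: r for r in records}.values())

def export_report(records: list[dict], out_path: str) -> None:
    raise NotImplementedError  # TODO: output format undecided

def main() -> None:
    records = dedupe(load_catalog("books.csv"))
    export_report(records, "report.html")
```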
JSR_FDED
I do something similar, and that is likely one of the ways for your subconscious to engage with the material.
However I’ve noticed that too much refactoring while exploring the problem space becomes tedious for me. At that point I go back to a notepad or whiteboard. Maybe this change of medium is another way to engage the subconscious?
And to your point about vim/emacs: the choice of tools no doubt impacts your engagement with the problem at hand. That's why having AI generate reams of code doesn't work for me, but using it to iteratively develop some PoC code whose core I then incorporate into my own code works brilliantly.
skydhash
> However I’ve noticed that too much refactoring while exploring the problem space becomes tedious for me. At that point I go back to a notepad or whiteboard.
Yes, I'm only writing code when I've got a good idea of what I want to write. Most of the time, I'm doing research or doodling design ideas. Refactoring is when I've thought of a better approach for something. It's almost always easy to do, as big refactoring only happens at the beginning of the project, when not a lot has been written.
> That's why having AI generate reams of code doesn't work for me
It also doesn't work for me, because it's always too much implementation detail at the beginning of the project. I'm keeping things loose on purpose while I'm exploring the domain. And when the time comes to finalize, it's mostly copy and paste, as I have all the snippets I need. It can serve as a good source of examples, though, for libraries and frameworks you don't know much about.