A stateful browser agent using self-healing DOM maps
34 comments
October 16, 2025 · tnolet
artpar
Everyone thinks of typical e-commerce pages when it comes to "a browser agent doing something", but our real use cases are far from shopping for the user. Your point still stands, though. The idea is that there may be websites where generating stable selectors/hierarchy maps won't cut it, but 80% of websites (in the 80-20 sense) are not like that, including a lot of internal dashboards and interfaces. (There will also be issues for websites with proper i18n implementations if the selectors are aria-label based.)
Self-healing CSS selectors are also only one part of the story. The other part is a cohesive interface for the agent itself to use these selectors.
rco8786
This tool seems relevant to my interests, but I gotta say I cannot figure out how to use the extension.
It seems like I'm only able to use the pre-existing/canned workflows that are provided under different "Persona"s? And there's no way for me to just create a new workflow from scratch for my specific use case.
Am I missing something obvious?
shardullavekar
We launched Agent4 recently. You can install it from here: https://chromewebstore.google.com/detail/agent4/kipkglfnhnpb...
The one you're referring to will be taken down soon. Ping me on Discord if you need help trying it.
philo23
Maybe this is a lack of understanding on my part, but this bit of the explanation sets off alarm bells for me:
> Under the hood, we're building a client-sourced RAG for the DOM. An agent's first move on a page is to check a vector DB for a known "map." ... This creates a wild side-effect: the system is self-healing for everyone. One person's failed automation accidentally fixes it for the next hundred users.
I think I'd like to know exactly what kind of data is extracted from the DOM to build that shared map.
artpar
Agent4 stores "stable selectors" that worked (when it performs a task for the first time, most of the time is spent identifying these CSS/XPath selectors). Memories are pretty straightforward at this point: they are stored locally in your browser's IndexedDB (you can inspect them from the Chrome inspector).
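(A minimal sketch of what storing these memories in IndexedDB could look like; the store and field names are illustrative, not Agent4's actual schema:)

    // Open (or create) a local store for selector "memories".
    // Names are hypothetical; Agent4's real schema may differ.
    function openMemoryDb() {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open('selector-memories', 1);
        req.onupgradeneeded = () => {
          req.result.createObjectStore('memories', { keyPath: 'id' });
        };
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    // Save a stable selector that worked for a given page + action.
    async function saveMemory(pageUrl, action, selector) {
      const db = await openMemoryDb();
      const tx = db.transaction('memories', 'readwrite');
      tx.objectStore('memories').put({
        id: `${pageUrl}::${action}`,
        selector,
        lastUsed: Date.now(),
      });
      return new Promise((resolve, reject) => {
        tx.oncomplete = resolve;
        tx.onerror = () => reject(tx.error);
      });
    }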
philo23
Good to hear, that’s what I was hoping that it was doing.
erichocean
How are you mapping from "click this element" (presumably obtained via a VLM) to the actual DOM locator that refers to it?
I guess Playwright can do it in "record" mode; I'm curious how you do it from a Chrome extension.
Spitballing here, you inject an event filter on the page and when the click happens, grab the element and run some code to synthesize a selector that just refers to that element? (Presumably you could just reuse Playwright's element-to-locator code at this point.)
artpar
So when you go into "selector" mode, the plugin adds event listeners to all the DOM nodes. Based on your click, it first tries to generate a bunch of selectors statically (multiple, both CSS- and XPath-based), and then, based on your guidance, it's Agent4's job to make them stable.
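(A simplified sketch of what that capture step could look like; this is not the extension's actual code, and a real implementation would rank and verify candidates for stability:)

    // Selector mode, roughly: capture clicks and generate a few static
    // candidate selectors (CSS and XPath) for the clicked element.
    function candidateSelectors(el) {
      const candidates = [];
      if (el.id) candidates.push(`#${CSS.escape(el.id)}`);
      const testId = el.getAttribute('data-testid');
      if (testId) candidates.push(`[data-testid="${testId}"]`);
      const aria = el.getAttribute('aria-label');
      if (aria) candidates.push(`${el.tagName.toLowerCase()}[aria-label="${aria}"]`);
      // XPath fallback keyed on visible text (fragile if the copy changes).
      const text = el.textContent.trim().slice(0, 40);
      if (text) candidates.push(`//${el.tagName.toLowerCase()}[contains(normalize-space(), "${text}")]`);
      return candidates;
    }

    document.addEventListener('click', (e) => {
      e.preventDefault();
      e.stopPropagation();
      console.log('candidates:', candidateSelectors(e.target));
    }, { capture: true });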
cjr
document.elementFromPoint to get the element at the coordinates, then an npm package similar to optimal-select to come up with a unique CSS selector.
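(A bare-bones version of that idea, without the heuristics a library like optimal-select adds:)

    // Resolve the element under a point, then walk up the tree building an
    // nth-of-type path until the selector matches exactly one node.
    function selectorFromPoint(x, y) {
      let el = document.elementFromPoint(x, y);
      const parts = [];
      while (el && el !== document.documentElement) {
        const tag = el.tagName.toLowerCase();
        const index = Array.from(el.parentNode.children)
          .filter((c) => c.tagName === el.tagName)
          .indexOf(el) + 1;
        parts.unshift(`${tag}:nth-of-type(${index})`);
        const candidate = parts.join(' > ');
        if (document.querySelectorAll(candidate).length === 1) return candidate;
        el = el.parentElement;
      }
      return parts.join(' > ');
    }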
bogdanoff_2
Asking here because it seems related: I'm trying to use Cursor to work on a webapp. It gets frustrating because vanilla Cursor is "coding blind" and can't actually see the result of what it is doing, or whether it works.
I ask it to fix something. It claims to know what the problem is, changes the code, and then claims it's fixed. I open the app, and it's still broken. I have to repeatedly, and way too often, tell it what is broken.
Now, supposing I'm "vibe coding" and don't really care about the obvious fact that the AI doesn't actually know what it is doing, it's still frustrating that I have to be in the loop just to provide very basic information like that.
Are there any agentic coding setups that allow the agent to interact with the app it's working on to check if it actually works?
JimDabell
You can use things like Browser Use and Playwright to hook things like that up, but you’re right, this is a very underdeveloped area. Armin Ronacher has a talk that covers some of this, such as unifying console.log, server logs, SQL, etc. to feed back to the LLM.
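(A minimal sketch of that kind of feedback loop with Playwright in Node.js; how the result gets handed back to the LLM is left out:)

    // Load a page, collect console output and a screenshot, and return
    // them so an agent can check whether its change actually worked.
    const { chromium } = require('playwright');

    async function checkPage(url) {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      const consoleMessages = [];
      page.on('console', (msg) => consoleMessages.push(`${msg.type()}: ${msg.text()}`));
      page.on('pageerror', (err) => consoleMessages.push(`pageerror: ${err.message}`));

      await page.goto(url);
      const screenshot = await page.screenshot(); // can also be fed to a vision model
      await browser.close();
      return { consoleMessages, screenshot };
    }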
wahnfrieden
This is the way. You can also feed screenshots back to it.
xnx
Gemini CLI Chrome devtools MCP addresses this: https://developer.chrome.com/blog/chrome-devtools-mcp
hatmanstack
Jump the line and just install it. Who needs to read stuff? https://github.com/ChromeDevTools/chrome-devtools-mcp?tab=re...
tomashubelbauer
Look into the Playwright MCP server; it allows coding agents to scrutinize the results of their work in the web browser. There is also an MCP server for the Chrome DevTools protocol AFAIK, but I haven't tried it.
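(For reference, wiring either of these into an MCP-capable client is usually just a config entry along these lines; the package names are from memory, so check the linked docs:)

    {
      "mcpServers": {
        "playwright": {
          "command": "npx",
          "args": ["@playwright/mcp@latest"]
        },
        "chrome-devtools": {
          "command": "npx",
          "args": ["chrome-devtools-mcp@latest"]
        }
      }
    }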
kevinsync
I was in the same boat on a side project (Electron, Claude Code) -- I considered Playwright but ended up building a simple, focused API instead that allows Claude to connect to the app to inspect logs (main console + browser console), query internal app data + state, and execute arbitrary JS.
It's sped up debugging a lot since I can just give it instructions like "found a bug that does XYZ, I think it's a problem with functionABC(); connect to app, click these four buttons in this order, examine the internal state, then trace through the code to figure out what's going wrong and present a solution"
I was pretty resistant at first to delegating debugging blindly like that, but it's made the workflow pretty smooth: I can occasionally just open the app, run through it as a human user, take notes on bugs and flow issues that I find, log them with steps to reproduce, then give Claude a list of bugs to noodle on while I'm focusing on stuff LLMs are terrible at (design, UI, frontend work, etc.)
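(A hypothetical sketch of that kind of local-only debug API in an Electron main process; the endpoint names and the window.__appState hook are made up:)

    // `win` is an Electron BrowserWindow; logBuffer is filled elsewhere
    // from main-process console hooks and renderer console messages.
    const http = require('http');

    const logBuffer = [];

    function startDebugServer(win, port = 8765) {
      http.createServer(async (req, res) => {
        res.setHeader('Content-Type', 'application/json');
        if (req.url === '/logs') {
          res.end(JSON.stringify(logBuffer.slice(-200)));
        } else if (req.url === '/state') {
          const state = await win.webContents.executeJavaScript('window.__appState');
          res.end(JSON.stringify(state));
        } else if (req.url === '/eval' && req.method === 'POST') {
          let body = '';
          req.on('data', (chunk) => (body += chunk));
          req.on('end', async () => {
            const result = await win.webContents.executeJavaScript(body);
            res.end(JSON.stringify({ result }));
          });
        } else {
          res.statusCode = 404;
          res.end('{}');
        }
      }).listen(port, '127.0.0.1'); // bind to localhost only
    }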
artpar
I don't know if Playwright works without Chrome in debug mode, but I tried the MCP for Chrome DevTools and it requires Chrome to be started in debugging mode, which basically means you can't log into a lot of sites (especially Google) since they will block you with an "Unsafe" message. Works pretty well if you own the target website.
shardullavekar
A built-in MCP server that takes a look at what's broken and communicates with Cursor is on our roadmap. Join the Discord and we will keep you posted there.
artpar
So I actually have this setup (a bridge server) which I use for Agent4 itself (so Claude Code can talk to Agent4). It makes a lot of sense to publish that bridge in MCP form as well.
simpaticoder
Couldn't you solve this by having the agent do a first pass through a page and generate a (java)script that interacts with the interesting parts of the page, and then prepend the script (if it's short enough) or a list of entry points (if it's not) to the prompt such that subsequent interactions invoke the script rather than interact directly with the page?
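(As a hypothetical example, such a first-pass script could just be a handful of named entry points over one page, so later prompts can say "call searchIssues('foo')" instead of reasoning about the raw DOM; the selectors and names below are made up:)

    // Generated once per page; the entry point names get listed in the prompt.
    const page = {
      searchIssues(query) {
        const box = document.querySelector('input[name="q"]');
        box.value = query;
        box.form.requestSubmit();
      },
      openFirstResult() {
        document.querySelector('.issue-list a')?.click();
      },
      readResultTitles() {
        return [...document.querySelectorAll('.issue-list a')]
          .map((a) => a.textContent.trim());
      },
    };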
artpar
If I'm reading you correctly, you've captured the whole essence of Agent4.
So it does the first pass (based on your goals) and makes memories (these are local).
Now you tell the agent you want to do this repeatedly, so it will make a workflow for you based on these memories and interactions (these workflows are saved on the server, currently all public, but we are working out permission/group-based access).
The problem is that many times what the agent thinks is stable isn't really, so there's a feedback loop for the agent to test out the workflow and improve it (it's basically Claude Code/Codex sitting in the browser).
Workflow details are appended to the prompt based on a match against the user query or the open tabs.
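(For illustration, a saved workflow might look roughly like this; the field names are guesses, not Agent4's actual format:)

    const workflow = {
      name: 'Export monthly report',
      match: { urlPattern: 'https://dashboard.example.com/reports*' },
      steps: [
        { action: 'click', selector: '[data-testid="reports-tab"]' },
        { action: 'type', selector: 'input[aria-label="Month"]', value: '{{month}}' },
        // Captured detail: the autocomplete takes a moment to appear.
        { action: 'waitFor', selector: '.autocomplete-results', timeoutMs: 2000 },
        { action: 'click', selector: 'button.export-csv' },
      ],
    };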
simpaticoder
Okay, I read your post more carefully and it seems like you're attempting to build one central script for a given URL. Assuming one-shot script generation is unreliable and requires iterative improvement, this makes sense. Of course I'm biased in favor of local-first, privacy-preserving, non-distributed solutions if they exist, so I'd be curious to know if/how you measured the reliability of one-shot script generation for a basket of likely web apps.
artpar
One-shot is pretty much not going to work, either at the single-step level or if you ask the LLM to generate the whole workflow in one shot. We haven't measured it as such, but even for static websites like the Hacker News front page it takes a couple of rounds of back and forth for the LLM to get it right. Somehow, after all the instructions, the LLM will still "guess" the selector instead of checking the page/DOM contents. And then there are a lot of other minor details that need to be captured, like "you need to wait a couple of seconds for the autocomplete results to show up". If you tell it to just make a workflow, it will generate some garbage and call it a day.
ripped_britches
“One person’s map fixes everyone else’s”
Hm somehow I feel like this is a giant step in the wrong direction.
artpar
Worst case, we can just shut down sharing/public workflows altogether. Or did you have something else in mind?
arkmm
Neat approach, but seems like the eventual goal of caching DOM maps for all users would be a privacy nightmare?
artpar
Yes, I can imagine PII somehow being stored in a workflow. I frequently see LLMs hardcoding tests just to make the user happy, and this can also happen in the browser version: if something is too hard to scrape but the agent is able to infer it from a screenshot, it might end up making a workflow that seems correct but is just hardcoded with data. We are thinking of multiple guards/blocks to not let the user create such a workflow, but the risks that come with an open-ended agent are still going to be present.
brianjking
Is this able to load for anyone?
shardullavekar
It's a Chrome extension. Works if you use Chrome.
brianjking
I couldn't load the article. I was getting an nginx error initially. I'm able to view it now. I think they were getting a bit squeezed.
memet_rush
They didn't use the agent to self-heal.
phgn
Nope. Their entire website shows up with a white screen for me in the latest Chrome.
There's this error in the console: Failed to load module script: Expected a JavaScript-or-Wasm module script but the server responded with a MIME type of "text/html". Strict MIME type checking is enforced for module scripts per HTML spec.
This is, as far as I understand, self-healing ONLY if the name of a CSS class changes, not for anything else. That seems like a very, very narrow definition of "self-healing": there are 9999 other subtle or not-so-subtle things that can change per session or per updated version of a page.
If you run this against, say, a typical e-commerce page where the navigation and all screen elements are super dynamic — user-specific data, language, etc. — this problem becomes even harder.