Show HN: Browser MCP – Automate your browser using Cursor, Claude, VS Code
227 comments
· April 7, 2025
namukang
Hey, creator of Browser MCP here.
1. Yes, the extension uses an anonymous device ID and sends an analytics event when a tool call is used. You can inspect the network traffic to verify that zero personalized or identifying information is sent.
I collect anonymized usage data to get an idea of how often people are using the extension in the same way that websites count visitors. I split my time between many projects and having a sense of how many active users there are is helpful for deciding which ones to focus on.
2. The extension is completely written by me, and I wrote in this GitHub issue why the repo currently only contains the MCP server (in short, I use a monorepo that contains code used by all my extensions and extracting this extension and maintaining multiple monorepos while keeping them in sync would require quite a bit of work): https://github.com/BrowserMCP/mcp/issues/1#issuecomment-2784...
I understand that you're frustrated with the way I've built this project, but there's really nothing nefarious going on here. Cheers!
asaddhamani
Hey, as a maker, I get it. You spent time building something, and you want to understand how it gets used. If you're not collecting personal info, there is nothing wrong with this.
Knee-jerk reactions aren't helpful. Yes, too much tracking is not good, but some tracking is definitely important to improving a product over time and focusing your efforts.
nlarew
"detailed" is an anonymized deviceId and a counter of tool calls? Heaven forbid an app want to get some basic insights into how people use it.
tomrod
Correct. Telemetry should _always_ be opt-in, with an explicit and easy way to decline.
Any other mode of operation is morally bankrupt.
nlarew
Really? The hyperbole does not help anyone here.
I don't sign a term sheet when I order at McDonald's, but you can be damn sure they count how many Big Macs I order. Does that make them morally bankrupt? Or is it just a normal business operation that is actually totally reasonable?
observationist
This automatic sense of entitlement to surveil users is the absolute embodiment of the banality of evil.
It's 2025 - we want informed consent and voluntary participation with the default assumption that no, we do not want you watching over our shoulders, and no, you are not entitled to covertly harvest all the data you want and monetize that without notifying users or asking permissions. The whole ToS gotcha game is bullshit, and it's way past time for this behavior to stop.
Ignorance and inertia bolstering the status quo doesn't make it any less wrong to pile more bullshit like this onto the existing massive pile of bullshit we put up with. It's still bullshit.
nlarew
You're making a huge jump from "gathering anonymous counters to understand how many people use the thing" to "harvest all the data you want and monetize it".
If they were tracking my identity across sites and actually selling it to the highest bidder that's one thing that we'll definitely agree on. This is so so far from that.
You're welcome to build and use your own MCP browser automation if you're so hostile to the developer that built something cool and free for you to use.
bn-l
The only chrome extensions you should install are ones you can build yourself from source.
neycoda
... And have reviewed and understand completely
EGreg
So ... pretty much none
Keep in mind, extensions can update themselves at any time, including when they're bought out by someone else. In fact, I bet that's a huge draw... imagine buying an extension that "can read and modify data on all your websites" and then pushing an update that, oh I dunno, exfiltrates everyone's passwords from their gmail. How would most people even catch that?
DO NOT have any extensions running by default except "on click".
There should be at least some kind of static checker of extensions for their calls to fetch or other network APIs. The Web is just too permissive with updating code, you've got eval and much more. It would be great if browsers had only a narrow bottleneck through which code could be updated, and would ask the user first.
(That wouldn't really solve everything since there can be sleeper code that is "switched on" with certain data coming over the wire, but better than what we have now.)
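A crude sketch of that checker idea, purely illustrative (the directory layout and regex patterns are my own assumptions; a real tool would need an actual JS parser, since regexes miss minified or dynamically-constructed calls):

```python
import re
from pathlib import Path

# Flag network/eval usage in an unpacked extension directory.
# Illustrative only: a serious checker would parse the JavaScript.
SUSPICIOUS = re.compile(r"\b(fetch|XMLHttpRequest|WebSocket|eval|new Function)\b")

def scan_extension(root: str) -> dict:
    """Return {file path: [line numbers]} for lines touching network/eval APIs."""
    hits = {}
    for path in Path(root).rglob("*.js"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS.search(line):
                hits.setdefault(str(path), []).append(lineno)
    return hits
```

Even this toy version would surface an update that suddenly adds a `fetch` to a content script, which is the "narrow bottleneck" idea in miniature.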
bhouston
So the website claims:
"Avoids bot detection and CAPTCHAs by using your real browser fingerprint."
Yeah, not really.
I've used a similar system a few weeks back (one I wrote myself), having AI control my browser using my logged-in session, and I started to get CAPTCHAs during my human sessions in the browser. Eventually I got blocked from a bunch of websites. Now that I've stopped using my browser session that way, the blocks have gone away. But be warned: you'll lose access to websites yourself by doing this. It isn't a silver bullet.
tempest_
The caveat with these things is usually "when used with high quality proxies".
Also, I assume this extension is pretty obvious, so it won't take long for CF bot detection to see it the same as Playwright or whatever else.
unixfox
The extension enables debugging in your browser (a banner appears telling you about the automation). It's possible to detect that in JavaScript.
Hence why projects like this exist: https://github.com/Kaliiiiiiiiii-Vinyzu/patchright. They hide the debugging part from JavaScript.
DeathArrow
It might depend on the speed with which you click on the elements on the website.
SSLy
It does. CF bans my own honest-to-God clicks if I do them too fast.
omgwtfbyobbq
About five years ago, maybe more, Google started sending me CAPTCHAs if I ran too many repetitive searches. I could be wrong, but it feels like most large platforms have fairly sophisticated anti-bot/scraping measures in place.
michaelbuckbee
I use Vimium (a Chrome extension for keyboard control of the browser) and this happens to me as well, since the behavior looks "unnatural".
wordofx
I wish people would stop using CF. It’s just making the internet worse.
bombela
Same here. And I am also using vimium.
PantaloonFlames
SSLy the speed clicker
SkyBelow
What do you think they might be looking for that could be detected pretty quickly? I'm wondering if it's something like tracking mouse movement and flagging when a mouse moves too cleanly, so that adding some human-like noise to the movement could better bypass the system. Others have mentioned doing too many actions too fast, but what about the timing between actions? Even if no individual click is that fast, a very consistent delay would be another non-human sign.
tempoponet
Modern captchas use a number of tools, including many of the approaches you mentioned. This is why you might sometimes see a Cloudflare "I am not a robot" checkbox that checks itself and moves along before you have much time to even react. It's looking at a number of signals to determine that you're probably human before you've even checked the box.
dalemhurley
When I am using keyboard navigation, shortcuts and autofills, I seem to get mistaken for a bot a lot. These Captchas are really bad at detecting bots and really good at falsely labelling humans as bots.
kmacdough
> I'm wondering if it is something like they can track mouse movement
Yes, this is a big signal they use.
> adding some more human like noise to the mouse
Yes, this is a standard avoidance strategy, but it's easier said than done. For every new noise-generation method, they work on detection. They also look at more global usage patterns and other signals, so you'd need to imitate the entire workflow of being human, at least to within the noise of their current models.
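For a rough idea of what "human-like noise" means in practice, here's a toy sketch. The easing and jitter model is invented for illustration; real detectors model velocity profiles, overshoot, and dwell times, so don't expect this to fool anything:

```python
import math
import random

def noisy_path(start, end, steps=20, jitter=3.0):
    """Interpolate from start to end with ease-in/ease-out timing and
    positional jitter that shrinks near the endpoints so the click still
    lands on target. Returns a list of (x, y) waypoints."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        # Cosine easing: slow start, fast middle, slow finish.
        eased = (1 - math.cos(t * math.pi)) / 2
        x = start[0] + (end[0] - start[0]) * eased
        y = start[1] + (end[1] - start[1]) * eased
        # Wobble peaks mid-path and vanishes at both ends.
        wobble = jitter * math.sin(t * math.pi)
        path.append((x + random.uniform(-wobble, wobble),
                     y + random.uniform(-wobble, wobble)))
    return path
```

Note how even this toy version encodes two of the signals mentioned above: non-constant velocity and non-repeating coordinates.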
econ
Have a lot of small things count towards the result. Users behave quite linearly, extra points if they act differently all of a sudden.
mrweasel
There's also the whole issue of captchas being in place because people cannot be trusted to behave appropriately with automation tools.
"Avoids bot detection and CAPTCHAs" - Sure asshole, but understand that's only in place because of people like you. If you truly need access to something, ask for an API, may you need to pay for it, maybe you don't. May you get it, maybe the site owner tells you to go pound sand and you should take that as you're behaviour and/or use case is not wanted.
TeMPOraL
Actually, the CAPTCHAs are in place mostly because of assholes like you abusing other assholes like you[0].
Most of the automated misbehavior is businesses doing it to other businesses - in many cases, it's direct competition, or a third party the competition outsources it to. Hell, your business is probably doing it to them too (ask the marketing agency you're outsourcing to).
> If you truly need access to something, ask for an API; maybe you need to pay for it, maybe you don't.
Like you'd give it to me when you know I want it to skip your ads, or plug it to some automation or a streamlined UI, so I don't have to waste minutes of my life navigating your bloated, dog-slow SPA? But no, can't have users be invisible in analytics and operate outside your carefully designed sales funnel.
> Maybe you get it, maybe the site owner tells you to go pound sand, and you should take that as a sign that your behaviour and/or use case is not wanted.
Like they have a final say in this.
This is an evergreen discussion, and well-trodden ground. There is a reason the browser is also called "user agent"; there is a well-established separation between user's and server's zone of controls, so as a site owner, stop poking your nose where it doesn't belong.
--
[0] - Not "you" 'mrweasel personally, but "you" the imaginary speaker of your second paragraph.
mrweasel
It seems that we have very different types of businesses in mind. I really didn't consider tracking users and displaying ads, but I also don't think this is where these types of tools would be used. Well, they might be, but as part of some content farm, undesirable bots, or downright scams, so nothing of value would be lost if this didn't exist.
If you have a sales funnel, as in you take orders and ship something to a customer, consumer or business, I almost guarantee you that you can request an API, if the company you want to purchase from is large enough. They'll probably give you the API access for free, or as part of a signup fee and give you access to discounts. Sometimes that API might be an email, or a monthly Excel dump, but it's an API.
When we're talking site that purely survive on tracking users and reselling their data, then yes, they aren't going to give you API access. Some sites, like Reddit does offer it I think, but the price is going to be insane, reflecting their unwillingness to interact with users in this way.
> Not "you" 'mrweasel personally
Understood, but thank you :-)
StevenNunez
I feel like I slept for a day and now MCPs are everywhere... I don't know what MCPs are and at this point I'm too afraid to ask.
oulipo
It's just a way to provide a "library of methods" / API that the LLM models can "call", so basically giving them method names, their parameters, the type of the output, and what they are for,
and then the LLM model will ask the MCP server to call the functions, check the result, call the next function if needed, etc
Right now, if you go to ChatGPT you can't really tell it "open Google Maps with my account, search for bike shops near NYC, and grab their phone numbers", because all it can do is reply in text or make images.
With a "browser MCP" it is now possible: ChatGPT has a way to tell your browser "open Google Maps", "show me a screenshot", "click at that position", etc.
mattfrommars
Isn't the idea of AI agents talking to each other done by telling the LLM to reply in, say, JSON, with parameter values that map to, say, a function in Python code? So that, given the context {prompt}, the LLM will be able to call said function?
Is this what 'calling' is?
oulipo
Yes, exactly. MCP just formalizes this a bit better.
throwaway314155
> with a "browser MCP" it is now possible: ChatGPT has a way to tell your browser "open Google maps", "show me a screenshot", "click at that position", etc
It seems strange to me to focus on this sort of standard well in advance of models being reliable enough to, ya know, actually perform these operations on behalf of the user with the kind of reliability you'd need for widespread adoption to succeed.
Cryptocurrency "if you build it they'll come" vibes.
taberiand
I think MCPs compensate for the unreliability issue by providing a minimal and well-defined interface to a controlled set of actions. That way, the LLM doesn't have to be as reliable in figuring out what to do or in acting, just in choosing what to do from a short list.
acedTrex
The speed that every major LLM foundational model provider has jumped on this bandwagon feels VERY artificial and astro turfy...
dimitri-vs
You actually can; it's called Operator, and it's a complete waste of time, just like 99% of agents/MCPs.
oulipo
Operator is basically MCP...
jastuk
And the worst part is that it opens a pandora's box of potential exploits; https://elenacross7.medium.com/%EF%B8%8F-the-s-in-mcp-stands...
TeMPOraL
That's not fault of MCP though, that's the fault of vendors peddling their MCPs while clinging to the SaaS model.
Yes, MCP is a way to streamline giving LLMs ability to run arbitrary code on your machine, however indirectly. It's meant to be used on "your side of the airlock", where you trust the things that run. Obviously it's too powerful for it to be used with third-party tools you neither trust nor control; it's not that different than downloading random binaries from the Internet.
I suppose it's good to spell out the risks, but it doesn't make sense blaming MCP itself, because those risks are fundamental aspects of the features it provides.
kmacdough
It's not blame, but it's a striking reality that needs to be kept at the forefront.
It introduces a substantial set of novel failure modes, like cross-tool shadowing, which aren't obvious to most folks. Making use of any externally developed tooling — even open source tools on internal architecture — requires more careful consideration and analysis than most would expect. Despite the warnings, there will certainly be major breaches on these lines.
joshwarwick15
Most of these are not a real concern with remote servers using OAuth. If you install the PayPal MCP server from im-deffo-not-hacking-you.com rather than https://mcp.paypal.com/sse, it's the same security model as anything else online...
The article also, ironically, reeks of LLM writing.
tuananh
It still is a concern: if the user has one bad tool installed, it's done!
https://invariantlabs.ai/blog/mcp-security-notification-tool...
halJordan
At the risk of sounding like I support theft: the automobile, you know, enabled the likes of Bonnie and Clyde and that whole era of lawlessness, until the FBI and "crossing county lines" became a thing.
So I'm not sure I'd give up the sum total progress of the automobile just because the first decade was a bad one.
orbital-decay
MCP is a standard to plug useful tools into AI models so they can use them. The concept looks confusingly reversed and non-obvious to a normal person, although devs don't see this because it looks like their tooling.
hedgehog-ai
I know what you mean. I think MCP is being widely adopted, but it's not grassroots: it's a quick entry into this market by an established AI company trying to dominate developer mindshare and market share before consensus can be reached among developers.
whalesalad
It’s RPC specifically for an LLM. But yes, it’s the new soup du jour, the trend sweeping the globe.
andy_ppp
When I go to a shopping website I want to be able to tell my browser, "hey, please go through all the sideboards on this list and filter for the ones that are smaller than 155cm and larger than 100cm; prioritise the ones with dark wood and space for vinyl records, which are 31.43cm tall", for example.
Is there any browser that can do this yet? It seems extremely useful to be able to extract details from the page.
mfkhalil
Hey, we’re working on MatterRank which is pretty similar to this but currently works on web search. (e.g. I want to prioritize results that talk about X and have Y bias and I want to deprioritize those that are trying to sell me something). Feel free to try it out at https://matterrank.ai
Would also be interested in hearing more about what you’re envisioning for your use case. Are you thinking a browser extension that acts on sites you’re already on, or some sort of shopping aggregator that lets you do this, or something else entirely?
Niksko
Not OP but I definitely sympathise with them. I don't know how practical it is to implement or how profitable it would be, but the problem I often have is this:
* I have something I want to buy and have specific needs for it (height, color, shape, other properties)
* I know there's a good chance the website I'm on sells a product that meets those needs (or possibly several I'd want to choose from)
* my criteria are more specific than the filters available on the site, e.g. I want a specific length down to a few cm because I want the biggest thing that will fit in a fixed space
* crucially for an AI use case: the information exists on the individual product pages. They all list dimensions and specifications. I just don't want to have to go through them all.
Example: find me all of the desks on IKEA that come in light coloured wood, are 55 inches wide, and rank them from deepest to shallowest. Oh, and make sure they're in stock at my nearest IKEA, or are delivering within the next week.
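As a toy illustration of that workflow: once an agent has collected the per-page specs, the filtering and ranking step is trivial. The product data below is entirely made up:

```python
# Hypothetical specs an agent might have scraped from individual product pages.
desks = [
    {"name": "DESK-A", "width_cm": 140, "depth_cm": 60, "finish": "light oak"},
    {"name": "DESK-B", "width_cm": 140, "depth_cm": 47, "finish": "pine"},
    {"name": "DESK-C", "width_cm": 151, "depth_cm": 65, "finish": "black-brown"},
]

def matching_desks(items, width_cm, finishes):
    """Keep exact-width matches in the wanted finishes, ranked deepest first."""
    picked = [d for d in items if d["width_cm"] == width_cm and d["finish"] in finishes]
    return sorted(picked, key=lambda d: d["depth_cm"], reverse=True)
```

The hard part is the collection, not the filter; that's exactly the gap between site-provided filters and what the comment is asking for.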
unixfox
You could do that with browser-use: https://browser-use.com/
bravura
When doing interior decoration, I am definitely interested in finding objects that fit very specific prompts.
neilellis
Well done, just tested on Claude Desktop and it worked smoothly and a lot less clunky than playwright. This is the right direction to go in.
I don't know if you've done it already, but it would be great to pause automation when you detect a captcha on the page and then notify the user that the automation needs attention. Playwright keeps trying to plough through captchas.
thenaturalist
Crazy: looking up some info on the web and creating a spreadsheet in Google Sheets to insert the results worked almost perfectly the first time, then completely failed on 8-10 subsequent tries.
Is there an issue with the lag between what is happening in the browser and the MCP app (in my case Claude Desktop)?
I have a feeling the first time I tried it, I was fast enough clicking the "Allow for this chat" permissions, whereas by the time I clicked the permission on subsequent chats, the LLM just reports "It seems we had an issue with the click. Let me try again with a different reference.".
Actions which worked flawlessly the first time (rename a Google spreadsheet by clicking on the title and inputting the name) fail 100% of subsequent attempts.
Same with identifying cells A1, B1, etc. and inserting into the rows.
Almost perfect on 1st try, not reproducible in 100% of attempts afterwards.
Kudos to how smooth this experience is though, very nice setup & execution!
EDIT 2: The lag & speed to click the allow action make it seemingly unusable in Claude Desktop. :(
otherayden
Such a rich UI like google sheets seems like a bad use case for such a general "browser automation" MCP server. Would be cool to see an MCP server like this, but with specific tools that let the LLM read and write to google sheets cells. I'm sure it would knock these tasks out of the park if it had a more specific abstraction instead of generally interacting with a webpage
mkummer
Agreed, I'd been working on a Google Sheets specific MCP last week – just got it published here: https://github.com/mkummer225/google-sheets-mcp
rahimnathwani
This is cool. You should submit this as a 'Show HN'.
Also consider publishing it so people can use it without having to use git.
xingwu
I have worked on a google sheets MCP, for data scraping it worked pretty well leveraging Claude's built-in search functionalities.
example: https://x.com/xing101/status/1903391600040083488 set up: https://github.com/xing5/mcp-google-sheets
throwaway314155
What you're experiencing is commonly referred to as "luck". It's the same reason people consistently think newer versions of ChatGPT are nerfed in some way. In reality, people just got lucky originally and have unrealistic expectations based on this originally positive outcome.
There's no bug or glitch happening. It's just statistically unlikely to perform the action you wanted and you landed a good dice roll on your first turn.
weq
Haha, yeah, as someone who has built automation for years I can agree with this. You can't just click on something in a script; you need to reliably click on something. As a user, it's very easy for you to make adjustments, like clicking twice on a link if it doesn't load in time. That's pretty much what your automation suite needs to end up with: a series of functions to emulate user actions. You then combine those with your scripts to create reliable scripts that can run in different conditions. LLMs won't do that for you; you need to instruct them specifically.
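The "reliably click" helper boils down to something like this retry wrapper, sketched with stand-in callables for whatever framework actually does the clicking:

```python
import time

def reliable_action(action, succeeded, attempts=3, delay=0.5):
    """Run `action`, check `succeeded`, and retry up to `attempts` times.
    `action` and `succeeded` are placeholders for framework calls
    (e.g. a click function and a 'did the page change?' check)."""
    for _ in range(attempts):
        action()
        if succeeded():
            return True
        time.sleep(delay)  # give the page time to settle before retrying
    return False
```

Automation suites accumulate a library of these wrappers; scripts then compose them instead of calling raw click/type primitives.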
lizardking
For me it can't click anywhere on google sheets. I get the following error
--Error: Cannot access a chrome-extension:// URL of different extension
nonethewiser
Stuff like this makes me giddy for manual tasks like reimbursement requests. It's such a chore (and it doesn't help that our process isn't great).
Every month: go to the service providers, log in, find and download the statement, create a Google Doc with the details filled in, download it, write a new email, and upload all the files. Maybe double-check that the attachments are right, but that requires downloading them again instead of being able to view them in the email.
Automating this is already possible (and a real expense-tracking app can eliminate about half of this work), but I think AI tools have the potential to eliminate a lot of the nitty-gritty specification of it. This is especially important because these sorts of workflows are often subject to little changes.
doug_life
This may be obvious to most here, but you need Node.js installed for the MCP server to run. This critical detail is not in the setup instructions.
serverlessmania
Did something similar but controls a hardware synth, allowing me to do sound design without touching the physical knobs: https://github.com/zerubeus/elektron-mcp
dmix
Oh good idea.
Imagine it controlling plugins remotely, have an LLM do mastering and sound shaping with existing tools. The complex overly-graphical UIs of VSTs might be a barrier to performance there, but you could hook into those labeled midi mapping interfaces to control the knobs and levels.
Gehinnn
Would be nice if it could use the Accessibility Tree from chrome dev tools to navigate the page instead of relying on screenshots (https://developer.chrome.com/blog/full-accessibility-tree)
mgraczyk
In fact you have it backwards. It has no screenshots at the moment, only the accessibility tree
amendegree
So is MCP the new RPA (Robotic Process Automation)? Like a generic Yahoo Pipes?
spmurrayzzz
I just view it as a relative minor convenience, but it's not some game-changer IMO.
The tool-use / function-calling thing far predates Anthropic's release of the MCP specification, and it really wasn't that onerous to do before either. You could provide a JSON Schema spec and tell the model to generate compliant JSON to pass to the API in question. MCP doesn't inherently solve any of the problems that come up in that sort of workflow, but it does provide an idiomatic approach for it (so there's non-zero value there, but not much).
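The pre-MCP pattern described here looks roughly like the following. The schema and tool are invented for illustration, and the hand-rolled check stands in for a proper validator like the jsonschema library:

```python
import json

# A schema you'd paste into the prompt, asking the model for compliant JSON.
search_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "max_results": {"type": "integer"},
    },
    "required": ["query"],
}

def validate_call(raw: str, schema: dict) -> dict:
    """Parse the model's output and check it against the schema before
    forwarding it to the real API. Minimal on purpose."""
    args = json.loads(raw)
    for key in schema["required"]:
        if key not in args:
            raise ValueError(f"missing required field: {key}")
    for key, spec in schema["properties"].items():
        if key in args:
            expected = {"string": str, "integer": int}[spec["type"]]
            if not isinstance(args[key], expected):
                raise ValueError(f"{key} should be {spec['type']}")
    return args

args = validate_call('{"query": "bike shops near NYC", "max_results": 5}', search_schema)
```

MCP standardizes where this schema lives and how calls flow, but the underlying mechanism is the same.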
PantaloonFlames
It seems the benefit of MCP is for Anthropic to enlist the community in building integrations for Claude desktop, no?
And if other vendors sign on to support MCP, then it becomes a self reinforcing cycle of adoption.
spmurrayzzz
Yeah, it certainly benefits Claude Desktop to some degree, but most MCP servers are a few hundred SLOC and the protocol schema itself is only ~400 SLOC. If that were the only major obstacle standing in the way of adoption, I'd be very surprised.
Coupled with the fact that any LLM trained for tool use can utilize the protocol, it doesn't feel like much of a moat that uniquely positions Claude Desktop in a meaningful way.
asabla
> And if other vendors sign on to support MCP, then it becomes a self reinforcing cycle of adoption
This is exactly what's happening now. A good portion of applications, frameworks and actors are starting to support it.
I've been reluctant to adopt MCP in applications until there was enough adoption.
That said, depending on your use case, it may also be more complexity than you need.
JackYoustra
MCP is useful because anthropic has a disproportionate share of API traffic relative to its valuation and a tiny share of first-party client traffic. The best way around this is to shift as much traffic to API as possible.
kmangutov
The interesting thing about MCP as a tool use protocol is the traction that it has garnered in terms of clients and servers supporting it.
wonderwhyer
I would probably call it shipping containers for LLM tool integrations.
Containers are not a big deal when viewed in isolation. But when there's a common size/standard for all kinds of ships, cranes, and trucks, it is a big deal.
In that sense it's more about gathering the community around one way to do things.
In theory there are REST APIs and the OpenAPI standard, but those were made for code, not LLMs. So you usually need some kind of friendly wrapper (like for candy) on top of the REST API.
It really starts to feel like a big deal when you work on integrating LLMs with tools.
tmvphil
I'm a bit stuck on this, maybe you can explain why an LLM would have any difficulty writing REST API calls? Seems like it should be no problem.
ajcp
No. Since MCP is just an interface layer, it is to AI what REST APIs are to DPA and what COM/App DLLs are to RPA.
APA (Agentic Process Automation) is the new RPA, and this is definitely one example of it.
XCSme
But AI already supported function calling, and you could describe them in various ways. Isn't this just a different way to define function calling?
cadence-
Doesn't work on Windows:
  2025-04-07T18:43:26.537Z [browsermcp] [info] Initializing server...
  2025-04-07T18:43:26.603Z [browsermcp] [info] Server started and connected successfully
  2025-04-07T18:43:26.610Z [browsermcp] [info] Message from client: {"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"claude-ai","version":"0.1.0"}},"jsonrpc":"2.0","id":0}
  node:internal/errors:983
    const err = new Error(message);
    ^
  Error: Command failed: FOR /F "tokens=5" %a in ('netstat -ano ^| findstr :9009') do taskkill /F /PID %a
      at genericNodeError (node:internal/errors:983:15)
      at wrappedFn (node:internal/errors:537:14)
      at checkExecSyncError (node:child_process:882:11)
      at execSync (node:child_process:954:15)
namukang
Can you try again?
There was another comment that mentioned that there's an issue with port killing code on Windows: https://news.ycombinator.com/item?id=43614145
I just published a new version of the @browsermcp/mcp library (version 0.1.1) that handles the error better until I can investigate further so it should hopefully work now if you're using @browsermcp/mcp@latest.
FWIW, Claude Desktop currently has a bug where it tries to start the server twice, which is why the MCP server tries to kill the process from a previous invocation: https://github.com/modelcontextprotocol/servers/issues/812
cadence-
It's working now with the 0.1.0 for me. But I will let you know if I experience any issues once I get updated to 0.1.1.
Thanks, great job! I like it overall, but I noticed it has some issues entering text into forms, even on google.com. It's able to find a workaround and insert the search text into the URL, but it would be nice if form entry worked well for UI testing.
cadence-
I was able to make it work like this:
1. Kill your Claude Desktop app
2. Click "Connect" in the browser extension.
3. Quickly start your Claude Desktop app.
It will work 50% of the time - I guess the timing must be just right for it to work. Hopefully, the developers can improve this.
Now on to testing :)
[!warning!]
1) this project's Chrome extension sends detailed telemetry to PostHog and Amplitude:
- https://storage.googleapis.com/cobrowser-images/telemetry.pn...
- https://storage.googleapis.com/cobrowser-images/pings.png
2) this project includes source for the local MCP server, but not for its Chrome extension, which likely bundles https://github.com/ruifigueira/playwright-crx without attribution
super suss