
Closer to the Metal: Leaving Playwright for CDP

dataviz1000

I made this comment yesterday, but it really applies to this conversation.

> In the past 3 weeks I ported Playwright to run completely inside a Chrome extension without the Chrome DevTools Protocol (CDP), using purely DOM APIs and Chrome extension APIs. I ported a TypeScript port of Browser Use to run in a Chrome extension side panel using my Playwright port. In 2 days I ported Selenium ChromeDriver to run inside a Chrome extension using `chrome.debugger` APIs, which I call ChromeExtensionDriver. And today I'm porting Stagehand to also run in a Chrome extension using the Playwright port. All of this followed using VSCode's core libraries in a Chrome extension and having them drive the extension instead of an Electron app.

The most difficult part is managing the lifecycle of Windows, Pages, and Frames and handling the race conditions that come with automating a user's browser, where, for example, the user switches to another tab or closes the tab mid-task.
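For context, a minimal sketch of the kind of lifecycle bookkeeping this involves; the `chrome.tabs` and `chrome.webNavigation` events are real extension APIs, but the session registry and abort policy here are hypothetical:

```typescript
// Hypothetical session registry keyed by tab; requires the "tabs" and
// "webNavigation" permissions in the extension manifest.
type AutomationSession = { tabId: number; abort: AbortController };

const sessions = new Map<number, AutomationSession>();

// The user closed a tab we were driving: cancel its pending steps.
chrome.tabs.onRemoved.addListener((tabId) => {
  sessions.get(tabId)?.abort.abort("tab closed by user");
  sessions.delete(tabId);
});

// The user switched tabs: stop driving background tabs rather than
// fighting the user for focus (pausing would be another valid policy).
chrome.tabs.onActivated.addListener(({ tabId }) => {
  for (const [id, session] of sessions) {
    if (id !== tabId) session.abort.abort("user switched away");
  }
});

// A navigation committed: any cached element/frame handles for this
// (tabId, frameId) pair are now stale and must be re-resolved.
chrome.webNavigation.onCommitted.addListener(({ tabId, frameId }) => {
  console.debug(`frame ${frameId} in tab ${tabId} navigated; drop handles`);
});
```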

nikisweeting

Extensions are ok, but they have limitations too; for example, you cannot use an extension to automate other extensions.

We need the agent to be able to drive 1password, Privacy.com, etc. to request per-task credentials, change adblock settings, get 2fa codes, and more.

The holy grail really is CDP + control over browser launch flags + an extension bridge to get to the more ergonomic `chrome.*` APIs. We're also working on a custom Chromium fork.

wonger_

What is the benefit of porting all those tools to extensions? Have you run into any other extension-based challenges besides lifecycles and race conditions?

dataviz1000

Some benefits (without using `chrome.debugger` or the Chrome DevTools Protocol):

1. There are 3,500,000,000 instances of desktop Chrome in use. [0]

2. A Chrome Extension can be installed with a click from the Chrome Web Store.

3. It is closer to the metal, so it runs extremely fast.

4. It can run completely contained on the user's machine.

5. It's just one user automating their own web-based workflows, which makes it harder for bot protections to stop, and with a human in the loop any hang-ups and snags can be resolved by that human.

6. Chrome extensions now have a side panel that stays in place during navigation and tab switching. It is exactly like using the Cursor or VSCode side-panel copilots.

Some limitations:

1. Can't automate the ChatGPT console, because it checks for genuine user events by testing whether the `isTrusted` property on event objects is true. (The bypass is using `chrome.debugger` and the ChromeExtensionDriver I created; see the sketch after this list.)

2. Can't take full-page screen captures; however, it is possible to very quickly capture the visible viewport, so currently I scroll and stitch the images together when a full-page capture is required. There are other Chrome extension APIs that can capture video and audio, but they require the user to click a button, so they aren't useful for computer-vision automation. (The bypass is, once again, `chrome.debugger` and the ChromeExtensionDriver I created.)

3. The Chrome DevTools Protocol allows intercepting and rewriting scripts and web pages before they are evaluated. This was possible with Manifest V2, but the ability was removed in Manifest V3, which we still hear about today in the context of adblock extensions.
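To make limitations 1 and 2 concrete, here is a minimal sketch, assuming an extension with the "debugger" permission; `trustedClick` and `viewportPng` are hypothetical helper names, but the `chrome.debugger` and `chrome.tabs` calls are real APIs:

```typescript
// Page-side check a site can run: events dispatched from JS via
// element.dispatchEvent() arrive with isTrusted === false.
document.addEventListener("click", (e) => {
  if (!e.isTrusted) console.warn("synthetic click detected, ignoring");
});

// Extension-side bypass: input injected through chrome.debugger (CDP's
// Input domain) arrives with isTrusted === true, like a real user.
async function trustedClick(tabId: number, x: number, y: number) {
  await chrome.debugger.attach({ tabId }, "1.3");
  for (const type of ["mousePressed", "mouseReleased"] as const) {
    await chrome.debugger.sendCommand({ tabId }, "Input.dispatchMouseEvent", {
      type, x, y, button: "left", clickCount: 1,
    });
  }
  await chrome.debugger.detach({ tabId });
}

// Limitation 2's primitive: capture only the visible viewport as a
// data: URL; a full-page image is scroll + capture + stitch.
async function viewportPng(): Promise<string> {
  return chrome.tabs.captureVisibleTab({ format: "png" });
}
```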

I feel like, given these limitations, having a popup dialog that directs the user to perform an action will work, as long as the extension automates 98% of the user's workflows. Moreover, a lot of this automation should require explicit user acknowledgment before proceeding.

[0] https://www.demandsage.com/chrome-statistics/

diggan

> What is the benefit of porting all those tools to extensions?

Personally, I have a browser extension running in my personal browser instance that my agent uses (with rate limits) in order to avoid all the captchas and blocks, basically. Everything else I've tried ultimately ends up getting blocked. I'm also doing some heavy caching, so most agent "browse" calls never even reach the internet, since the agent finds what it needs already stored locally.
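A hypothetical sketch of that cache-first pattern (the names, TTL, and rate limit here are illustrative, not my actual setup):

```typescript
// Cache-first "browse": most agent calls are served locally and never
// touch the network; real requests are spaced out to stay polite.
const cache = new Map<string, { body: string; fetchedAt: number }>();
const TTL_MS = 24 * 60 * 60 * 1000; // assume day-old copies are acceptable
const MIN_GAP_MS = 5_000;           // crude rate limit between live fetches
let lastFetch = 0;

async function browse(url: string): Promise<string> {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) return hit.body;

  const wait = lastFetch + MIN_GAP_MS - Date.now();
  if (wait > 0) await new Promise((r) => setTimeout(r, wait));
  lastFetch = Date.now();

  const body = await (await fetch(url)).text();
  cache.set(url, { body, fetchedAt: Date.now() });
  return body;
}
```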

Tsarp

Wouldn't having `chrome.debugger` attached also flag your requests?

steveklabnik

Describing "2011–2017" as "the dark ages" makes me feel so old.

There was a ton of this stuff before Chrome or WebKit even existed! Back in my day, we used Selenium and hated it. (I was lucky enough to start after Mercury...)

hugs

selenium creator here. hi!

steveklabnik

Hi! Sorry, I was trying to be a bit tongue in cheek here. This space, in my experience, has always been frustrating, because it's a hard problem. I myself am fighting with Playwright these days, just like I used to fight with Selenium. (And, to my understanding, you created Selenium due to frustrations with Mercury, hence the name... I'm curious if that's true or just something I heard!)

I still deeply appreciate these tools, even though I also find them a bit frustrating.

hugs

it's all good, man. if it makes you feel better, i don't like rust. ;-) my eldest son loves it, though!

fun-fact: i've never used mercury. when i came up with "selenium" -- it was because a colleague saw an early demo and said it had the potential to "kill mercury". (spoiler alert!)

but in that moment, i hadn't heard of mercury before, so i had to google it. i then also spent a few extra cycles googling around for a "cure for mercury poisoning" just so i could continue the conversation with that colleague with a proto-dad-joke... and landed on a page about selenium supplements. things obviously got out of hand.

i didn't even want to call the project "selenium". i preferred the name "check engine", but people started calling it "selenium" anyway. i only wish nice things for the mercury team -- the only thing i know about them is that hp acquired mercury for $4.5B. so i hope they blissfully don't care about me or my bad dad-jokes.

but again... i didn't even realize there was an entire testing tools industry at that moment. all i knew was that i had a testing problem for my complicated web app -- and the consensus professional advice at the time was "yeah, no. don't use javascript in the browser -- it's too hard to test". (another spoiler.) also, (if i'm remembering correctly) mercury was ie/windows only... and i needed something that supported apple and mozilla/firefox. it felt like zero vendors at the time cared about anything that wasn't internet explorer or wasn't windows. so i had to chart my own course pretty quickly.

long story long: "you either die a hero, or you live long enough to see yourself become the villain" - harvey dent

hu3

Selenium helped my team so much back in the day. Thank you for it!

We had a complex user registration workflow that supported multiple nationalities and languages on an international bank's website.

I set up Selenium tests to detect breakages because it was almost humanly impossible to retest all the workflows after every sprint.

It brought sanity back to the team and the QA folks.

Tools that came after certainly benefited from Selenium's lessons.

nikisweeting

Wow hi! Thanks so much for building selenium! I've used it many times in my career, and I looked at Selenium Grid for inspiration for browser devops in my last job.

Aurornis

I always enjoyed Selenium, for what it’s worth.

moi2388

I just wanted to say I absolutely love your product. Thank you!

gregpr07

Hi! The first version of Browser Use was actually built on Selenium, but we quite quickly switched to Playwright.

hugs

yeah, i noticed that. apologies if i missed a post about it... what do you wish didn't suck about selenium?

arm32

Uh, ahem, <clears throat>, we meant the _other_ Selenium.

hugs

that's what i thought. :) personal life accomplishment was seeing wikipedia add a disambiguation link on the element's page. you know, because it's right up there in importance as the periodic table, obviously.


benmmurphy

Direct CDP has been used by the scraping community for a long time to get a cleaner browser environment that is harder to fingerprint. For example, nodriver (https://github.com/ultrafunkamsterdam/nodriver) was started in Feb 2024, and I suspect the technique was popular before that project started.

gregpr07

I really like both nodriver and pydoll. I am definitely keeping the option of switching to them open, but we just wanted to have full control for now and see how painful CDP-use is to maintain first and then reconsider.

arm32

Ah, yes, the classic "Playwright isn't fast enough so we're reinventing Puppeteer" trope. I'd be lying if I said I hadn't seen this done a few times already.

Now that I got my snarky remark out of the way:

Puppeteer uses CDP under the hood. Just use Puppeteer.

haolez

I've seen a team implement Go workers that would download the HTML from a target, then download some of the referenced JavaScript files, then run those files in an embedded JavaScript engine, so that they could consume fewer resources getting the specific things they needed without a full browser. It's like a browser homunculus! Of course, each new site required custom code. This was for quant stuff. Quite cool!

odo1242

This exact homunculus is actually supported in Node.js by the `jsdom` library: https://www.npmjs.com/package/jsdom

I don't know how well it would work for that use-case, but I've used it before, for example, to write a web-crawler that could handle client-side rendering.
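For reference, a minimal jsdom sketch of that approach; `runScripts: "dangerously"` is what lets the page's own scripts execute, which is how client-side-rendered content becomes visible (only do this with input you trust):

```typescript
import { JSDOM } from "jsdom";

// Load a page and execute its scripts without a full browser.
const dom = await JSDOM.fromURL("https://example.com/", {
  runScripts: "dangerously", // run the page's own <script> code
  resources: "usable",       // also fetch external script files
  pretendToBeVisual: true,   // provide requestAnimationFrame etc.
});

// Give client-side rendering a moment, then read the resulting DOM.
await new Promise((resolve) => setTimeout(resolve, 1000));
console.log(dom.window.document.querySelector("h1")?.textContent);
```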

nikisweeting

our primary use-case with the AI stuff is not really scraping; we're mostly going after RPA (robotic process automation)

nikisweeting

sir, we are a python library and pyppeteer was abandoned, how exactly do you propose we use puppeteer?

hugs

yeah, i continue to be amazed at how google dropped the ball on this one.

epolanski

Playwright has Python bindings.

nikisweeting

yes I know, I wrote the post

boredtofears

Is the case for Playwright over Puppeteer just its cross-browser support?

We're currently using Cypress for some automated testing on a recent project and it's extremely brittle. We're considering moving to Playwright or Puppeteer, but we're not sure that will fix the brittleness.

nikisweeting

I would definitely recommend Puppeteer if you can use it; it's maintained by the Chrome team and always does things the "approved" way. The only reason we went with Playwright is that we're a Python library and pyppeteer was abandoned.

rising-sky

In my experience, Playwright provided a much more stable and reliable experience than Puppeteer for multi-browser support and asynchronous operations (which are the entire point). YMMV.

arm32

Playwright also offers nice sugar like HTML test reports and trace viewing.

gregpr07

In my experience with Playwright, rrweb recordings are MUCH better than Playwright's replay traces, so we usually just use those.

johnsmith1840

Why not a CDP snapshot?

nikisweeting

What do you mean? We use CDP page snapshots extensively to get the full HTML across frames, but that's not nearly enough on its own; there are lots of checks still needed for individual OOPIFs and elements.
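For anyone following along, a minimal raw-CDP sketch of the snapshot call being discussed, assuming Chrome was launched with `--remote-debugging-port=9222` and a runtime with global `fetch` and `WebSocket` (Node 22+); the tiny `send` helper is illustrative, not part of any library:

```typescript
// Find a page target and open its CDP websocket.
const targets = await (await fetch("http://localhost:9222/json")).json();
const page = targets.find((t: { type: string }) => t.type === "page");
const ws = new WebSocket(page.webSocketDebuggerUrl);
await new Promise((resolve) => (ws.onopen = resolve));

// Minimal JSON-RPC-style helper: match responses to requests by id.
let nextId = 1;
function send(method: string, params: object = {}): Promise<any> {
  const id = nextId++;
  return new Promise((resolve) => {
    const onMessage = (ev: MessageEvent) => {
      const msg = JSON.parse(String(ev.data));
      if (msg.id === id) {
        ws.removeEventListener("message", onMessage);
        resolve(msg.result);
      }
    };
    ws.addEventListener("message", onMessage);
    ws.send(JSON.stringify({ id, method, params }));
  });
}

// One roundtrip returns a flattened DOM for every frame in this
// target's process; out-of-process iframes (OOPIFs) are separate
// targets with their own sessions, hence the extra checks mentioned.
const snap = await send("DOMSnapshot.captureSnapshot", { computedStyles: [] });
console.log(snap.documents.length, "frame documents");
```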

johnsmith1840

You can get all of that from a pure snapshot.

There are no extra checks needed; it's by a significant margin the most reliable method for seeing current state.

I run snapshots at 10-20fps, though, plus the same rate for parallel image capture.

I've been wondering if I should release just this part of my system as open source; it seems like I'm not alone in finding how complex this all is.

I could launch yet another automation framework!

patrickhogan1

Selenium was very usable before 2011.

This post is like talking about Grafana without mentioning Nagios.

nikisweeting

It was, but I feel like the advent of headless browsers marked a step-function explosion in browser automation. Also, anything earlier than 2010 is when I was like 13, so it's more "the dark ages in my own memory" than "objectively the dark ages of automation history".

patrickhogan1

Selenium offered headless mode and integrated with third-party SaaS providers like BrowserStack, which could run thousands of acceptance tests in parallel in the cloud. It seems like what browser-use.com is doing is a modern-day version, with more features, of what BrowserStack implemented with Selenium.

I get that drawing historical boundaries is arbitrary, but just highlighting that Selenium is a really good prior.

spullara

This is exactly what I did when I wrote my first scraping agent. Later we switched to taking control of the user's browser through a browser extension.

Robdel12

All of the approaches that drive the browser from outside the browser are going to be slow (WebDriver, Playwright, Puppeteer, etc.).

Karma-like approaches (execute inside the browser) are where I'm at.

appcustodian2

> All of the approaches that drive the browser from outside the browser are going to be slow

Why? I would think any cross-process communication through the CDP websocket would have imperceptible overhead compared to what already takes long in the browser: a ton of HTTP I/O

What is Karma? What are you executing in the browser?

nikisweeting

CDP roundtrip time on a local machine is about 100µs (0.1ms); it's not slow haha
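That number is easy to sanity-check yourself; reusing a `send` helper like the one in the snapshot sketch above (same port-9222 assumptions), time a trivial command:

```typescript
// Time one trivial command to measure the socket roundtrip itself.
const t0 = performance.now();
await send("Runtime.evaluate", { expression: "1", returnByValue: true });
console.log(`CDP roundtrip: ${(performance.now() - t0).toFixed(2)} ms`);
// On localhost this typically lands well under a millisecond.
```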

johnsmith1840

"Thousands of cdp calls" from the link.

Cdp does add a good chunk of latency. Depends on what your threshold is.

An image grab is around 60ms and a snapshot can range from 40ms -> 500ms

The latency is pure data movement. It's like the difference of using ram vs ssd vs data from the internet.

nikisweeting

yeah, but good luck getting rid of that with a browser extension; you're just moving the latency around, into chrome.runtime message passing.


saberience

Talk about "not built here" mentality. This is a project doomed to failure. Using VC money to re-write better built software which has been around for years.

Good luck guys!

dang

Can you please make your substantive points without snark or putdowns? Thoughtful criticism is fine, of course, but what you posted here goes against what we're trying for in this community.

https://news.ycombinator.com/newsguidelines.html

johnsmith1840

From their blog the value isn't obvious, but pure CDP as a framework is powerful for other reasons. If you have very high performance requirements, it makes sense.

I built something like an automation system in pure CDP to shave milliseconds off. But mine is a real-time user-interaction system plus automation, not pure AI automation.

It doesn't make much sense to shave milliseconds when an LLM call takes hundreds of ms and that's the only "user".

Tostino

Exactly what I was thinking. Instead of attempting to contribute back to Playwright to fix those hang-ups, or even creating a private patch as a POC, they went straight to building their own framework from scratch.

That isn't how you launch a product.

nikisweeting

I've been trying to contribute to playwright for years! All of my issues have been closed / rejected without much consideration because they're not part of the core "QA testing" use-case that playwright is built for.

Personally, I have not found their team to be the easiest to work with on GitHub. I would've loved to use Puppeteer instead; their team is quite reasonable, but they abandoned their Python bindings and we want to stay in Python.

gregpr07

I mean... Playwright was built and is maintained by Microsoft, so I don't think the VC-money argument really makes sense here.

By the very nature of how Playwright is built, we can't contribute to it: it runs inside a JS subprocess and does not expose a bunch of CDP APIs that we NEED (for example, to make cross-origin iframes work).