Show HN: Simplex: Automate browser workflows using code and natural language
18 comments
· January 14, 2025
jackienotchan
Would you mind sharing the story behind your pivot from on-demand photorealistic vision datasets[0] to browser automation?
[0] https://www.ycombinator.com/launches/Lbx-simplex-on-demand-p...
vitalets
I've tried the following code:
simplex.goto("github.com/mitchellh")
num_contribs = simplex.extract_text("blabla")
print(num_contribs)
It outputs all the text from the page. Is that expected? Maybe it should fail, indicating the element could not be found?
skarpoor
extract_text returns all the lines of text it finds within an element. It looks like in your case it selected the majority of the page from that description, so it's returning most of the text on the page.
We currently don't return failure cases (just the closest match) -- but good suggestion! We'll fine-tune on some negative cases and see if we can catch them.
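Until the SDK can signal "not found" itself, a caller-side heuristic can catch the case vitalets hit, where the closest match is really the whole page. This helper is hypothetical (the name and the 10-line threshold are my own), but it shows the idea:

```python
def looks_too_broad(text: str, max_lines: int = 10) -> bool:
    """Heuristic guard for extract_text results: a single element's
    text rarely spans many lines, so a result with more than
    `max_lines` lines probably matched most of the page."""
    return len(text.splitlines()) > max_lines


# Example: a plausible single-element result vs. a whole-page dump
print(looks_too_broad("3,117 contributions in the last year"))  # False
print(looks_too_broad("line\n" * 50))                           # True
```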
sprobertson
Great demo, straight to the point. It might be nice to have some kind of feedback mechanism if it can't find the element on the page, or if it's partially cut off. For example, I changed the GitHub profile to my own (https://github.com/spro) in the example and it doesn't scroll down far enough for the whole image. I imagine in general it would be nice to scroll to an element ID (or even element description using the vision models) instead of a hardcoded value.
Side note: the comment for the frequency graph is wrong -- it mentions stars instead.
marcon680
RE: feedback mechanism -- yep, a feedback mechanism is definitely something we're thinking of adding. Since we use VLMs that are trained to always output coordinates (i.e., they don't have a way to say "not on the page"), we're probably going to try fine tuning with some negative examples to see if we can build that feedback mechanism in.
One way to hack the scrolling to an element is to first run extract_bbox on a natural language description (in your case for GitHub it might be "follow button") then take the Y coordinate of that element and scroll that number of pixels. I just wrote this bit of code that I tested and it brings the contribution graph into full view:
simplex.goto("github.com/spro")
coords = simplex.extract_bbox("follow button")
simplex.scroll(coords[1])
simplex.wait(2000)
image = simplex.extract_image("green tile graph")
But then it incorrectly picks the code review/submissions/etc. graph as the green tile graph -- we'll look into it!
Re: frequency graph typo -- just pushed a fix, thanks!
mkagenius
VLMs are great -- I've been able to use them for a similar project too [1]. And it's only going to get better. Congratulations on the product launch! What VLM are you using for this?
1. A framework to use/control mobile phones via any LLM - https://github.com/BandarLabs/clickclickclick
marcon680
We fine-tune our own VLMs for this -- unfortunately we'd prefer not to share which ones specifically! ClickClickClick looks awesome -- have you heard of FerretUI (https://arxiv.org/pdf/2404.05719)? Pretty similar idea.
mkagenius
Yes, I tried a similar one called "omniparser" -- the issue was that it sometimes missed annotating some UI elements. Gemini and Molmo, on the other hand, worked right out of the box without any fine-tuning.
xnx
Looks pretty cool. How do you distinguish Simplex from Skyvern or UI.Vision?
marcon680
Hey thanks! We think we're pretty different from Skyvern as Skyvern provides a full agent loop for users + is no code. We wanted to build something at a lower level (i.e., no high-level planner that chooses what tasks to do for you) and we wanted to be able to program our own logic alongside the intelligence part because using code is what's natural for us as developers.
I hadn't heard of UI Vision but just took a look at it -- it also looks like a no-code solution that's a Chrome extension, so I'd say the main differences are the same as the differences w/ Skyvern -- we're lower level and meant to be used by developers.
I'd add that we're also able to directly extract parts of websites that have no official API -- for example, an image of the GitHub contribution graph like I show in the video demo.
myflash13
Can this use a cloud browser API like browserless?
marcon680
Yep, any websocket URL works. I see that Browserless offers a websocket URL, so you can use them! You just need to pass in a Playwright browser object into the Simplex constructor.
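As a rough sketch of what that wiring might look like -- the Browserless endpoint, token placeholder, and the `Simplex(browser=...)` constructor signature are all assumptions based on this comment, so check the SDK docs for the real import path and API:

```python
# Sketch: connect Playwright to a remote browser over a websocket,
# then hand the Browser object to Simplex (hypothetical constructor).
from playwright.sync_api import sync_playwright
from simplex import Simplex  # assumed import path

with sync_playwright() as p:
    # connect_over_cdp takes a CDP websocket endpoint, e.g. from Browserless
    browser = p.chromium.connect_over_cdp(
        "wss://chrome.browserless.io?token=YOUR_TOKEN"
    )
    simplex = Simplex(browser=browser)
    simplex.goto("github.com/mitchellh")
```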
picografix
It fails for this query: search("amazon.in", "fitness watch")
marcon680
Ah, we've whitelisted some websites to prevent abuse -- amazon.in wasn't one of them. I just whitelisted it for you and tried the search query -- looks like it works!
mrbluecoat
+1 for considering how it could have been abused to visit blocked sites and circumvent online protections.
The syntax is also intuitive, although site loading seems pretty slow -- but maybe that's just the playground, and paid access is much faster.
marcon680
Yep, we also run all our Python code in sandboxes, among a few other security measures!
Site loading is pretty slow right now due to a combination of: 1. the traffic we're currently getting, 2. running a remote session, 3. running a few large vision language models, and 4. adding waits to let pages load and let you view your search results. We're working on cutting our latency significantly since it makes for a better development experience.
desireco42
I have to say, this looks both simple and 20 times better than those horrible no-code solutions with boxes and arrows.
I think you are onto something here.
marcon680
Thanks, we hope so! That's an interesting conversation -- I'd argue that the graph-based no-code solutions you're referring to serve a different set of people than those commonly found on HN. As a developer I didn't particularly want to work with those tools, so we built this to supercharge our own code-based flows instead. I actually don't think the node-based flows are horrible at all, since they successfully enable non-developers and less technical people to build easy automations with agents.
Simplex is an SDK that provides low-level functions you can use to create reliable web automations using code and natural language.
Here's a quick video to show you how it works: https://www.loom.com/share/8d27d0f9e0f34c77a81b0ac770151c12
A couple weeks ago, we needed a way to automatically find 3D assets from the internet for one of our work contracts. We didn’t need an AI to choose all the steps for us autonomously — we knew generally what the flow to find items should be and just needed some intelligence to handle different webpage formats.
Playwright couldn’t easily generalize to the different sites we wanted to search over and Claude computer use was tough to use — it was slow, expensive, struggled to click on the correct web elements, and there was no way to correct its actions if it failed at step 20 of our workflow.
So we built a vision-based solution using vision language models that sits on top of Playwright. In our SDK, building a function that can universally search websites is as easy as this:
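(The post's original snippet isn't reproduced here; as a hedged sketch of what such a function might look like, using the primitives shown elsewhere in this thread -- goto, wait -- plus assumed click/type/press helpers:)

```python
# Hypothetical sketch of a site-agnostic search built on Simplex primitives.
# click/type/press are assumed helpers taking natural-language descriptions;
# only goto/wait appear verbatim in the thread's examples.
def search(url, query):
    simplex.goto(url)
    simplex.click("search bar")   # natural-language element description
    simplex.type(query)
    simplex.press("Enter")
    simplex.wait(2000)            # allow results to load
```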
You can play around with what we've built and see a few more examples at https://simplex.sh/playground. We'd love feedback!