Show HN: An API that takes a URL and returns a file with browser screenshots
106 comments
· February 6, 2025 · xnx
cmgriffing
Quick note: when trying to do full page screenshots, Chrome does a screenshot of the current view, then scrolls and does another screenshot. This can cause some interesting artifacts when rendering pages with scroll behaviors.
Firefox does a proper full page screenshot and even allows you to set a higher DPR (device pixel ratio) value. I use this a lot when making video content.
Check out some of the args in FF using: `:screenshot --help`
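For example, in the devtools console (flag names from memory, the --help output is authoritative):
```
:screenshot --fullpage --dpr 2
:screenshot --fullpage --delay 5
```
The --dpr flag is what gives you the higher-resolution capture mentioned above.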
wereHamster
That's not the behavior I'm seeing (with Puppeteer). Any elements positioned relative to the viewport stay within the area specified by screen size (eg. 1200x800) which is usually the top of the page. If the browser would scroll down these would also move down (and potentially appear multiple times in the image). Also intersection observers which are further down on the page do not trigger when I do a full-page screenshot (eg. an element which starts animation when it enters into the viewport).
genewitch
bravo for puppeteer, i guess? "singlefile" is the only thing i've ever seen that doesn't produce weird artifacts in the middle of some site renders, or, like on reddit, just give up rendering comments and leave blank space until the footer.
anyhow i've been doing this exact thing for a real long time, e.g.
https://raw.githubusercontent.com/genewitch/opensource/refs/...
using bash to return json to some stupid chat service we were running
xg15
I mean, if you have some of those annoying "hijack scrolling and turn the page into some sort of interactive animation experience" sites, I don't think "full page" would even be well-defined.
sixothree
Pretty sure this refers to sticky headers. They have caused me many headaches when trying to get a decent screenshot.
input_sh
Firefox equivalent:
firefox -screenshot file.png https://example.com --window-size=1280,720
A bit annoyingly, it won't work if you have Firefox already open.
UnlockedSecrets
Does it work if you use a different profile with -p?
paulryanrogers
Maybe with --no-remote
genewitch
on my firefox if i right click on a part of the page the website hasn't hijacked, it gives the option to "take screenshot" - which i think required enabling a setting somewhere. I hope it wasn't in about:config or wherever the dark-art settings are. I use that feature of FF to screenshot youtube videos with the subtitles moved and the scrub bar cropped out, i feel like it's a cleaner and smaller clipboard copy than using win+shift+s. Microsoft changed a lot about how windows handles ... files ... internally and screenshots are huge .png now, making me miss the days of huge .bmp.
also as mentioned above, if you need entire sites backed up the firefox extension "singlefile" is the business. if image-y things? bulk image downloader (costs money but 100% worth; you know it if you need it: BID); and yt-dlp + ffmpeg for video, in powershell (get 7.5.0 do yourself a favor!)
```powershell
# Prompt for a URL, then grab it at <=480p with subtitles via yt-dlp
$userInput = Read-Host -Prompt '480 video download script enter URL'
Write-Output "URL:`t`t$userInput"
c:\opt\yt-dlp.exe `
    -f 'bestvideo[height<=480]+bestaudio/best[height<=480]' `
    --write-auto-subs --write-subs `
    --fragment-retries infinite `
    $userInput
```
blueflow
> it won't work if you have Firefox already open
Now try to work out how you could isolate these instances so they cannot see each other. This leads into a rabbit hole of bad design.
yjftsjthsd-h
> Now try to work out how you could isolate these instances so they cannot see each other. This leads into a rabbit hole of bad design.
Okay, done:
PROFILEDIR="$(mktemp -d)"
firefox --no-remote --profile "$PROFILEDIR" --screenshot "$PWD/output.png" https://xkcd.com
rm -r "$PROFILEDIR"
What's the rabbit hole?
amelius
> A bit annoyingly, it won't work if you have Firefox already open.
I hate it when applications do this.
cmgriffing
LOL, you and I posted very similar replies at the same time.
azhenley
Very nice, I didn't know this. I used pyppeteer and selenium for this previously which seemed excessive.
martinbaun
Oh man, I've needed this so many times and didn't even think of doing it like this. I tried using Selenium and all sorts of external services. Thank you!
Works in chromium as well.
antifarben
Does anyone know whether this would also be possible with Firefox, including specific extensions (e.g. uBlock) and explicitly configured block lists or other settings for those extensions?
hulitu
> Chrome also has the ability to save screenshots
Too bad that no browser is able to print a web page.
jot
If you’re worried about the security risks, edge cases, maintenance pain, and scaling challenges of self-hosting, there are various solid hosted alternatives:
- https://browserless.io - low level browser control
- https://scrapingbee.com - scraping specialists
- https://urlbox.com - screenshot specialists*
They’re all profitable and have been around for years so you can depend on the businesses and the tech.
* Disclosure: I work on this one and was a customer before I joined the team.
ALittleLight
Looking at your urlbox - pretty funny language around the quota system.
>What happens if I go over my quota?
>No need to worry - we won't cut off your service. We automatically upgrade you to the next tier so you benefit from volume discounts. See the pricing page for more details.
So... If I go over the quota you automatically charge me more? Hmm. I would expect to be rejected in this case.
jot
I’m sure we can do better here.
In my experience our customers are more worried about having the service stop when they hit the limit of a tier than they are about being charged a few more dollars.
ALittleLight
Maybe I'm misreading. It sounds like you're stepping the user up a pricing tier - e.g. going from 50 a month to 100 and then charging at the better rate.
I would also worry about a bug on my end that fires off lots of screenshots. I would expect a quota or limit to protect me from that.
edm0nd
https://www.scraperapi.com/ is good too. Been using them to scrape via their API on websites that have a lot of captchas or anti scraping tech like DataDome.
rustdeveloper
Happy to suggest another web scraping API alternative I rely on: https://scrapingfish.com
xeornet
What’s the chance you’re affiliated? Almost every one of your comments links to it. And curiously similar interest in Rust from the official HN page and yours. No need to be sneaky.
bbor
Do these services respect norobot manifests? Isn't this all kinda... illegal...? Or at least non-consensual?
basilgohar
robots.txt isn't legally binding. I am interested to know if and how these services even interact with it. It's more like a hint about where the interesting content for scrapers is on your site. This is how I imagine it goes:
"Hey, don't scrape the data here."
"You know what? I'm going to scrape it even harder!"
bbor
Soooo nonconsensual.
Maybe bluesky is right… are we the baddies?
tonyhart7
It is legally binding if your company is based in SV (only California implements this law) and they can prove it.
theogravity
There's also our product, Airtop (https://www.airtop.ai/), which falls under the scraping specialist / browser automation category and can generate screenshots too.
kevinsundar
Hey, I'm curious what your thoughts are: do you need a full-blown agent that moves the mouse and clicks to extract content from webpages, or is a simpler tool that just scrapes pages + takes screenshots and passes them through an LLM generally pretty effective?
I can see niche cases like videos or animations being better understood by an agent, though.
theogravity
Airtop is designed to be flexible, you can use it as part of a full-blown agent that interacts with webpages or as a standalone tool for scraping and screenshots.
One of the key challenges in scraping is dealing with anti-bot measures, CAPTCHAs, and dynamic content loading. Airtop abstracts much of this complexity while keeping it accessible through an API. If you're primarily looking for structured data extraction, passing pages through an LLM can work well, but for interactive workflows (e.g., authentication, multi-step navigation), an agent-based approach might be better. It really depends on the use case.
jchw
One thing to be cognizant of: if you're planning to run this sort of thing against potentially untrusted URLs, the browser might be able to make requests to internal hosts on whatever network it is on. It would be wise, on Linux, to use network namespaces and block any local IP range inside the namespace, or to use a network namespace to limit the browser to a WireGuard VPN tunnel to some other network.
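A rough sketch of the namespace approach (everything here is a placeholder: the "scraper" namespace name, the veth names, the addresses, and eth0 as the uplink; adapt to your setup, and note the browser needs a DNS resolver it can actually reach):
```bash
# Sketch: run the headless browser in a throwaway network namespace and block
# RFC1918 / link-local destinations so it cannot reach internal services.
set -euo pipefail

ip netns add scraper
ip link add veth-host type veth peer name veth-scraper
ip link set veth-scraper netns scraper

ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec scraper ip addr add 10.200.0.2/24 dev veth-scraper
ip netns exec scraper ip link set veth-scraper up
ip netns exec scraper ip link set lo up
ip netns exec scraper ip route add default via 10.200.0.1

# NAT the namespace out through the host uplink (eth0 assumed)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.200.0.2/32 -o eth0 -j MASQUERADE

# drop forwarded traffic from the namespace to private and link-local ranges
for net in 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 169.254.0.0/16 127.0.0.0/8; do
  iptables -A FORWARD -s 10.200.0.2/32 -d "$net" -j DROP
done
iptables -A FORWARD -s 10.200.0.2/32 -j ACCEPT

# run the screenshot job inside the namespace
ip netns exec scraper chromium --headless --screenshot=/tmp/out.png "$1"
```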
leptons
This is true for practically every web browser anyone uses on any site that they don't personally control.
jchw
This is true, although I think in a home environment there aren't as many interesting things to hit, and you're limited by Same Origin Policy, as well as certain mitigations that web browsers deploy against attacks like DNS rebinding. However, if you're running this on a server, there's a much greater likelihood that interesting services sit behind the firewall, e.g. maybe the Kubernetes API server. Code execution could potentially be a form post away.
remram
Very important note! This is called Server-Side Request Forgery (SSRF).
anonzzzies
Is there a self hosted version that does this properly?
jot
Too many developers learn this the hard way.
It’s one of the top reasons larger organisations prefer to use hosted services rather than doing it themselves.
morbusfonticuli
Similar project: gowitness [1].
A really cool tool I recently discovered. Besides scraping and taking screenshots of websites and saving them in multiple formats (including sqlite3), it can grab and save the headers, console logs & cookies, and has a super cool web GUI to access all the data and compare e.g. the different records.
I'm planning to build my personal archive.org/waybackmachine-like web-log tool via gowitness in the not-so-distant future.
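If anyone wants to kick the tires, the basic flow is roughly this (subcommands from memory of the v2-era CLI; newer releases reorganized them, so check `gowitness --help`):
```bash
# screenshot a single URL (headers, cookies, console logs land in the DB too)
gowitness single https://example.com

# screenshot every URL listed in a file
gowitness file -f urls.txt

# browse and compare the results in the web GUI
gowitness report serve
```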
quink
> SCREENSHOT_JPEG_QUALITY
Not two words that should be near each other, and JPEG is the only option.
Almost like it’s designed to nerd-snipe someone into a PR to change the format based on Accept headers.
gkamer8
> Almost like it's designed to nerd-snipe someone into a PR to change the format based on Accept headers
pls
westurner
simonw/shot-scraper has a number of cli args, a GitHub actions repo template, and docs: https://shot-scraper.datasette.io/en/stable/
From https://news.ycombinator.com/item?id=30681242 :
> Awesome Visual Regression Testing lists quite a few tools and online services: https://github.com/mojoaxel/awesome-regression-testing
> "visual-regression": https://github.com/topics/visual-regression
hedora
It'd be nice if it produced a list of bounding boxes + the URLs you'd get if you clicked on each bounding box.
Then it'd be close to my dream of a serverless web browser service, where the client just renders a clickmap .png or .webp, and the requests go to a farm of "one request per page load" ephemeral web browser instances. The web browsers could cache the images + clickmaps they return in an S3 bucket.
Assuming the farm of browsers had a large number of users, this would completely defeat fingerprinting + cookies. It'd also provide an archive (as in durable, not as in high quality) of the browsed static content.
mlunar
Similar one I wrote a while ago using Puppeteer for IoT low-power display purposes. Neat trick is that it learns the refresh interval, so that it takes a snapshot just before it's requested :) https://github.com/SmilyOrg/website-image-proxy
rpastuszak
Cool! I'm using something similar on my site to generate screenshots of tweets (for privacy purposes):
https://untested.sonnet.io/notes/xitterpng-privacy-friendly-...
manmal
Being a bit frustrated with Linkwarden’s resource usage, I’ve thought about making my own self-hosted bookmarking service. This could be a low-effort way of loading screenshots for these links, very cool! It’ll be interesting to see how many concurrent requests this can process.
codenote
This seems like code at a scale that could have been included in Abbey itself. https://github.com/US-Artificial-Intelligence/abbey
Was the motivation for splitting it out the security considerations stated in the "Security Considerations" section? https://github.com/US-Artificial-Intelligence/ScrapeServ?tab...
kevinsundar
I'm looking for something similar that can also extract the diff of content on the page over time, in addition to screenshots. Any suggestions?
I have a homegrown solution using an LLM and scrapegraphai for https://getchangelog.com but would rather offload that to a service that does a better job rendering websites. There are some websites I get error pages from using Playwright, but they load fine in my usual Chrome browser.
arnoldcjones
Good point on offloading it. Given the amount of work required to set up a wrapper for something like Puppeteer, Playwright, etc. that also has to fit a probably quite specific setup, I've found the best way to get a quality image consistently is to just subscribe to one of the many SaaS offerings out there that already do this well. Some of the comments above suggest some decent screenshot-as-a-service products.
Really depends on how valuable your time is relative to your (or your company's) money. I prefer going for the quality (and more $) solution rather than the one that boasts cheap prices, as I tend to avoid the headaches of unreliable services. Sam Vimes' Boots theory and all that.
For image comparison I've always found pixelmatch by Mapbox works well for PNGs.
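If you'd rather stay on the command line, ImageMagick's `compare` gives a similar pixel-level diff (a different tool than pixelmatch, so thresholds won't translate directly):
```bash
# AE = absolute error, i.e. the number of differing pixels (printed to stderr);
# diff.png highlights where the two screenshots differ
compare -metric AE before.png after.png diff.png
```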
caelinsutch
The easiest solution to this is probably extracting / formatting the content, then running a diff on that. Otherwise you could use snapshot testing algorithms as a diffing method. We use browserbase and olostep which both have strong proxies (first one gives you a playwright instance, second one just screenshot + raw HTML).
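A rough sketch of the extract-then-diff idea in shell, using shot-scraper to render and lynx to strip the page to text (both just stand-ins for whatever renderer/extractor you already run; paths are made up):
```bash
#!/usr/bin/env bash
set -euo pipefail

URL="$1"
STORE="snapshots/$(echo "$URL" | md5sum | cut -d' ' -f1)"
mkdir -p "$STORE"

# render the page (JS executed) and reduce it to plain text
shot-scraper html "$URL" -o "$STORE/current.html"
lynx -dump -nolist "$STORE/current.html" > "$STORE/current.txt"

# show what changed since the last run, then rotate the snapshot
if [ -f "$STORE/previous.txt" ]; then
  diff -u "$STORE/previous.txt" "$STORE/current.txt" || true
fi
mv "$STORE/current.txt" "$STORE/previous.txt"
```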
For anyone who might not be aware, Chrome also has the ability to save screenshots from the command line:
chrome --headless --screenshot="path/to/save/screenshot.png" --disable-gpu --window-size=1280,720 "https://www.example.com"