OpenAI's bot crushed this seven-person company's web site 'like a DDoS attack'
109 comments
January 10, 2025
ericholscher
This keeps happening -- we wrote about multiple AI bots that were hammering us over at Read the Docs for >10TB of traffic: https://about.readthedocs.com/blog/2024/07/ai-crawlers-abuse...
They really are trying to burn all their goodwill to the ground with this stuff.
PaulHoule
In the early 2000s I was working at a place that Google wanted to crawl so badly that they gave us a hotline number to call if their crawler was giving us problems.
We were told at the time that "robots.txt" enforcement was the one thing they had that wasn't fully distributed; it's a devilishly difficult thing to implement.
It boggles my mind that people with the kind of budget these companies have are still struggling to implement crawling right 20 years later, though. It's nice those folks got a rebate.
One of the reasons people are testy today is that you pay by the GB with cloud providers; about 10 years ago I kicked out the sinosphere crawlers like Baidu because they were generating something like 40% of the traffic on my site, crawling it over and over again and not sending even a single referrer.
jgalt212
I've found Googlebot has gotten a bit wonky lately: 10X the usual crawl rate, and
- they don't respect the Crawl-Delay directive
- Google Search Console reports 429s as 500s
https://developers.google.com/search/docs/crawling-indexing/...
maiku2501
I have found Google severely declining in engineering quality. On January 8th, 2025, they stopped accepting JCB credit cards and emailed customers that their payment info was invalid and would be suspended (search Twitter for examples in Japanese). It seems it was a bug, but there was no explanation to the customers receiving the notification, and opening a ticket resulted in it being closed immediately while being lied to (my only guess is they wanted to boost their metrics). How was this not quality-checked in the first place? I guess Google has a policy of recording the chat transcript (where the lies are recorded), but it means nothing when the company doesn't care. I don't like it, but AWS seems the next logical place to move business to. As far as I can tell, the support there is real.
TuringNYC
Serious question - if robots.txt is not being honored, is there a risk of a class action from tens of thousands of small sites against both the companies doing the crawling and the individual directors/officers of those companies? It seems there would be some recourse if this is done at a large enough scale.
krapp
No. robots.txt is not in any way a legally binding contract, no one is obligated to care about it.
vasco
If I have a "no publicity" sign on my mailbox and you dump 500 lbs of flyers and magazines by my door every week for a month and cause me to lose money dealing with all the trash, I think I'd have reasonable grounds to sue even if there's no contract saying you need to respect my wish.
At the end of the day, the claim is that someone's action caused someone else undue financial burden in a way that is not easily prevented beforehand, so I wouldn't say it's a 100% clear case, but I'm also not sure a judge wouldn't entertain it.
ericmcer
You can sue over literally anything; the parent commenter could sue you if they could demonstrate your reply damaged them in some way.
jdenning
We need a way to apply a click-through "user agreement" to crawlers
Uptrenda
Hey man, I wanted to say good job on Read the Docs. I use it for my Python project and find it an absolute pleasure to use. I write my stuff in reStructuredText, make lots of pretty diagrams (lol), and am slowly making my docs easier to use. Good stuff.
Edit 1: I'm surprised by the bandwidth costs. I use Hetzner and OVH and the bandwidth is free, though you manage the bare-metal server yourself. Would Read the Docs ever consider switching to self-managed hosting to save on cloud hosting costs?
huntoa
Did I read it right that you pay $62.50/TB?
exe34
can you feed them gibberish?
blibble
here's a nice project to automate this: https://marcusb.org/hacks/quixotic.html
couple of lines in your nginx/apache config and off you go
my content rich sites provide this "high quality" data to the parasites
Groxx
LLMs poisoned by https://git-man-page-generator.lokaltog.net/ -like content would be a hilarious end result, please do!
jcpham2
This would be my elegant solution, something like an endless recursion with a gzip bomb at the end if I can identify your crawler and it’s that abusive. Would it be possible to feed an abusing crawler nothing but my own locally-hosted LLM gibberish?
But then again, if you're in the cloud, egress bandwidth is going to cost you for playing this game.
Better to just deny the OpenAI crawler and send them an invoice for the money and time they've wasted. It's an interesting form of data warfare against competitors and non-competitors alike. The winner will have the longest runway.
actsasbuffoon
It wouldn’t even necessarily need to be a real GZip bomb. Just something containing a few hundred kb of seemingly new and unique text that’s highly compressible and keeps providing “links” to additional dynamically generated gibberish that can be crawled. The idea is to serve a vast amount of poisoned training data as cheaply as possible. Heck, maybe you could even make a plugin for NGINX to recognize abusive AI bots and do this. If enough people install it then you could provide some very strong disincentives.
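For illustration, a rough sketch of what such an endpoint could look like, using only Python's standard library (the wordlist, port, and link scheme are all made up for the example):

    # Hypothetical sketch: serve endless, highly compressible gibberish pages
    # that link to more gibberish, so abusive crawlers waste their own time
    # while costing the server almost nothing.
    import gzip
    import hashlib
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    WORDS = ["data", "model", "token", "crawl", "index", "cache", "vector", "graph"]

    class GibberishHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Seed from the path so every URL is stable but looks "unique".
            seed = int(hashlib.sha256(self.path.encode()).hexdigest(), 16)
            rng = random.Random(seed)
            words = " ".join(rng.choice(WORDS) for _ in range(2000))
            links = "".join(
                f'<a href="/page/{rng.randrange(10**9)}">more</a> ' for _ in range(20)
            )
            # Crude: assumes the client accepts gzip; repetitive text compresses well.
            body = gzip.compress(f"<html><body><p>{words}</p>{links}</body></html>".encode())
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), GibberishHandler).serve_forever()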
jcgrillo
[flagged]
jsheard
Judging by how often these scrapers keep pulling the same pages over and over again I think they're just hoping that more data will magically come into existence if they check enough times. Like those vuln scanners which ping your server for Wordpress exploits constantly just in case your not-Wordpress site turned into a Wordpress site since they last looked 5 minutes ago.
KTibow
I personally predict this won't be as bad as it sounds since training on synthetic data usually goes well (see Phi)
spacecadet
While Phi is a good example of this technique, Phi as a model is very anemic. It was recently part of a CTF hosted by Microsoft, where other models were also included (I assume MS was looking to test the performance of Phi against the competition), and Phi performed the worst. Its outputs were easier to predict, and it was quicker to construct injection attacks and jailbreaks against it. All models utilized the same defenses. Having also trained and fine-tuned models using synthetic data, I have seen this approach increase determinism and predictability. Some might see this as a good thing, but I think it depends: on one hand it opens the model to several adversarial attacks such as jailbreaking, extraction, etc.; on the other hand some consumers may prefer less random outputs.
joelkoen
> “OpenAI used 600 IPs to scrape data, and we are still analyzing logs from last week, perhaps it’s way more,” he said of the IP addresses the bot used to attempt to consume his site.
The IP addresses in the screenshot are all owned by Cloudflare, meaning that their server logs are only recording the IPs of Cloudflare's reverse proxy, not the real client IPs.
Also, the logs don't show any timestamps and there doesn't seem to be any mention of the request rate in the whole article.
I'm not trying to defend OpenAI, but as someone who scrapes data I think it's unfair to throw around terms like "DDoS attack" without providing basic request rate metrics. This seems to be purely based on the use of multiple IPs, which was actually caused by their own server configuration and has nothing to do with OpenAI.
mvdtnz
Why should web store operators have to be so sophisticated to use the exact right technical language in order to have a legitimate grievance?
How about this: these folks put up a website in order to serve customers, not for OpenAI to scoop up all their data for their own benefit. In my opinion data should only be made available to "AI" companies on an opt-in basis, but given today's reality OpenAI should at least be polite about how they harvest data.
griomnib
I've been a web developer for decades, as well as doing scraping, indexing, and analysis of millions of sites.
Just follow the golden rule: don’t ever load any site more aggressively than you would want yours to be.
This isn’t hard stuff, and these AI companies have grossly inefficient and obnoxious scrapers.
As a site owner this pisses me off as a matter of decency on the web, but as an engineer doing distributed data collection I'm offended by how shitty and inefficient their crawlers are.
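The polite version really isn't much code. A rough sketch in Python, using the requests library (the user-agent string and delay floor are placeholders):

    # Sketch of a polite single-host crawler: respect robots.txt, honor
    # Crawl-delay, and never hit the site faster than a fixed floor.
    import time
    import urllib.robotparser
    from urllib.parse import urljoin

    import requests

    USER_AGENT = "example-polite-crawler/0.1"  # placeholder name
    MIN_DELAY = 2.0  # seconds between requests, as a floor

    def crawl(start_url, max_pages=100):
        robots = urllib.robotparser.RobotFileParser()
        robots.set_url(urljoin(start_url, "/robots.txt"))
        robots.read()
        delay = max(robots.crawl_delay(USER_AGENT) or 0, MIN_DELAY)

        queue, seen = [start_url], set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen or not robots.can_fetch(USER_AGENT, url):
                continue
            seen.add(url)
            resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
            if resp.status_code == 429:
                time.sleep(delay * 10)  # back off hard when asked to slow down
            # ...parse resp.text and extend queue with discovered links...
            time.sleep(delay)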
PaulHoule
I worked at one place where it probably cost us 100x more (in CPU) to serve content the way we were doing it as opposed to the way most people would do it. We could afford it because it was still cheap, but we deferred the cost-reduction work for half a decade and went to war against webcrawlers instead. (Hint: who introduced the robots.txt standard?)
add-sub-mul-div
These people think they're on the verge of the most important invention in modern history. Etiquette means nothing to them. They would probably consider an impediment to their work a harm to the human race.
krapp
>They would probably consider an impediment to their work a harm to the human race.
They do. Marc Andreessen said as much in his "techno-optimist manifesto": that any hesitation or slowdown in AI development or adoption is equivalent to mass murder.
add-sub-mul-div
I want to believe he's bullshitting to hype it up for profit because at least that's not as bad as if it was sincere.
griomnib
Yeah but it’s just shit engineering. They re-crawl entire sites basically continuously absent any updates or changes. How hard is it to cache a fucking sitemap for a week?
It’s a waste of bandwidth and CPU on their end as well, “the bitter lesson” isn’t “keep duplicating the same training data”.
I'm glad DeepSeek is showing how inefficient and dogshit most frontier model engineering is - how much VC is getting burned literally redownloading a copy of the entire web daily when like <1% of it is new data.
I get they have no shame economically, that they are deluded and greedy. But bad engineering is another class of sin!
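Conditional revalidation is a one-screen fix; for what it's worth, a minimal sketch in Python with the requests library (the in-memory dict cache is just for illustration):

    # Sketch: re-fetch a sitemap (or any page) only when the server says it
    # changed, using ETag / Last-Modified validators instead of re-downloading.
    import requests

    cache = {}  # url -> (etag, last_modified, body)

    def fetch_if_changed(url):
        etag, last_modified, body = cache.get(url, (None, None, None))
        headers = {}
        if etag:
            headers["If-None-Match"] = etag
        if last_modified:
            headers["If-Modified-Since"] = last_modified
        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code == 304:
            return body  # unchanged; the body was not re-downloaded
        cache[url] = (resp.headers.get("ETag"),
                      resp.headers.get("Last-Modified"),
                      resp.text)
        return resp.text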
mingabunga
We've had to block a lot of these bots as they slowed our technical forum to a crawl, but new ones appear every now and again. Amazon's was the worst.
griomnib
I really wonder if these dogshit scrapers are wholly built by LLM. Nobody competent codes like this.
jonas21
It's "robots.txt", not "robot.txt". I'm not just nitpicking -- it's a clear signal the journalist has no idea what they're talking about.
That and the fact that they're using a log file with the timestamps omitted as evidence of "how ruthlessly an OpenAI bot was accessing the site" makes the claims in the article a bit suspect.
OpenAI isn't necessarily in the clear here, but this is a low-quality article that doesn't provide much signal either way.
ted_bunny
The best way to tell a journalist doesn't know their subject matter: check if they're a journalist.
peterldowns
Well said, I agree with you.
Thoreandan
Hear hear. Poor article going out the door for publication with zero editorial checking.
joelkoen
Haha yeah just noticed they call Bytespider "TokTok's crawler" too
spwa4
It's funny how history repeats. The web originally grew because it was a way to get "an API" into a company. You could get information without a phone call. Then, with forms and credit cards and eventually with actual APIs, you could get information and get companies to do stuff programmatically. For a short while this was possible.
Now everybody calls this abuse. And a lot of it is abuse, to be fair.
Now that has been mostly blocked. Every website tries really hard to block bots (and mostly fails, because Google funds its crawler with millions of dollars while companies raise a stink over paying a single SWE), but we're still at the point where automated interactions with companies (through third-party services, for example) are not really possible. I cannot give my credit card info to a company and have it order my favorite foods to my home every day, for example.
What AI promises, in a way, is to re-enable this. Because AI bots are unblockable (they're more human than humans as far as these tests are concerned). For companies, and for users. And that would be a way to... put APIs into people and companies again.
Back to step 1.
afavour
I see it as a different history repeating: venture capital inserting itself as the middleman between people and the things they want. If all of our interactions with external websites now go through ChatGPT, that gives OpenAI a phenomenal amount of power. Just like Google did with search.
spwa4
Well, it's not just that. Every company insists on doing things differently and usually in annoying ways. Having a way to deal with companies while avoiding their internal policies (e.g. upselling, "retention team", ...) would be very nice.
Yes, VCs want this because it's an opportunity for a double-sided marketplace, but I still want it too.
I wonder to what extent what these FANG businesses want with AI can be described as just "an API into businesses that don't want to provide an API".
PaulHoule
First time I heard this story it was '98 or so, and the perp was somebody in the overfunded CS department and the victim somebody in the underfunded math department on the other side of a short and fat pipe. (Probably running Apache httpd on an SGI workstation without enough RAM to even run Win '95.)
In years of running webcrawlers I've had very little trouble, yet I've had more trouble in the last year than in the previous 25. (Wrote my first crawler in '99; funny how my crawlers have gotten simpler over time, not more complex.)
In one case I found a site got terribly slow although I was hitting it at much less than 1 request per second. Careful observation showed the wheels were coming off the site and it had nothing to do with me.
There's another site that I've probably crawled in its entirety at least ten times over the past twenty years. I have a crawl from two years ago; my plan was to feed it into a BERT-based system, not for training but to discover content that is like the content that I like. I thought I'd get a fresh copy with httrack (polite, respects robots.txt, ...) and they blocked both my home IP addresses in 10 minutes. (Granted, I don't think the past 2 years of this site were as good as what came before, so I will just load what I have into my semantic search & tagging system and use that instead.)
I was angry about how unfair the Google Economy was in 2013, in line with what this blogger has been saying ever since
(I can say it's a strange way to market an expensive SEO community, but...) and it drives me up the wall that people looking in the rear-view mirror are getting upset about it now.
Back in '98 I was excited about "personal webcrawlers" that could be your own web agent. On one hand LLMs could give so much utility in terms of classification, extraction, clustering and otherwise drinking from that firehose, but the fear that somebody is stealing their precious creativity is going to close the door forever... and entrench a completely unfair Google Economy. It makes me sad.
----
Oddly those stupid ReCAPTCHAs and Cloudflare CAPTCHAs torment me all the time as a human but I haven't once had them get in the way of a crawling project.
Hilift
People who have published books on Amazon recently have noticed that fraudulent knockoff copies, with the title slightly changed, appear almost immediately. These are created by AI and compete with the human authors. A person this happened to was recently interviewed about their experience on the BBC.
vzaliva
From the article:
"As Tomchuk experienced, if a site isn’t properly using robot.txt, OpenAI and others take that to mean they can scrape to their hearts’ content."
The takeaway: check your robots.txt.
The question of how much load robots can reasonably generate when they are allowed to crawl is a separate matter.
krapp
Also probably consider blocking them with .htaccess or your server's equivalent, such as here: https://ethanmarcotte.com/wrote/blockin-bots/
All this effort is futile because AI bots will simply send false user agents, but it's something.
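The equivalent check in application code might look something like this Python sketch (the user-agent substrings are illustrative, not an exhaustive list):

    # Sketch: refuse known AI crawler user agents before doing any real work.
    BLOCKED_UA_SUBSTRINGS = ("GPTBot", "CCBot", "Bytespider", "Amazonbot")

    def is_blocked(user_agent):
        ua = (user_agent or "").lower()
        return any(s.lower() in ua for s in BLOCKED_UA_SUBSTRINGS)

    # In a request handler: if is_blocked(request.headers.get("User-Agent")): return 403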
Sesse__
I took my most-bothered page IPv6-only, and the AI bots vanished in the course of a couple of days :-) (Hardly any complaints from actual users yet. Not zero, though.)
OutOfHere
Sites should learn to use HTTP error 429 to slow down bots to a reasonable pace. If the bots are coming from a subnet, apply it to the subnet, not to the individual IP. No other action is needed.
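A rough sketch of that bucketing in application code, in Python (the window and threshold are invented for illustration, and it assumes IPv4; a wider prefix would be needed for IPv6):

    # Sketch: rate-limit by /24 subnet and answer 429 instead of serving the page.
    import ipaddress
    import time
    from collections import defaultdict

    WINDOW = 60         # seconds
    MAX_REQUESTS = 600  # per subnet per window (made-up threshold)
    hits = defaultdict(list)  # subnet -> recent request timestamps

    def should_throttle(client_ip):
        subnet = ipaddress.ip_network(f"{client_ip}/24", strict=False)
        now = time.monotonic()
        recent = [t for t in hits[subnet] if now - t < WINDOW]
        recent.append(now)
        hits[subnet] = recent
        return len(recent) > MAX_REQUESTS  # if True, respond with 429 + Retry-After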
Sesse__
I've seen _plenty_ of user agents that respond to 429 by immediately trying again. Like, literally immediately; full hammer. I had to eventually automatically blackhole IP addresses that got 429 too often.
OutOfHere
That is just what a bot does by default. It will almost always give up after a few retries.
The point of 429 is that you will not be using up your limited bandwidth sending the actual response, which will save you at least 99% of your bandwidth quota. It is not to find IPs to block, especially if the requestor gives up after a few requests.
The IPs that you actually need to block are the ones that are actually DoSing you without stopping even after a few retries, and even then only temporarily.
Sesse__
> That is just what a bot does by default. It will almost always give up after a few retries.
I've had them go on for hours. (This is not mainly bots, but various crap desktop applications.)
> The IPs that you actually need to block are the ones that are actually DoSing you without stopping even after a few retries, and even then only temporarily.
Yup. They only get blocked until next reboot :-)
jcgrillo
It seems like it should be pretty cheap to detect violations of Retry-After on a 429 and just automatically blackhole that IP for idk 1hr.
It could also be an interesting dataset for exposing the IPs those shady "anonymous scraping" comp intel companies use..
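A sketch of that bookkeeping in Python (the retry window and blackhole duration are arbitrary):

    # Sketch: if a client that was just sent a 429 with Retry-After comes back
    # too early, blackhole its IP for an hour.
    import time

    RETRY_AFTER = 30      # seconds the 429 response told the client to wait
    BLACKHOLE_FOR = 3600  # seconds

    last_429 = {}    # ip -> time we sent the 429
    blackholed = {}  # ip -> time the block expires

    def handle_request(ip, over_limit):
        now = time.monotonic()
        if blackholed.get(ip, 0) > now:
            return "drop"                         # still blackholed
        if ip in last_429 and now - last_429[ip] < RETRY_AFTER:
            blackholed[ip] = now + BLACKHOLE_FOR  # came back too soon
            return "drop"
        if over_limit:
            last_429[ip] = now
            return "429"                          # send Retry-After: 30
        return "serve"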
methou
I used to have a problem with some Chinese crawlers. First I told them no with robots.txt; then I saw a swarm of non-bot user agents from cloud providers in China, so I blocked their ASN; and then I saw another rise of IPs from some Chinese ISP, so eventually I had to block the entire country_code = cn and just show them a robots.txt.
tonetegeatinst
What options exist if you want to handle this traffic and you own your hardware on prem?
It seems that any router or switch over 100G is extremely expensive, and often requires some paid-for OS.
The pro move would be to not block these bots. Well, I guess block them if you truly can't handle their request throughput (would an ASN blacklist work?).
Or, if you want to force them to slow down, start sending data but only answer a random percentage of requests (say, ignore 85% of the traffic they spam you with and reply to the rest at a super low rate, or purposely send bad data).
Or perhaps reach out to your peering partners and talk about traffic shaping these requests.
Recent and related:
AI companies cause most of traffic on forums - https://news.ycombinator.com/item?id=42549624 - Dec 2024 (438 comments)