Wikipedia is struggling with voracious AI bot crawlers
99 comments
April 2, 2025 · diggan
roenxi
I've written some unfathomably bad web crawlers in the past. Indeed, web crawlers might be the most natural magnet for bad coding and eye-twitchingly questionable architectural practices I know of. While it likely isn't the major factor here I can attest that there are coders who see pages-articles-multistream.xml.bz2 and then reach for a wget + HTML parser combo.
If you don't live and breathe Wikipedia, figuring out Wikipedia's XML format and markup language soaks up a lot of time, not to mention re-learning how to parse XML. HTTP requests and bashing through the HTML are everyday web skills and familiar scripting, more reflexive and better understood. The right way would probably be much easier, but figuring it out feels like it will take too long.
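For the record, streaming pages out of the dump is less code than it looks. A rough, untested sketch in Python (the file name and the export namespace version are assumptions, they vary by dump):

    import bz2
    import xml.etree.ElementTree as ET

    # Rough sketch: stream pages out of a pages-articles-multistream dump
    # without loading the whole multi-GB file into memory.
    # The export namespace version varies by dump; check the file header.
    NS = "{http://www.mediawiki.org/xml/export-0.11/}"

    def iter_pages(path):
        with bz2.open(path, "rb") as f:
            for _event, elem in ET.iterparse(f):
                if elem.tag == NS + "page":
                    title = elem.findtext(NS + "title")
                    text = elem.findtext(f"{NS}revision/{NS}text") or ""
                    yield title, text
                    elem.clear()  # free the subtree we just processed

    for title, wikitext in iter_pages("enwiki-latest-pages-articles-multistream.xml.bz2"):
        print(title, len(wikitext))  # do something useful here instead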
Although that is all pre-ChatGPT logic. Now I'd start by asking it to solve my problem.
a2128
You don't even need to deal with any XML formats or anything, they publish a complete dataset on Huggingface that's just a few lines to load in your Python training script
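Something along these lines, if I remember the dataset right (the name and snapshot config are from memory, so treat them as assumptions and check the hub for the current ones):

    from datasets import load_dataset

    # Dataset name and snapshot are illustrative; check the Hugging Face hub
    # for the current Wikimedia dump configs.
    wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

    print(wiki[0]["title"])
    print(wiki[0]["text"][:200])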
jerf
To be a "good" web crawler, you have to go beyond "not bad coding". If you just write the natural "fetch page, fetch next page, retry if it fails" loop, notably missing any sort of wait between fetches so that you fetch as quickly as possible, you are already a pest. You don't even need multiple threads or machines to be a pest; a single machine on a home connection fetching pages as quickly as it can will already be a pest to a website with heavy backend computation or DB demands. Do an equally naive "run on a couple dozen threads" upgrade to your code and you expand the blast radius of your pestilence out to even more web sites.
Being a truly good web crawler takes a lot of work, and being a polite web crawler takes yet more different work.
And then, of course, you add the bad coding practices on top of it: ignoring robots.txt, or using robots.txt as a list of URLs to scrape (which can be either deliberate or accidental); hammering the same pages over and over; preferentially "retrying" the very pages that are timing out, because you found the page that locks the DB for 30 seconds in a hard query that even the website owners themselves didn't know was possible until you showed them by taking down the rest of their site in the process... it just goes downhill from there. Being "not bad" is already not good enough, and there's plenty of "bad" out there.
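For contrast, the skeleton of a merely polite fetch loop looks something like this sketch (the URLs and contact address are placeholders, and real politeness also needs per-host queues, caching, conditional requests, etc.):

    import time
    import urllib.robotparser
    import requests

    USER_AGENT = "example-crawler/0.1 (contact: ops@example.org)"  # identify yourself

    rp = urllib.robotparser.RobotFileParser("https://example.org/robots.txt")
    rp.read()
    delay = rp.crawl_delay(USER_AGENT) or 10  # honor Crawl-delay, default to something slow

    def polite_get(url, retries=3):
        backoff = delay
        for _ in range(retries):
            resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
            if resp.status_code in (429, 503):  # the server is telling you to slow down
                time.sleep(backoff)
                backoff *= 2                    # back off instead of hammering the slow page
                continue
            return resp
        return None

    for url in ["https://example.org/a", "https://example.org/b"]:
        if not rp.can_fetch(USER_AGENT, url):
            continue
        page = polite_get(url)
        time.sleep(delay)  # the single most important line: wait between fetches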
marginalia_nu
I think most crawlers inevitably tend to turn into spaghetti code because of the number of weird corner cases you need to deal with.
Crawlers are also incredibly difficult to test in a comprehensive way. No matter what test scenarios you come up with, there are a hundred more weird cases in the wild. (e.g. there's a world of difference between a server taking a long time to respond to a request, and a server sending headers quickly but taking a long time to send the body)
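That last case bites people because a read timeout usually only bounds the gap between bytes, not the whole response. A rough sketch of the kind of guard you end up writing (using requests; the limits here are made-up numbers):

    import time
    import requests

    def fetch_with_deadline(url, total_deadline=30, max_bytes=10_000_000):
        # The (connect, read) timeout only bounds the gap between bytes, so a
        # server that trickles the body can hold you far longer than "10 seconds".
        # Hence the extra wall-clock deadline and size cap while streaming.
        start = time.monotonic()
        with requests.get(url, timeout=(5, 10), stream=True) as resp:
            body = bytearray()
            for chunk in resp.iter_content(chunk_size=8192):
                body.extend(chunk)
                if time.monotonic() - start > total_deadline:
                    raise TimeoutError(f"{url}: body still trickling in after {total_deadline}s")
                if len(body) > max_bytes:
                    raise ValueError(f"{url}: response exceeded {max_bytes} bytes")
            return bytes(body)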
soco
You'd probably ask ChatGPT to write you a crawler for Wikipedia without thinking to ask whether there's a better way to get Wikipedia data, so the dump download would be missed. How and what we ask the AI remains very important. This isn't actually new: googling skills were recognized as important before, and even philosophers recognized that asking good questions is crucial.
joquarky
> Why would you crawl the web interface when the data is so readily available in an even better format?
Have you seen the lack of experience that is getting through the hiring process lately? It feels like 80% of the people onboarding are only able to code to pre-existing patterns without an ability to think outside the box.
I'm just bitter because I have 25 years of experience and can't even get a damn interview no matter how low I go on salary expectations. I obviously have difficulty in the soft skills department, but companies who need real work to get done reliably used to value technical skills over social skills.
Cthulhu_
Because the scrapers they use aren't targeted, they just try to index the whole internet. It's easier that way.
johannes1234321
While the dump may be simpler to consume, building a pipeline around it isn't simpler.
The generic web crawler works (more or less) everywhere. The Wikipedia dump solution works on Wikipedia dumps.
Also keep in mind: this is tied in with search engines and other places where the AI bot follows links from search results, etc. They'd need extra logic to detect a Wikipedia link, find the matching article in the dump, and then add the original link back as a reference for the source.
Also, one article on this mentioned spikes around deaths of public figures, etc.; in that scenario they want the latest version of the article, not a day-old dump.
So yeah, I guess they used the simple, straightforward way and didn't care much about the consequences.
diggan
I'm not sure this is what is currently affecting them the most; the article mentions this:
> Since AI crawlers tend to bulk read pages, they access obscure pages that have to be served from the core data center.
So it doesn't seem to be driven by "search the web for keywords, follow links, slurp content" but by trying to read a big batch of pages at once, then moving on to another batch, which suggests mass ingestion rather than acting as a user-agent for an actual user.
But maybe I'm reading too much into the specifics of the article; I'll confess I don't have any particular inside insight into the problem they're facing.
marginalia_nu
I think most of these crawlers just aren't very well implemented. Takes a lot of time and effort to get crawling to work well, very easy to accidentally DoS a website if you don't pay attention.
is_true
This is what you get when an AI generates your code and your prompts are vague.
cowsaymoo
Vibe coded crawlers
iamacyborg
With the way transclusion works in MediaWiki, dumps and the wiki APIs are often not very useful, unfortunately.
mzajc
There are crawlers that will recursively crawl source repository web interfaces (cgit et al, usually expensive to render) despite having a readily available URL they could clone from. At this point I'm not far from assuming malice over sheer incompetence.
delichon
We're having the same trouble with a few hundred sites that we manage. Crawlers that obey robots.txt are no problem, since we ask for one visit per 10 seconds, which is manageable. The problem seems to be mostly the greedy bots that request as fast as we can reply. So my current plan is to set rate limiting for everyone, bots or not. But even doing stats on the logs, it isn't easy to figure out a limit that won't bounce legit human visitors.
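In case anyone is doing the same exercise, the log-crunching side is roughly this sketch (the log path and regex are assumptions for a combined-format access log; adjust for your setup):

    import re
    from collections import Counter

    # Sketch: count requests per (IP, minute) from a combined-format access log,
    # then look at the high percentiles to pick a limit that won't hit humans.
    LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^:]+:\d+:\d+):\d+')

    counts = Counter()
    with open("access.log") as f:
        for line in f:
            m = LINE.match(line)
            if m:
                counts[(m["ip"], m["ts"])] += 1  # requests per IP per minute

    rates = sorted(counts.values())
    for pct in (50, 95, 99, 99.9):
        idx = min(len(rates) - 1, int(len(rates) * pct / 100))
        print(f"p{pct}: {rates[idx]} requests/minute")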
The bigger problem is that the LLMs are so good that their users no longer feel the need to visit these sites directly. It looks like the business model of most of our clients is becoming obsolete. My paycheck is downstream of that, and I don't see a fix for it.
MadVikingGod
I wonder if there is a WAF that has an exponential backoff and constant decay for the delay. Something like: start at 10µs and decay 1µs/s.
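Roughly what I have in mind, as a sketch (per-client; the numbers are the illustrative ones above):

    import time
    from collections import defaultdict

    # Per-client penalty delay: grow exponentially on each request,
    # decay by a constant amount per idle second.
    BASE = 10e-6           # start at 10 microseconds
    GROWTH = 2.0           # double on every hit
    DECAY_PER_SEC = 1e-6   # shed 1 microsecond of delay per second of idleness

    class Throttle:
        def __init__(self):
            self.state = defaultdict(lambda: (0.0, time.monotonic()))  # client -> (delay, last_seen)

        def delay_for(self, client):
            delay, last = self.state[client]
            now = time.monotonic()
            delay = max(0.0, delay - DECAY_PER_SEC * (now - last))  # linear decay while idle
            delay = BASE if delay == 0 else delay * GROWTH          # exponential growth on a hit
            self.state[client] = (delay, now)
            return delay  # the WAF would hold the response this long

    throttle = Throttle()
    for _ in range(5):
        print(throttle.delay_for("203.0.113.7"))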
aucisson_masque
People have got to make bots pay. That's the only way to get rid of this worldwide DDoSing backed by multi-billion-dollar companies.
There are captchas to block bots, or at least make them pay money to solve them. Some people in the Linux community have also made tools to combat this; I think something that burns a little CPU energy.
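I think that's the proof-of-work idea (Anubis and friends, if I remember right). In miniature it's something like this sketch; real deployments run the solving step as JS in the visitor's browser, and the difficulty number here is just illustrative:

    import hashlib
    import secrets

    # Hashcash-style proof of work: the server hands out a nonce, the client
    # must find a counter whose SHA-256 starts with N zero bits.
    # Cheap for one human page view, expensive at crawler scale.
    DIFFICULTY = 16  # leading zero bits required; tune to taste

    def issue_challenge():
        return secrets.token_hex(16)

    def solve(challenge):
        counter = 0
        while True:
            digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return counter
            counter += 1

    def verify(challenge, counter):
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

    chal = issue_challenge()
    proof = solve(chal)         # the client pays this CPU cost
    assert verify(chal, proof)  # the server checks it almost for free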
And at the same time, you offer an API that's less expensive than the cost of crawling, and everyone wins.
Multi-billion-dollar companies get their sweet, sweet data, Wikipedia gets money to improve its infrastructure or whatever, and users benefit from quality engagement with Wikipedia.
guerrilla
This is an interesting model in general: free for humans, pay for automation. How do you enforce that, though? Captchas sound like a waste.
jerf
Any plan that starts with "Step one: Apply the tool that almost perfectly distinguishes human traffic from non-human traffic" is doomed to failure. That's whatever the engineering equivalent of "begging the question" is, where the solution to the problem is that we assume that we have the solution to the problem.
zokier
Identity verification is not that far-fetched these days. For Europeans you've got eIDAS and related tech, some other places have similar schemes, and for the rest of the world you can do video-based ID checks. There are plenty of providers that handle this; it's pretty commonplace stuff.
scoofy
Honeypots in JS and CSS
I've been dealing with this over at golfcourse.wiki for the last couple years. It fucking sucks. The good news is that all the idiot scrapers who don't follow robots.txt seem to fall for the honeypots pretty easily.
Make one honeypot disappear with a big CSS file, make another one disappear with a JS file. Humans aren't aware they're there; bots won't avoid them. Programming a bot to look only for visible links, rather than invisible ones, is challenging. The thing is, these scraper programmers are ubiquitous, and since they're ubiquitous, most of them aren't going to be geniuses.
Honeypot -> autoban
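The whole pattern is maybe twenty lines. A bare-bones sketch in Flask (the honeypot route name is made up; the real thing needs a persistent ban list, proxy-aware client IPs, and an allowlist for legitimate bots):

    from flask import Flask, abort, request

    app = Flask(__name__)
    banned = set()  # in production: a shared store, keyed by the real client IP

    # The "hp" class is buried in the site's big CSS file as { display: none },
    # so humans never see the link; naive scrapers follow it anyway.
    HONEYPOT = '<a class="hp" href="/site-stats-export">stats</a>'  # hypothetical URL

    @app.before_request
    def reject_banned():
        if request.remote_addr in banned:
            abort(403)

    @app.route("/site-stats-export")
    def honeypot():
        banned.add(request.remote_addr)  # honeypot -> autoban
        abort(403)

    @app.route("/")
    def index():
        return "<html><body>real content " + HONEYPOT + "</body></html>"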
karn97
Why not just rate limit every user to realistic human rates? You'd only punish anyone behaving like a bot.
mrweasel
Because, as pointed out in another post about the same problem: many of these scrapers make one or two requests from one IP and then move on.
guerrilla
Sold. Pay by page retrieval rate.
graemep
Wikipedia provides dumps. Probably cheaper and easier than crawling it. Given the size of Wikipedia it would be well worth a little extra code. It also avoids the risk of getting blocked, and is more reliable.
It suggests to me that people running AI crawlers are throwing resources at the problem with little thought.
voidUpdate
Maybe they just vibe-coded the crawlers and that's why they don't work very well or know the best way to do it
milesrout
We shouldn't use that term. "Vibe coding". Nope. Brainless coding. That is what it is. It's what beginners do: programming without understanding what they--or their programs--are doing. The additional use of a computer program that brainlessly generates brainless code to complement their own brainless code doesn't mean we should call what they are doing by a new name.
laz
10 years ago at Facebook we had a systems design interview question called "botnet crawl" where the set up that I'd give would be:
I'm an entrepreneur who is going to get rich selling printed copies of Wikipedia. I'll pay you to fetch the content for me to print. You get 1000 compromised machines to use. Crawl Wikipedia and give me the data. Go.
Some candidates would (rightfully) point out that the entirety is available as an archive, so for "interviewing purposes" we'd have to ignore that fact.
If it went well, we'd pivot back and forth: OK, you wrote a distributed crawler. Wikipedia hires you to block it. What do you do? This cat-and-mouse game goes on indefinitely.
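One way to sketch the core of the crawler half: hash-partition the URL space so any machine can decide locally which worker owns a link. A toy illustration of my own (just the partitioning step, none of the rest of the exercise):

    import hashlib

    # Stable partitioning: every machine computes the same owner for a URL,
    # so discovered links are forwarded without a central coordinator.
    # (A real answer also needs per-host politeness, retries, checkpointing,
    # and a way to reassign shards when compromised machines die.)
    NUM_WORKERS = 1000

    def owner(url: str) -> int:
        return int(hashlib.sha1(url.encode()).hexdigest(), 16) % NUM_WORKERS

    def handle(url, my_id, frontier, seen, forward):
        if owner(url) != my_id:
            forward(owner(url), url)  # hand off to the worker that owns this URL
        elif url not in seen:
            seen.add(url)
            frontier.append(url)      # this worker will fetch it later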
schneems
I was on a panel with the President of Wikimedia LLC at SXSW and this was brought up. There's audio attached https://schedule.sxsw.com/2025/events/PP153044.
I also like Anna's (Creative Commons) framing of the problem being money + attribution + reciprocity.
chuckadams
We need to start cutting off whole ASNs of ISPs that host such crawlers and distribute a Spamhaus-style block list to that effect. WP should throttle them to serve like one page per minute.
PeterStuer
The weird thing is that their own data does not reflect this at all. The number of articles accessed by users, spiders, and bots alike has not moved significantly over the last few years. Why the strange wording, like "65 percent of the resource-consuming traffic"? Is there non-resource-consuming traffic? Is this just another fundraising marketing drive? Wikimedia has been known to be less than truthful with regard to its funding needs and spending.
https://stats.wikimedia.org/#/all-projects/reading/total-pag...
diggan
The graph you linked seems to be about article viewing ("page views", like a GET request to https://en.wikipedia.org/wiki/Democracy for example), while the article mentions multimedia content, i.e. fetching the actual bytes of https://en.wikipedia.org/wiki/Democracy#/media/File:Economis... for example, which would consume far more bandwidth than just loading the article pages, as far as I understand.
zokier
multimedia content vs articles. It's easy to see how bad scraping of videos and images pushes bandwidth up more than just scraping articles.
The resource consuming traffic is clearly explained in the linked post:
> This means these types of requests are more likely to get forwarded to the core datacenter, which makes it much more expensive in terms of consumption of our resources.
I.e., the difference between cached content at the CDN edge vs. hits to core services.
perching_aix
I thought all of Wikipedia can be downloaded directly if that's the goal? [0] Why scrape?
[0] https://en.wikipedia.org/wiki/Wikipedia:Database_download
netsharc
Someone's gotta tell the LLMs that when a prompt-kiddie asks them to build a scraper bot, they should reply with "I suggest downloading the database instead".
tiagod
This is the first time I'm reading "prompt-kiddie", made me chuckle hard. Jumped straight into my vocabulary :-)
werdnapk
Turns out AI isn't smart enough to figure this out yet.
jerven
Working for an open-data project, I am starting to believe that the AI companies are basically criminal enterprises. If I did this kind of thing to them, they would call the cops and say I am a criminal for breaking the TOS and running a DDoS; by that same standard they are criminal organizations and their CEOs should be in Alcatraz.
shreyshnaccount
They will DDoS the open internet to the point where only big tech will be able to afford to host even the most basic websites? Is that the endgame?
qwertox
Maybe the big tech providers should play fair and host the downloadable database for those bots as well as crawlable mirrors.
This has to be one of the strangest targets to crawl, since they themselves make database dumps available for download (https://en.wikipedia.org/wiki/Wikipedia:Database_download) and, if that weren't enough, there are 3rd-party dumps as well (https://library.kiwix.org/#lang=eng&category=wikipedia) that you could use if the official ones aren't good enough for some reason.
Why would you crawl the web interface when the data is so readily available in an even better format?