Anubis saved our websites from a DDoS attack
288 comments · May 1, 2025
mrweasel
jeroenhd
> The modern companies, mostly AI companies, seem to be more interested in flying under the radar and have less respect for the internet infrastructure as a whole
I think that makes a lot of sense. Google's goal is (or perhaps used to be) providing a network of links. The more they scrape you, the more visitors you may end up receiving, and the better your website performs (monetarily, or just in terms of providing information to the world).
With AI companies, the goal is to consume and replace. In their best case scenario, your website will never receive a visitor again. You won't get anything in return for providing content to AI companies. That means there's no reason for website administrators to permit the good ones, especially for people who use subscriptions or ads to support their website operating costs.
eadmund
> With AI companies, the goal is to consume and replace.
I don’t think that’s really true. The AI companies’ goal is to consume and create something else.
> You won't get anything in return for providing content to AI companies.
That was the original problem with websites in general, and the ‘solution’ was ads. It would be really, really cool if the thing which finally makes micropayments happen is AI.
And then we humans could use micropayments too. Of course, the worst of both worlds would be micropayments and ads.
mrweasel
You can have non-commercial websites. Plenty of people have blogs or personal websites, sites that support a business, or sites where you already pay. In this case it was the ScummVM website, an open source project.
A lot of those sites are at risk of being made irrelevant by AI companies who really don't give a shit about your motivations for doing something for free. If their crawler kills your site and their LLM steals views by regurgitating answers based on your work, so be it, you served your purpose.
If you want to talk payment: Ask the AI companies to pay you when they generate an answer based on your work, a license fee. That will kill their business model pretty quickly.
piokoch
Yes, search engines were not hiding, as website owners' interests were involved here as well - without those search bots their sites would not be indexed and searchable on the Internet. So there was a kind of win-win situation, in most typical cases at least; publishers did complain about deep links, etc., because their ad revenue was hurt.
AI scraping bots provide zero value for site owners.
Valodim
Is this really true? If I have a marketing website for a product, isn't it in my interest to have that marketing incorporated in AI models?
philipwhiuk
It's DDoS either way even if it's not an attack.
CaptainFever
> To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.
Anubis is DDoS protection, just with updated marketing. These tools have existed forever, such as CloudFlare Challenges, or https://github.com/RuiSiang/PoW-Shield. Or HashCash.
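The core trick behind all of these is the same hashcash-style check. A minimal sketch of the idea (illustrative only, not any of these projects' actual code):

    package pow

    import (
        "crypto/sha256"
        "encoding/hex"
        "strings"
    )

    // The client must find a nonce such that sha256(challenge + nonce)
    // starts with `difficulty` zero hex digits; the server only has to
    // hash once to verify, which is the whole point.
    func Verify(challenge, nonce string, difficulty int) bool {
        sum := sha256.Sum256([]byte(challenge + nonce))
        return strings.HasPrefix(hex.EncodeToString(sum[:]), strings.Repeat("0", difficulty))
    }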
I keep saying that Anubis really has nothing much to do with AI (e.g. some people might mistakenly think that it magically "blocks AI scrapers"; it only slows down abusive-rate visitors). It really only deals with DoS and DDoS.
I don't understand why people are using Anubis instead of all the other tools that already exist. Is it just marketing? Saying the right thing at the right time?
Imustaskforhelp
I agree with you that it is in fact DDoS protection, but still, given that it is open source and created by a really cool dev (she is awesome), I don't really mind it gaining popularity. And they created it out of their own necessity, which is also really nice.
Anubis is getting real love out there and I think I am all for it. I personally host a lot of my stuff on Cloudflare due to it being free with Cloudflare Workers, but if I ever have a VPS, I am probably going to use Anubis as well.
alias_neo
I'm not sure why there are so many negative comments here. This looks nice, appears to work, is open source and MIT licensed. Why _wouldn't_ I use this?
fmajid
It also doesn’t cede more market power to CloudFlare, which tends to block non-mainstream browsers, users with adblockers, Tor, or cookies and JavaScript disabled.
superkuh
None of the other tools actually work. What I mean is that they block far, far more than they intend to. Anubis actually works on every weird and niche browser I've tried. Which is to say, it lets actual human people through even if they aren't using Chrome.
CloudFlare doesn't do that. Cloudflare's false positive rate is extremely high, as are the other tools'. Mostly because they all depend on bleeding-edge JS and browser functions (CORS, etc.) for fingerprinting.
Cloudflare is for for-profit sites and other situations where you don't care if you block poor people, because they can't give you money anyway. Anubis is for when you want everyone to be able to access your website.
prmoustache
I doubt it works with dillo or lynx.
touggourt
If it doesn't work yet, you can suggest a patch.
JodieBenitez
> I don't understand why people are using Anubis instead of all the other tools that already exist. Is it just marketing? Saying the right thing at the right time?
Care to share existing solutions that can be self-hosted? (genuine question, I like how Anubis works, I just want something with a more neutral look and feel).
moebrowne
If you're using nginx there's this module: https://github.com/simon987/ngx_http_js_challenge_module
JodieBenitez
Unfortunately I'm stuck with Apache at work, but thanks for the suggestion.
dspillett
> I just want something with a more neutral look and feel.
If it is perfect for your needs other than the look, you could update the superficial parts to match your liking?
If it is designed in such a way as to make this difficult, such as if the visible content & styling is tangled within the code rather than all in static assets (I've not looked at the code myself yet), then perhaps raise an issue suggesting that this is changed (or if you are a coder yourself, perhaps do so and raise a pull request for your changes).
Given how popular the tool seems to be becoming, I expect this sort of theming will be an official feature eventually anyway, if you are patient.
Of course the technique it uses is well known and documented, so there may already be other good implementations that match your visual needs without any of the above effort.
JodieBenitez
> If it is perfect for your needs other than the look, you could update the superficial parts to match your liking?
I would, but the author is not ok with this:
> Anubis is provided to the public for free in order to help advance the common good. In return, we ask (but not demand, these are words on the internet, not word of law) that you not remove the Anubis character from your deployment. If you want to run an unbranded or white-label version of Anubis, please contact Xe to arrange a contract.
CaptainFever
I linked https://github.com/RuiSiang/PoW-Shield in my post. Does it work?
JodieBenitez
Thanks, looks better indeed. I will test.
consp
Knowing something exists is half the challenge. Never used it, but maybe ease of use/setup or the license?
GoblinSlayer
The readme explains that it's for the case where you don't use Cloudflare; it's also open source, analogous to PoW Shield, but with lighter dependencies.
GoblinSlayer
Though PoW Shield uses a simple symmetric signature, while Anubis uses ed25519/JWT.
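Roughly, the difference is whether the "you passed" token is signed with a shared secret (HMAC) or with a private key whose public half any frontend can verify. A toy sketch of the asymmetric flavour (not either project's real code):

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "encoding/base64"
        "fmt"
        "time"
    )

    func main() {
        pub, priv, _ := ed25519.GenerateKey(rand.Reader)

        // Hypothetical cookie payload: valid for a week, like the Anubis default.
        payload := fmt.Sprintf("passed=1;exp=%d", time.Now().Add(7*24*time.Hour).Unix())
        sig := ed25519.Sign(priv, []byte(payload))
        cookie := payload + "." + base64.RawURLEncoding.EncodeToString(sig)

        // Anything holding only the public key can check the cookie.
        fmt.Println(cookie, ed25519.Verify(pub, []byte(payload), sig))
    }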
areyourllySorry
pow shield does not offer a furry loading screen so it can't be as good
areyourllySorry
hacker news is not immune to viral marketing
cedws
Fun fact: that PoW-Shield repo is authored by a guy jailed for running a massive darknet market (Incognito.)
areyourllySorry
that's how you know it's good
immibis
marketing plus a product that Just Does The Thing, it seems like. No bullshit.
btw it only works on AI scrapers because they're DDoSes.
CaptainFever
Not all DDoSes are AI-related, and not all AI scrapers are DDoSes.
superkuh
But almost all the DoSes we're talking about are from corporations. The real non-human danger.
chrisnight
> Solving the challenge–which is valid for one week once passed–
One thing that I've noticed recently with the Arch Wiki adding Anubis, is that this one week period doesn't magically fix user annoyances with Anubis. I use Temporary Containers for every tab, which means that I constantly get Anubis regenerating tokens, since the cookie gets deleted as soon as the tab is closed.
Perhaps this is my own problem, but given the state of tracking on the internet, I do not feel it is an extremely out-of-the-ordinary circumstance to avoid saving cookies.
philipwhiuk
I think it's absolutely your problem. You're ignoring all the cache lifetimes on assets.
selfhoster11
OK, so what? Keeping persistent state on your machine shouldn't be mandatory for a comfortable everyday internet browsing experience.
orthecreedence
What then do you suggest as a good middle ground between website publishers and website enjoyers? Doing a one-time challenge and storing the result seems like a really good compromise between all parties. But that's not good enough! So what is?
aseipp
"In a fantasy land that doesn't exist, or maybe last existed decades ago, this wouldn't be needed." OK, that's nice. What does that have to do with reality as it stands today, though?
TiredOfLife
It's not a problem. You have configured your system to show up as a new visitor every time you visit a website. And you are getting expected behaviour.
jsheard
It could be worse, the main alternative is something like Cloudflare's death-by-a-thousand-CAPTCHAs when your browser settings or IP address put you on the wrong side of their bot detection heuristics. Anubis at least doesn't require any interaction to pass.
Unfortunately nobody has a good answer for how to deal with abusive users without catching well behaved but deliberately anonymous users in the crossfire, so it's just about finding the least bad solution for them.
lousken
I hated everyone who enabled the Cloudflare validation thing on their website, because I was blocked for months (I got stuck on that captcha that kept refusing my Firefox). Eventually they fixed it but it was really annoying.
goku12
The CF verification page still appears far too often in some geographic regions. It's such an irritant that I just close the tab and leave when I see it. It's so bad that seeing the Anubis page instead is actually a big relief! I consider the CF verification and its enablers a shameless attack on the open web - a solution nearly as bad as the problem it tries to solve.
throwaway562if1
I am still unable to pass CF validation on my desktop (sent to infinite captcha loop hell). Nowadays I just don't bother with any website that uses it.
qiu3344
I'd even argue that Anubis is universally superior in this domain.
A sufficiently advanced web scraper can build a statistical model of fingerprint payloads that are categorized by CF as legit and change their proxy on demand.
The only person who will end up blocked is the regular user.
There is also a huge market of proprietary anti-bot solvers, not to mention services that charge you per captcha-solution. Usually it's just someone who managed to crack the captcha and is generating the solutions automatically, since the response time is usually a few hundred milliseconds.
This is a problem with every commercial Anti-bot/captcha solution and not just CF, but also AWS WAF, Akamai, etc.
xena
The pro gamer move is to use risk calculation as a means of determining when to throw a challenge, not when to deny access :)
trod1234
> Unfortunately nobody has a good answer for how to deal with abusive users without catching well behaved but deliberately anonymous users in the crossfire...
Uhh, that's not right. There is a good answer, but no turnkey solution yet.
The answer is making each request cost a certain amount of something from the person, and increased load by that person comes with increased cost on that person.
halosghost
Note that this is actually one of the things Anubis does. That's what the proof-of-work system is; it just operates across the full load rather than being targeted at a specific user's load. But, to the GP's point, that's the best option while still allowing anonymous users.
All the best,
-HG
Spivak
I know that you mean a system that transfers money but you are also describing Anubis because PoW is literally to make accessing the site cost more and scale that cost proportional to the load.
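And because expected attempts grow as roughly 2^difficulty, the cost can be cranked up much faster than the load that triggers it. A hypothetical policy (numbers invented purely for illustration):

    package pow

    // Illustrative only: pick a PoW difficulty (in leading zero bits) from the
    // observed request rate. Each extra bit roughly doubles the expected hashing
    // work per request, so the cost curve is exponential while load is linear.
    func difficultyFor(requestsPerMinute int) int {
        switch {
        case requestsPerMinute < 60:
            return 16 // ~65k expected hashes: unnoticeable for one human
        case requestsPerMinute < 600:
            return 20 // ~1M expected hashes
        default:
            return 24 // ~16M expected hashes: expensive at bot volume
        }
    }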
tpxl
This makes discussions such as this have a negative ROI for an average commenter. Spamming scam and grift links still has a positive ROI, albeit a slightly smaller one.
I use a certain online forum which sometimes makes users wait 60 or 900 seconds before they can post. It has prevented me from making contributions multiple times.
gruez
>It could be worse, the main alternative is something like Cloudflares death-by-a-thousand-CAPTCHAs when your browser settings or IP address put you on the wrong side of their bot detection heuristics.
Cloudflare's checkbox challenge is probably one of the better challenge systems. Other security systems are far worse, requiring either something to be solved or a more annoying action (e.g. holding a button for 5 seconds).
Dylan16807
Checking a box is fine when it lets you through.
The problem is when cloudflare doesn't let you through.
notpushkin
Yeah. A “drag this puzzle piece” captcha style is also relatively easy, but things like reCaptcha or hCaptcha are just infuriating.
For pure POW (no fingerprinting), mCaptcha is a nice drop-in replacement you can self-host: https://mcaptcha.org/
bscphil
It's even worse if you block cookies outright. Every time I hit a new Anubis site I scream in my head because it just spins endlessly and stupidly until you enable cookies, without even a warning. Absolutely terrible user experience; I wouldn't put any version of this in front of a corporate / professional site.
Dylan16807
Blocking cookies completely is just asking for a worse method of tracking sessions. It's fine for a site to be aware of visits. As someone who argues that sites should work without javascript, blocking all cookies strikes me as doing things wrong.
bscphil
A huge proportion of sites (a) use cookies, (b) don't need cookies. You can easily use extensions to enable cookies for the sites that need them, while leaving others disabled. Obviously some sites are going to do shitty things to track you, but they'd probably be doing that anyway.
The issue I'm talking about is specifically how frustrating it is to hit yet another site that has switched to Anubis recently and having to enable cookies for it.
goku12
I will take Anubis any day over its alternative - the cloudflare verification page. I just close the tab as soon as I see it.
Spivak
Browsers that have cookies and/or JS disabled have been getting broken experiences for well over a decade, it's hard to take this criticism seriously when professional sites are the most likely to break in this situation.
jezek2
If you want to browse the web without cookies (and without JS, in a usable manner) you may try FixProxy[1]. It has direct support for Anubis in the development version.
imcritic
For me the biggest issue with the Arch Wiki adding Anubis is that it doesn't let me in when I open it on mobile. I am using Cromite: it doesn't support extensions, but has some ABP-style blocking built in.
ashkulz
I too use Temporary Containers, and my solution is to use a named container and associate that site with the container.
selfhoster11
I am low-key shocked that this has become a thing on Arch Wiki, of all places. And that's just to access the main page, not even for any searches. Arch Wiki is the place where you often go when your system is completely broken, sometimes to the extent that some clever proof of work system that relies on JS and whatever will fail. I'm sure they didn't decide this lightly, but come on.
jillyboel
> One thing that I've noticed recently with the Arch Wiki adding Anubis
Is that why it now shows that annoying slow to load prompt before giving me the content I searched for?
esseph
Would you like to propose an alternative solution that meets their needs and on their budget?
goku12
Anubis has a 'slow' and a 'fast' mode [1], with fast mode selected by default. It used to be so fast that I rarely even got time to read anything on the page. I don't know why it's slower now - it could be that they're using the slower algorithm, or the algorithm itself may have become slower. Either way, it shouldn't be too hard to modify it with a different algorithm or make the required work a parameter. This of course has the disadvantage of making it easier for the scrapers to get through.
[1] https://anubis.techaro.lol/docs/admin/algorithm-selection
jillyboel
a static cache for anyone not logged in, and only doing this check when you are authenticated which gives access to editing pages?
edit: Because HN is throwing "you're posting too fast" errors again:
> That falls short of the "meets their needs" test. Authenticated users already have a check (i.e., the auth process). Anubis is to stop/limit bots from reading content.
Arch Wiki is a high-value target for scraping, so they'll just solve the Anubis challenge once a week. It's not going to stop them.
butz
As usual, there is a negative side to such protection: I was trying to download some raw files from a git repository and instead of the data got a bunch of HTML. After a quick look it turned out to be the Anubis HTML page. Another issue was with broken links to issue tickets on the main page, where Anubis was asking a wrapper script to solve some hashes. Lesson here: after deploying Anubis, please carefully check the impact. There might be some unexpected issues.
eadmund
> I was trying to download some raw files from a git repository and instead of the data got a bunch of HTML. After a quick look it turned out to be the Anubis HTML page.
Yup. Anubis breaks the web. And it requires JavaScript, which also breaks the web. It’s a disaster.
ziddoap
I feel like it's much more reasonable to blame the companies & people that are making it a necessity to have some sort of protection like Anubis for ruining the web (over-aggressive scrapers, bot farms, etc.), rather than blaming Anubis.
lytedev
I'm using a nearly default configuration which seems to not have this problem. curl still works and so do downloads.
I guess if your cookie expired at just the right time that could cause this issue, and that might be worth thinking about, but I think "breaks the web" is overstating it a bit, at least for the default configuration.
vachina
It’s not Anubis that saved your website; literally any sort of captcha, or some dumb modal with a button to click through to the real contents, would’ve worked.
These crawlers are designed to work on 99% of hosts; if you tweak your site just slightly out of spec, these bots won't know what to do.
boreq
So what you are saying is that it's anubis that saved their website.
forty
Anubis is nice, but could we have a PoW system integrated into protocols (HTTP or TLS, I'm not sure) so we don't have to require JS?
fc417fc802
Protocol is the wrong level. Integrate with the browser. Add a PoW challenge header to the HTTP response, and receive a PoW solution header with the next request.
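A sketch of what that round trip could look like from the client side (the header names and format here are invented purely for illustration, this isn't any existing spec):

    package powclient

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "net/http"
        "strings"
    )

    // Imagined flow: the server answers 401 plus "PoW-Challenge: <prefix>" at a
    // fixed difficulty; the client retries with "PoW-Nonce: <nonce>" once
    // sha256(prefix+nonce) starts with enough zero hex digits.
    func solve(prefix string, difficulty int) string {
        target := strings.Repeat("0", difficulty)
        for i := 0; ; i++ {
            nonce := fmt.Sprintf("%d", i)
            sum := sha256.Sum256([]byte(prefix + nonce))
            if strings.HasPrefix(hex.EncodeToString(sum[:]), target) {
                return nonce
            }
        }
    }

    func FetchWithPoW(client *http.Client, url string) (*http.Response, error) {
        resp, err := client.Get(url)
        if err != nil || resp.StatusCode != http.StatusUnauthorized {
            return resp, err
        }
        prefix := resp.Header.Get("PoW-Challenge")
        resp.Body.Close()

        req, _ := http.NewRequest("GET", url, nil)
        req.Header.Set("PoW-Nonce", solve(prefix, 4))
        return client.Do(req)
    }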
forty
I think you've just described a protocol ;)
Yes it could be in higher layer than what I suggested indeed, on top of HTTP sounds good to me.
My rule of thumb is that it should work with curl (which makes it not anti-bot, but just anti-scraper & anti-DDoS, which is what I have a problem with).
fc417fc802
Ah yeah sloppy wording on my part. I think it should ideally be its own protocol built on top as opposed to integrated into an existing one. Integration is good but mandatory complexity and tight coupling not so much.
tpool
It's so bad we're going to the old gods for help now. :)
Hamuko
I’d sic Yogg-Saron on these scrapers if I could.
ranger_danger
Seems like rate-limiting expensive pages would be much easier and less invasive. Also caching...
And I would argue Anubis does nothing to stop real DDoS attacks that just indiscriminately blast sites with tens of gbps of traffic at once from many different IPs.
PaulDavisThe1st
In the last two months, ardour.org's instance of fail2ban has blocked more than 1.2M distinct IP addresses that were trawling our git repo using http instead of just fetching the goddam repository.
We shut down the website/http frontend to our git repo. There are still 20k distinct IP addresses per day hitting up a site that issues NOTHING but 404 errors.
felsqualle
Hi, author here.
Caching is already enabled, but this doesn’t work for the highly dynamic parts of the site like version history and looking for recent changes.
And yes, it doesn’t work for volumetric attacks with tens of gbps. At this point I don’t think it is a targeted attack, probably a crawler gone really wild. But for this pattern, it simply works.
GoblinSlayer
There's a theory they didn't get through because it's a new protection method and the bots don't run JavaScript. It could be as simple as <script>document.cookie="letmein=1";location.reload();</script>
Ocha
Rate limit according to what? It was 35k residential IPs. Rate limit would end up keeping real users out.
linsomniac
Rate limit according to destination URL (the expensive ones), not source IP.
If you have expensive URLs that you can't serve more than, say 3 of at a time, or 100 of per minute, NOT rate limiting them will end up keeping real users out simply because of the lack of resources.
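A rough sketch of that as a Go handler wrapper (paths and numbers are made up; uses golang.org/x/time/rate):

    package ratelimit

    import (
        "net/http"
        "strings"

        "golang.org/x/time/rate"
    )

    // One shared budget for the known-expensive endpoints, regardless of source
    // IP: roughly 100 requests per minute with a small burst.
    var expensive = rate.NewLimiter(rate.Limit(100.0/60.0), 3)

    func Middleware(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Hypothetical expensive paths, e.g. git blame/log style pages.
            if strings.HasPrefix(r.URL.Path, "/blame/") || strings.HasPrefix(r.URL.Path, "/log/") {
                if !expensive.Allow() {
                    http.Error(w, "busy, try again later", http.StatusTooManyRequests)
                    return
                }
            }
            next.ServeHTTP(w, r)
        })
    }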
danielheath
Right - but if you have, say, 1000 real user requests for those endpoints daily, and thirty million bot requests for those endpoints, the practical upshot of this approach is that none of the real users get to access that endpoint.
pluto_modadic
this feels like something /you can do on your servers/, and that other folks with resource constraints (like time, budget, or the hardware they have) find anubis valuable.
bastawhiz
Rate limiting does nothing when your adversary has hundreds or even thousands of IPs. It's trivial to pay for residential proxies.
supportengineer
Why aren't there any authorities going after this problem?
danielheath
Most of the "free" analytics tools for android/iOS are "funded" by running residential / "real user" proxies.
They wait until your phone is on wifi / battery, then make requests on behalf of whoever has paid the analytics firm for access to 'their' residential IP pool.
marginalia_nu
These residential botnets are pretty difficult to shut down, and often operated out of countries with poor diplomatic relations with the west.
okanat
1. Which authorities?
2. The US is currently broken, and they are not going to punish the only, albeit unsustainable, growth in their economy.
3. The Internet is global. Even if the EU wants to regulate, will they charge big tech leaders and companies with information-tech crimes that pierce the corporate veil? That would ensure that nobody invests in unsustainable AI growth in the EU. However, fucking up the economy and the planet is how the world operates now, and without infinite growth you lose buying power for everything. So everybody else will continue to do fuckery.
4. What can a regulating body do? Force disconnects for large swaths of the internet? Then the Internet is no more.
o11c
Because in a "free" nation, that means "free to run malware" not "free from malware".
By far most malware is legal and a portion of its income is used to fund election campaigns.
eikenberry
They could be doing it legally.
toast0
> And I would argue Anubis does nothing to stop real DDoS attacks that just indiscriminately blast sites with tens of gbps of traffic at once from many different IPs.
Volumetric DDoS and application layer DDoS are both real, but volumetric DDoS doesn't have an opportunity for cute pictures. You really just need a big enough inbound connection and then typically drop inbound UDP and/or IP fragments and turn off http/3. If you're lucky, you can convince your upstream to filter out UDP for you, which gives you more effective bandwidth.
lousken
Yes, have everything static (if you can't, use caching), optimize images, rate limit anything you have to generate dynamically
anonfordays
This (Anubis) "RiiR" of haproxy-protection is easily bypassed: https://addons.mozilla.org/en-US/firefox/addon/anubis-bypass...
Tiberium
From looking at some of the rules like https://github.com/TecharoHQ/anubis/blob/main/data/bots/head... it seems that Anubis explicitly punishes bots that are "honest" about their user agent - I might be missing something, but isn't this just pressuring anyone who does anything bot-related to just lie about their user agent?
A flat-out user-agent blacklist seems really weird; it's going to reward the companies that are more unethical in their scraping practices than the ones who report their user agent truthfully. From the repo it also seems like all the AI crawlers are DENY, which, again, would reward AI companies that don't disclose their identity in the user agent.
userbinator
User-agent header is basically useless at this point. It's trivial to set it to whatever you want, and all it does is help the browser incumbents.
Tiberium
You're right, that's why I'm questioning the reason Anubis implemented it this way. Lots of big AI companies are at least honest about their crawlers and have proper user agents (which Anubis outright blocks). So "unethical" companies who change the user-agent to something normal will have an advantage with the way Anubis is currently set up by default.
I'm aware that end users can modify the rules, but in reality most will just use the defaults.
xena
Shitty heuristics buy time to gather data and make better heuristics.
MillironX
Despite broadcasting their user agents properly, the AI companies ignore robots.txt and still waste my server resources. So yeah, the dishonest botnets will have an advantage, but I don't give swindlers a pass just because they rob me to my face. I'm okay with defaults that punish all bots.
EugeneOZ
The point is to reduce the server load produced by bots.
Honest AI scrapers use the information to learn, which increases their value, and the owner of the scraped server has to pay for it, getting nothing back — there's nothing honest about it. Search engines give you visitors, AI spiders only take your money.
jeroenhd
From what I can tell from the author's Mastodon, it seems like they're working on a fingerprinting solution to catch these fake bots in an upcoming version based on some passively observed behaviour.
And, of course, the link just shows the default behaviour. Website admins can change them to their needs.
I'm sure there will be workarounds (like that version of curl that has its HTTP stack replaced by Chrome's) but things are ever moving forward.
wzdd
The point of anubis is to make scraping unprofitable by forcing bots to solve a sha256-based proof-of-work captcha, so another point of view is that the explicit denylist is actually saving those bot authors time and/or money.
rubyn00bie
Sort of tangential, but I’m surprised folks are still using Apache all these years later. Is there a certain language that makes it better than Nginx? Or is it just the ease of configuration that still pulls people? I switched to Nginx I don’t even know how many years ago and never looked back, just more or less wondering if I should.
mrweasel
Apache does everything, and it's fairly easy to configure. If there's something you want to do, Apache mostly knows how, or has a module for it.
If you run a fleet of servers, all doing different things, Apache is a good choice because all the various uses are going to be supported. It might not be the best choice in each individual case, but it is the one that works in all of them.
I don't know why some are so quick to write off Apache. Is it just because it's old? It's still something like the second most used webserver in the world.
anotherevan
Equally tangential, but I switched from Nginx to Caddy a few years ago and never looked back.
ahofmann
I've been using nginx for what feels like decades, and occasionally I miss the ability to use .htaccess files. They are a very nice way to configure stuff on a server.
felsqualle
I use it because it’s the one I’m most familiar with. I've been using it for 15 years and counting. And since it does the job for me, I never had the urge to look into alternatives.
forinti
Apache has so much functionality. Why wouldn't anybody use it?
I started using it when Oracle's Webcache wouldn't support newer certificates and I had to keep Oracle Portal running. I could edit the incoming certificate (I had to snip the header and the footer) and put it in a specific header for Portal to accept it.
justusthane
I don’t really understand why this solved this particular problem. The post says:
> As an attacker with stupid bots, you’ll never get through. As an attacker with clever bots, you’ll end up exhausting your own resources.
But the attack was clearly from a botnet, so the attacker isn’t paying for the resources consumed. Why don’t the zombie machines just spend the extra couple seconds to solve the PoW (at which point, they would apparently be exempt for a week and would be able to continue the attack)? Is it just that these particular bots were too dumb?
judge2020
Anubis is new, so there may not have been foresight to implement a solver to get around it. Also, I wouldn't be surprised if the botnet actor is using vended software rather than something they wrote themselves, in which case they couldn't quickly implement a solver to continue their attack.
maeln
Most DDoS bots don't bother running JS. A lot of botnets don't even really allow it, because the malware they run on the infected target only allows for basic stuff like simple HTTP requests. This is why they often do some reconnaissance to find pages that take a long time to load, and therefore are probably using a lot of I/O and/or CPU time on the target server. Then they just spam the request. Huge botnets don't even bother with all that; they just kill you with bandwidth.
cbarrick
I think the explanation "you’ll end up exhausting your own resources" is wrong for this case. I think you are correct that the bots are simply too dumb.
The likely explanation is that the bots are just curling the expensive URLs without a proper JavaScript engine to solve the challenge.
E.g. if I hack a bunch of routers around the world to act as my botnet, I probably wouldn't have enough storage to install Chrome or Selenium. The lightweight solution is just to use curl/wget (which may be pre-installed) or netcat/telnet.
Sadly it's hard to tell if this is an actual DDoS attack or scrapers descending on the site. It all looks very similar.
The search engines always seemed happy to announce that they are in fact GoogleBot/BingBot/Yahoo/whatever and frequently provided you with their expected IP ranges. The modern companies, mostly AI companies, seem to be more interested in flying under the radar and have less respect for the internet infrastructure as a whole. So we're now at a point where I can't tell if it's an ill-willed DDoS attack or just shitty AI startup number 7 reloading training data.