No as a Service
47 comments · April 30, 2025
varun_ch
lgl
Bug report: when the server is overloaded, the No's are no longer random :)
NotMichaelBay
It's so elegant. Even in failure, it's still operational.
riquito
Love it, it's brilliant, but I think the rate-limiting logic is not doing what the author really wants: it actually costs more CPU to detect and produce the error than it does to return the regular response. (Then my mind wanders to how to over-optimize this thing, but that's another story :-D)
hotheadhacker
Rate limiting has been removed
kenrick95
Classic Hacker News hug of death
xnorswap
It looks like it's limited to 10 requests per minute; it's less of a hug and more of a gentle brush past.
It's documented as "Per IP", but I'm willing to bet either that the documentation is wrong, or that it's picking up the IP address of the reverse proxy (or whatever else is in front of the application server) rather than the originator IP.
Why do I think that? Well these headers:
x-powered-by Express
x-ratelimit-limit 10
x-ratelimit-remaining 0
Which means it's not being rate-limited by Cloudflare; it's Express doing the rate limiting. And I haven't yet made 10 requests, so unless it's very bad at picking up my IP, it's picking up the Cloudflare IP instead.
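If that guess is right, the core issue is that behind a proxy the socket's remote address is the proxy itself, and the real client IP only survives in the forwarded headers (Express's `trust proxy` setting exists for exactly this). A minimal sketch of the logic, assuming a standard `X-Forwarded-For` chain (the helper name is mine, not the project's):

```javascript
// Behind Cloudflare, the socket address is the proxy; the original client
// is the left-most entry of X-Forwarded-For ("client, proxy1, proxy2, ...").
function clientIp(headers, socketAddr) {
  const xff = headers['x-forwarded-for'];
  if (!xff) return socketAddr; // no proxy in front: socket address is the client
  return xff.split(',')[0].trim();
}

console.log(clientIp({ 'x-forwarded-for': '203.0.113.7, 172.16.0.1' }, '172.16.0.1'));
// → 203.0.113.7 (the real visitor)
console.log(clientIp({}, '198.51.100.2'));
// → 198.51.100.2 (direct connection)
```

Without something like this, every visitor looks like the proxy's IP and everyone shares one rate-limit bucket, which would explain hitting the limit before making 10 requests.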
egberts1
Probably all those cookies tipped and triggered the connection rate limiter.
deanputney
Not sure why, but reasons.json is mostly duplicates (as many as 50!) of the same 25 responses: https://gist.github.com/deanputney/4143ca30f7823ce53d894d3ed...
It'd be easier to add new ones if they were in there a single time each. Maybe the duplication is meant to handle distribution?
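If the duplication really is a crude weighting scheme, the file could be collapsed into unique entries with explicit weights, which keeps the distribution but makes additions one-line changes. A sketch, assuming reasons.json is a flat array of strings (the sample data below is illustrative, not the real file):

```javascript
// Collapse a duplicated list into { text, weight } entries: each reason
// appears once, and its duplicate count becomes its sampling weight.
const reasons = [
  "No.", "No.", "No.",
  "Computer says no.",
  "I truly value our connection, and I hope my no doesn't change that.",
  "I truly value our connection, and I hope my no doesn't change that.",
];

const counts = new Map();
for (const r of reasons) counts.set(r, (counts.get(r) || 0) + 1);

const weighted = [...counts].map(([text, weight]) => ({ text, weight }));
console.log(weighted);
// e.g. [{ text: "No.", weight: 3 }, ...]
```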
finnh
ah, yes, the "memory is no object" way of obtaining a weighted distribution. If you need that sweet sweet O(1) selection time, maybe check out the Alias Method :)
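For anyone curious, a sketch of that alias method (Vose's variant): O(n) table build, then each draw is one array index plus one coin flip, regardless of how skewed the weights are. Plain Node, since the service itself is JavaScript:

```javascript
// Vose's alias method: precompute a table so weighted sampling is O(1).
function buildAlias(weights) {
  const n = weights.length;
  const total = weights.reduce((a, b) => a + b, 0);
  const prob = new Array(n);
  const alias = new Array(n);
  // Scale weights so the average column height is exactly 1
  const scaled = weights.map(w => (w * n) / total);
  const small = [], large = [];
  scaled.forEach((w, i) => (w < 1 ? small : large).push(i));
  while (small.length && large.length) {
    const s = small.pop(), l = large.pop();
    prob[s] = scaled[s];          // s's column is part s, part l
    alias[s] = l;
    scaled[l] += scaled[s] - 1;   // l donated (1 - scaled[s]) to fill s's column
    (scaled[l] < 1 ? small : large).push(l);
  }
  // Leftovers are exactly full columns (up to float error)
  for (const i of [...small, ...large]) { prob[i] = 1; alias[i] = i; }
  return { prob, alias };
}

function sample({ prob, alias }) {
  const i = Math.floor(Math.random() * prob.length); // pick a column uniformly
  return Math.random() < prob[i] ? i : alias[i];     // biased coin within it
}
```

With weights `[1, 1, 2]`, index 2 comes back about half the time, exactly as if it were duplicated twice in a flat array, but stored once.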
justin_oaks
Knowing that there are only 25 responses makes it all the funnier that rate limiting is mentioned.
And you can host the service yourself! Hard pass. I'll read the 25 responses from your gist. Thanks!
ziddoap
Fun idea. I wonder why the rejection messages are repeated so often in the "reasons" file.
"I truly value our connection, and I hope my no doesn't change that." shows up 45 times.
Seems like most of the rejections appear between 30 and 50 times.
mikepurvis
A single large file is also sadness for incorporating suggestions from collaborators as you're always dealing with merge conflicts. Better might be a folder of plain text files, where each can have multiple lines in it, and they're grouped by theme or contributor or something.
spiffyk
A folder of plain text files will be sadness for performance. It's a file with basically line-wise entries, merge conflicts in that will be dead easy to resolve with Git locally. It won't be single-click in GitHub, but not too much of a hassle.
mikepurvis
In fairness, I doubt most of these kinds of meme projects have a maintainer active enough to be willing to conduct local merges, even if it's "dead easy" to do so.
Maybe this is really a request for GitHub to get better/smarter merge tools in the web UI, particularly syntax-aware ones for structured files like JSON and YAML, where the right resolution would be much easier to guess. It could even just present AB and BA as the two concrete options when both changes insert new content at the same point, or read your .gitattributes for supported mergers that could telegraph "I don't care about the order" or "order new list entries alphabetically" or whatever.
Retr0id
It's ~fine for performance if you load them once at service startup. But I agree, merging is also no big deal.
MalbertKerman
There are 25 unique responses in that 1000-line file.
justin_oaks
Once you remove the duplicates that are different only because of the typos in them, yes, that's correct.
hombre_fatal
I made a lot of things like this as a noob and threw them up on github.
As you gain experience, these projects become a testament to how far you've come.
"An http endpoint that returns a random array element" becomes so incredibly trivial that you can't believe you even made a repo for it, and one day you sheepishly delete it.
blahaj
I don't think things have to be impressive to be shown. A funny little idea is all you need, no matter how simple the code. Actually I find exactly that quite neat.
TehCorwiz
I think you'll enjoy this even more: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
thih9
Example responses:
https://raw.githubusercontent.com/hotheadhacker/no-as-a-serv...
choult
Well this is something... someone creating a service off the back of a meme that's been flying around my networks for the past two days...
seabass
{"error":"Too many requests, please try again later."}
a missed opportunity for some humor
readthenotes1
Beats "I have a headache"
richrichardsson
{"error":"Computer says no."}
Retr0id
It could be genuinely useful for testing HTTP clients if it had a wider array of failure modes.
Some ideas:
- All the different HTTP status codes
- expired/invalid TLS cert
- no TLS cipher overlap
- invalid syntax at the TLS and/or HTTP level
- hang/timeout
- endless slowloris-style response
- compression-bomb
- DNS failure (and/or round-robin DNS where some IPs are bad)
- infinite redirect loop
- ipv6-only
- ipv4-only
- Invalid JSON or XML syntax
zikani_03
Not exactly what you are asking for, but reminded me that Toxiproxy[0] exists if you want to test your applications or even HTTP clients against various kinds of failures:
macleginn
A worthy spiritual disciple of the Journal of Universal Rejection (https://www.universalrejection.org/)
anonymousiam
Looks impressive, but out of the 1000 possible responses, only 26 are unique.
svilen_dobrev
nice. Reminds me of BOFH (Bastard Operator From Hell), and of those box calendars with a page per day and some excuse^W^Wtip on each :)
qrush
Oh great, it's Balatro's Wheel of Fortune card as a Service (WoFaaS)
> {"error":"Too many requests, please try again later."}
I guess it still works.