
OpenFreeMap survived 100k requests per second

colinbartlett

Thank you for this breakdown and for this level of transparency. We have been thinking of moving from MapTiler to OpenFreeMap for StatusGator's outage maps.

hyperknot

Feel free to migrate. If you ever worry about High Availability, self-hosting is always an option. But I'm working hard on making the public instance as reliable as possible.

ch33zer

Since the limit you ran into was the number of open files, could you just raise that limit? I get blocking the spammy traffic, but theoretically could you have handled more if that limit was upped?

hyperknot

I've just written my question to the nginx community forum, after a lengthy debugging session with multiple LLMs. Right now, I believe it was the combination of multi_accept + open_file_cache > worker_rlimit_nofile.
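A minimal sketch of the suspected combination (directive values here are illustrative, not the actual OpenFreeMap config):

```nginx
# Illustrative sketch of the suspected interaction, not the real config.
# With multi_accept on, each worker accepts every pending connection per
# wake-up, and open_file_cache keeps descriptors open after requests
# finish; if the cache's max exceeds worker_rlimit_nofile, a worker can
# exhaust its file descriptors under a traffic spike.
worker_rlimit_nofile 65536;

events {
    multi_accept on;
}

http {
    open_file_cache max=100000 inactive=20s;  # max > worker_rlimit_nofile
}
```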

https://community.nginx.org/t/too-many-open-files-at-1000-re...

Also, the servers were doing 200 Mbps, so I couldn't have kept up _much_ longer, no matter the limits.

ndriscoll

One thing that might work for you is to actually make the empty tile file, and hard link it everywhere it needs to be. Then you don't need to special case it at runtime, but instead at generation time.
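The idea can be sketched in a few shell commands (paths and tile names below are made up for illustration; OpenFreeMap's real layout will differ):

```shell
# Create one real empty-tile file, then hard-link it into every path
# that would otherwise need a runtime special case.
mkdir -p tiles/shared tiles/14/8192
: > tiles/shared/empty.pbf                           # the single empty-tile file
ln -f tiles/shared/empty.pbf tiles/14/8192/4096.pbf  # hard link: no extra disk space
# Both paths now resolve to the same inode, so nginx serves the empty
# tile like any other static file.
ls -i tiles/shared/empty.pbf tiles/14/8192/4096.pbf
```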

rtaylorgarlock

Is it always/only 'laziness' (derogatory, I know) when caching isn't implemented by a site like wplace.live? Why wouldn't they save OpenFreeMap all the traffic, when a caching server on their side could presumably serve tiles almost as fast as, or faster than, OpenFreeMap?

VladVladikoff

I actually have a direct answer for this: priorities. I run a fairly popular auction website, and we get map tiles via Stadia Maps. We spend about $80/month on this service for our volume. We could definitely get this cost down to a lower tier by caching the tiles and serving them from our proxy. However, we simply haven't had the time to work on this yet, as there is always some other task with higher priority.

toast0

Why should they when openfreemap is behind a CDN and their home page says things like:

> Using our public instance is completely free: there are no limits on the number of map views or requests. There’s no registration, no user database, no API keys, and no cookies. We aim to cover the running costs of our public instance through donations.

> Is commercial usage allowed?

> Yes.

IMHO, reading this and then just using it makes a lot of sense. Yeah, you could put a cache in front of their CDN, but why, when they said it's all good, no limits, for free?

I might wonder a bit, if I knew the bandwidth it was using, but I might be busy with other stuff if my site went unexpectedly viral.

markerz

It looks like a fun website, not a for-profit website. The expectations and focus of fun websites are more about just getting it working than handling the scale. It sounds like their user base exploded overnight, doubling every 14 hours or so. It also sounds like it's either a solo dev or a small group, based on the maintainer's wording.

hyperknot

We are talking about an insane amount of data here. It was 56 Gbit/s (or 56 x 1 Gbit servers 100% saturated!). This is not something a "caching server" could handle. We are talking on the order of CDN networks, like Cloudflare, to be able to handle this.
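A back-of-envelope check of the figures in the thread (both numbers are approximate, taken from the post and this comment):

```python
# 56 Gbit/s at roughly 100k requests/s works out to about 70 KB
# per response on average.
bits_per_second = 56e9
requests_per_second = 100_000
avg_response_bytes = bits_per_second / requests_per_second / 8
print(f"{avg_response_bytes / 1000:.0f} KB per response")  # prints "70 KB per response"
```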

ndriscoll

I'd be somewhat surprised if nginx couldn't saturate a 10Gbit link with an n150 serving static files, so I'd expect 6x $200 minipcs to handle it. I'd think the expensive part would be the hosting/connection.

wyager

> or 56 x 1 Gbit servers 100% saturated

Presumably a caching server would be 10GbE, 40GbE, or 100GbE

56Gbit/sec of pre-generated data is definitely something that you can handle from 1 or 2 decent servers, assuming each request doesn't generate a huge number of random disk reads or something


eggbrain

Limiting by referrer seems strange — if you know a normal user makes 10-20 requests (let's assume per minute), can't you just rate-limit to 100 requests per minute per IP (5x the average load) and still block the majority of these cases?

Or, if it’s just a few bad actors, block based on JA4/JA3 fingerprint?

hyperknot

What if one user really wants to browse around the world and explore the map. I remember spending half an hour in Google Earth desktop, just exploring around interesting places.

I think referer-based limits are better; this way I can ask heavy users to please choose self-hosting instead of the public instance.
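A sketch of what referer-keyed limiting can look like in nginx (zone names and rates are invented for illustration): bucketing requests by the embedding site rather than by visitor IP means one heavy site can be throttled, or asked to self-host, without affecting anyone else.

```nginx
# Hypothetical referer-keyed limit: extract the embedding site's host
# from the Referer header and rate-limit per site, not per visitor.
map $http_referer $referer_host {
    default            "unknown";
    ~^https?://([^/]+) $1;
}

limit_req_zone $referer_host zone=per_site:10m rate=1000r/s;

server {
    location /tiles/ {
        limit_req zone=per_site burst=2000;
    }
}
```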

jspiner

The cache hit rate is amazing. Is there something you implemented specifically for this?

hyperknot

Yes, I designed the whole path structure / location blocks with caching in mind. Here is the generated nginx.conf, if you are interested:

https://github.com/hyperknot/openfreemap/blob/main/docs/asse...
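As a rough illustration of the principle (this is a generic sketch, not the linked nginx.conf): versioning tile URLs by dataset date means a given URL's content never changes, so browsers and the CDN can cache it indefinitely.

```nginx
# Generic sketch of a cache-friendly tile location block. The path
# layout and header values here are hypothetical.
location ~ ^/planet/(?<version>\d{8})/(?<z>\d+)/(?<x>\d+)/(?<y>\d+)\.pbf$ {
    root /data/tiles;
    add_header Cache-Control "public, max-age=31536000, immutable";
    add_header Access-Control-Allow-Origin "*";
    try_files /$version/$z/$x/$y.pbf =404;
}
```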

LoganDark

> I believe what is happening is that those images are being drawn by some script-kiddies.

Oh absolutely not. I've seen so many autistic people literally just nolifing and also collaborating on huge arts on wplace. It is absolutely not just script kiddies.

> 3 billion requests / 2 million users is an average of 1,500 req/user. A normal user might make 10-20 requests when loading a map, so these are extremely high, scripted use cases.

I don't know about that either. Users don't just load a map, they look all around the place to search for and see a bunch of the art others have made. I don't know how many requests is typical for "exploring a map for hours on end" but I imagine a lot of people are doing just that.

I wouldn't completely discount automation but these usage patterns seem by far not impossible. Especially since wplace didn't expect sudden popularity so they may not have optimized their traffic patterns as much as they could have.

Karliss

Just scrolled around a little bit for 2-3 minutes with the network monitor open. That already resulted in 500 requests and 5 MB transferred (after filtering by vector tile data). Not sure how many of those were cached by the browser with no actual requests, cached by the browser exchanging only headers, or cached by Cloudflare. I'm guessing the typical 10-20 requests/user case is for an embedded map fragment, like those commonly found on contact pages, where most users don't scroll at all, or at most zoom out slightly to better see the rest of the city.

nemomarx

There are some user scripts to overlay templates on the map and coordinate working together, but I can't imagine that increases the load much. What might is that wplace has been struggling under the load, and you have to refresh to see your pixels placed or any changes; that could be causing more calls per hour.

v5v3

The article mentions Cloudflare, so how much of this was cached by them?

do_anh_tu

Did you even read the article?

jwilk

From the HN Guidelines <https://news.ycombinator.com/newsguidelines.html>:

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

RandomBacon

That guideline is decent I guess.

I am disappointed that they edited another guideline for the worse:

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.

It used to just say, don't complain about voting.

If vote counts are so taboo, why do they even show us the number, or user karma (and have a top list)?

keketi

Are you new? Nobody actually reads the articles.

LorenDB

False. I almost never upvote an article without reading it, and half of those upvotes are because I already read something similar recently that gave me the same information.

willsmith72

so 96% availability = "survived" now?

But interesting write-up. If I were a consumer of OpenFreeMap, I'd be concerned that such an availability drop was only detected through user reports.

timmg

96% during a unique event. I think you'd typically consider the long term in a stat like that.

Assuming it was close to 100% the rest of the year, that works out to 99.97% over 12 months.
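A rough reconstruction of that arithmetic. The incident's length isn't stated in the thread; an assumed ~2.7 days at 96% availability is what reproduces the quoted yearly figure.

```python
# Assumed incident length (not from the thread) that yields ~99.97%/year.
event_days = 2.7
downtime_hours = (1 - 0.96) * event_days * 24
yearly = 1 - downtime_hours / (365 * 24)
print(f"{yearly:.2%}")  # prints "99.97%"
```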

ndriscoll

If I were a consumer of a free service from someone who will not take your money to offer support or an SLA (i.e. is not trying to run a business), I would assume there's little to no monitoring at all.

fnord77

Sounds like they survived 1,000 reqs/sec and the Cloudflare CDN survived 99,000 reqs/sec.

charcircuit

>Nice idea, interesting project, next time please contact me before.

It's impossible to predict that one's project may go viral.

>As a single user, you broke the service for everyone.

Or you did, by not having a high enough fd limit. Blaming sites for using it too much when you advertise that there are no limits is not cool. It's not like wplace themselves were maliciously hammering the API.

010101010101

Do you expect him just to let the service remain broken or to scale up to infinite cost to himself on this volunteer project? He worked with the project author to find a solution that works for both and does not degrade service for every other user, under literally no obligation to do anything at all. This isn’t Anthropic deciding to throttle users paying hundreds of dollars a month for a subscription. Constructive criticism is one thing, but entitlement to something run by an individual volunteer for free is absurd.

charcircuit

We are talking about hosting a fixed amount of static files. This should be a solved problem. This is nothing like running large AI models for people.

010101010101

The nature of the service is completely irrelevant.

columb

You are so entitled... Because of you, most nice things come with "no limits, but...". It's not cool to stress-test someone's infrastructure. Not cool. The author of this post is more than understanding; he tried to fix it and offered a solution even after blocking them. On a free service.

Show us what you have done.

charcircuit

>You are so entitled

That's how agreements work. If someone says they will sell a hamburger for $5, and another person pays $5 for a hamburger, then they are entitled to a hamburger.

>On a free service.

It's up to the owner to price the service. Being overwhelmed by traffic when there are no limits is not a problem limited only to free services.

eszed

Sure, and if you bulk-order 5k hamburgers the restaurant will honor the price, but they'll also tell you "we're going to need some notice to handle that much product". Perfect analogy, really. This guy handled the situation perfectly, imo.

perching_aix

> Do you offer support and SLA guarantees?

> At the moment, I don’t offer SLA guarantees or personalized support.

From the website.

rikafurude21

The funny part is that his service didn't break: Cloudflare's cache caught 99% of the requests. He just wanted to feel powerful and break the latest viral trend.

feverzsj

So, OFM was hit by another Million Dollar Homepage for kids.