When imperfect systems are good: Bluesky's lossy timelines
310 comments
February 19, 2025 · pornel
ericvolp12
This is probably what we'll end up with in the long run. Things have been fast enough without it (aside from this issue), but there's a lot of low-hanging fruit for Timelines architecture updates. We're spread pretty thin from an engineering-hours standpoint atm, so there's a lot of intense prioritization going on.
Xunjin
Just to be clear, you are a Bluesky engineer, right?
Off-topic: how has Bluesky been dealing with the influx of new users in the aftermath of X's political/legal problems? Did you see an increase in toxicity around the network? And how have you (Bluesky moderation) been dealing with it?
ToucanLoucan
[flagged]
petra
Maybe this would be helpful: http://daslab.seas.harvard.edu/datacalculator/
curious_cat_163
That's insightful. Keep up the good work!
rsynnott
> and later when serving each follower's timeline, fetch the celebrity's posts and merge them into the timeline
I think then you still have the 'weird user who follows hundreds of thousands of people' problem, just at read time instead of write time. It's unclear that this is _better_, though, yeah, caching might help. But if you follow every celeb on Bluesky (and I guarantee you this user exists) you'd be looking at fetching and merging _thousands_ of timelines (again, I suppose you could just throw up your hands and say "not doing that", and just skip most or all of the celebs for problem users).
Given the nature of the service, making read predictably cheap and writes potentially expensive (which seems to be the way they've gone) seems like a defensible practice.
fc417fc802
> I suppose you could just throw up your hands and say "not doing that", and just skip most or all of the celebs for problem users
Random sampling? It's not as though the user needs thousands of posts returned for a single fetch. Scrolling down and seeing some stuff that's not in chronological order seems like an acceptable tradeoff.
christkv
You might mix the approaches based on some cut off point
rubslopes
This problem is discussed in the beginning of the Designing Data-Intensive Applications book. It's worth a read!
Brystephor
Do you know the name of the problem or strategy used for solving the problem? I'd be interested in looking it up!
I own DDIA, but after a few chapters on how databases work behind the scenes, I begin to fall asleep. I have trouble understanding how to apply the knowledge to my work, but this seems like a useful thing with a clearer application.
bitbckt
Yes, we used the Yahoo! “Feeding Frenzy” paper as the basis for the design of Haplocheirus (the timeline service).
locusofself
Why do they "insert" even non-celebrity posts into each follower's timeline? That is not intuitive to me.
giovannibonetti
To serve a user timeline in single-digit milliseconds, it is not practical for a data store to load each item from a different place. Even with an index, the index itself can be contiguous on disk, but the payload is scattered all over the place if you keep it in a single large table.
Instead, you can drastically speed up performance if you are able to store data for each timeline somewhat contiguously on disk.
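As a toy illustration of that layout (names and structure hypothetical, not Bluesky's actual storage): if timeline entries are keyed by (user_id, reverse_timestamp) in a sorted structure, each user's entries sit next to each other and serving a page becomes one contiguous range scan instead of N point lookups.

```python
import bisect

class TimelineStore:
    """Toy sorted store; keys are (user_id, -timestamp, post_id), so each
    user's timeline entries are adjacent, newest first."""

    def __init__(self):
        self._keys = []  # kept sorted, like a B-tree/LSM would be on disk

    def insert(self, user_id, timestamp, post_id):
        bisect.insort(self._keys, (user_id, -timestamp, post_id))

    def page(self, user_id, limit):
        # One contiguous range scan starting at this user's first key.
        start = bisect.bisect_left(self._keys, (user_id, float("-inf"), ""))
        out = []
        for uid, _neg_ts, post_id in self._keys[start:start + limit]:
            if uid != user_id:
                break
            out.append(post_id)
        return out
```

The same idea is why timelines are often kept in sorted sets or wide rows keyed by owner: the expensive scatter happens once at write time, not on every read.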
wlonkly
Think of it as pre-rendering. Compared with JIT collection, pre-rendering means more work, but it's async, and it means the timeline is ready whenever a user requests it, giving a fast user experience.
(Although I don't understand the "non-celebrity" part of your comment -- the timeline contains (pointers to) posts from whoever someone follows, and doesn't care who those people are.)
locusofself
Perhaps I'm misunderstanding; I thought the actual content of each tweet was being duplicated into every follower's timeline, which sounded extremely wasteful, especially in the case of someone who has 200 million followers.
VWWHFSfQ
At some point they'll end up just doing the Bieber rack [1]. It's when a shard becomes so hot that it just has to be its own thing entirely.
[1] - https://www.themarysue.com/twitter-justin-bieber-servers/
@bluesky devs, don't feel ashamed for doing this. It's exactly how to scale these kinds of extreme cases.
genewitch
I've stood up machines for this before; I didn't know they had a name. And I worked at the mouse company, where my parking spot was two over from J. Bieber's spot.
So now we have the Slashdot effect, the HN hug of death, and... it's not Clarkson, it's the Stephen Fry effect? Maybe it can be cross-discipline - there's a term for when lots of the UK turns their kettles on at the same time.
I should make a blog post to record all the ones I can remember.
k1t
TV Pickup aka the Half Time Kettle Effect.
stavros
Given that BlueSky is funded by Twitter, I'm assuming they know a lot more than us on how Twitter architects systems.
bitbckt
We never actually had a literal “Bieber Box”, but the joke took off.
Hot shards were definitely an issue, though.
Imustaskforhelp
It's so crazy.
Thanks a lot for sharing this link.
ChuckMcM
As a systems enthusiast I enjoy articles like this. It is really easy to get into the mindset of "this must be perfect".
In the Blekko search engine back end we built an index that was 'eventually consistent' which allowed updates to the index to be propagated to the user facing index more quickly, at the expense that two users doing the exact same query would get slightly different results. If they kept doing those same queries they would eventually get the exact same results.
Systems like this bring in a lot of control systems theory because they have the potential to oscillate if there is positive feedback (and in search engines that positive feedback comes from the ranker which is looking at which link you clicked and giving it a higher weight) and it is important that they not go crazy. Some of the most interesting, and most subtle, algorithm work was done keeping that system "critically damped" so that it would converge quickly.
Reading this description of how users' timelines are sharded, with the same sorts of feedback loops (in this case 'likes' or 'reposts'), it sounds like a pretty interesting problem space to explore.
snailmailman
I guess I hadn’t considered that search engines could be reranking pages on the fly as I click them. I’ve been seeing my DuckDuckGo results shuffle around for a while now thinking it’s an awful bug.
Like I click one page, don’t find what I want, and go back thinking “no, I want that other result that was below” and it’s an entirely different page with shuffled results, missing the one that I think might have been good.
PaulHoule
That's connected with a basic usability complaint about current web interfaces, that ads and recommended content aren't stable. You very well might want to engage with an ad after you are done engaging what you wanted to engage with but you might never see it again. Similarly, you might see two or three videos that you want to click on on the side of a YouTube video you're watching but you can only click on one (though if you are thinking ahead you can open these in another tab.)
On top of that immediate frustration, the YouTube style interface here
https://marvelpresentssalo.com/wp-content/uploads/2015/09/id...
collects terrible data for recommendations because, even though it gives them information that you liked the thumbnail for a video, they can't come to any conclusion about whether or not you liked any of the other videos. TikTok, by focusing on one video at a time, collects much better information.
4ggr0
> though if you are thinking ahead you can open these in another tab
or add it to the "Watch Later" playlist :) so you can watch it...later.
cgriswald
I don't use DDG, but in my (very limited, just-now) testing it doesn't seem to shuffle results unless you reload the page in some way. Is it possible your browser is reloading the page when you go back? If so, setting DDG to open links in new tabs might fix this problem.
snailmailman
Interesting. Maybe something in my configuration is affecting it. I’ll have to look into it
numeri
This behavior started happening for me in the last few months. If I click on a result, then go back, I have different search results.
I've found a workaround, though – click back into the DDG search box at the top of the page and hit enter. This then returns the original search results.
gtfiorentino
Hi - I work on search at DuckDuckGo. Do you mind sharing a bit more detail about this issue? What steps would allow us to reproduce what you're seeing?
gopher_space
> Some of the most interesting, and most subtle, algorithm work was done keeping that system "critically damped" so that it would converge quickly.
Looking back at my early work with microservices I'm wondering how much time I would have saved by just manually setting a tongue weight.
dwedge
Similar to how Google images loads lower quality blurred thumbnails towards the bottom of the window at first so that the user thinks they loaded faster
aqueueaqueue
This is less a question of perfection and more one of trade-offs. The laws of physics put a limit on how efficiently you can keep data in NYC and London in perfect sync, so you choose CAP-style trade-offs. There are also $/SLO trade-offs: each 9 costs more money.
I like your example; it's very interesting. If I get to work on such interesting problems (or even hear that someone on my team is working on them), I get happy.
Interesting problems are rare because, like a house, you might talk about brick vs. timber frame once, but you'll talk about cleaning the house every week!
gregw134
Would you be willing to share more about how you guys did click ranking at Blekko? It's an interesting problem.
culi
What became of Blekko?
an_ko
> It was acquired by IBM in March 2015, and the service was discontinued.
— https://en.wikipedia.org/wiki/Blekko
Perhaps GP has a more interesting answer though.
ChuckMcM
That's the correct answer, IBM wanted the crawler mostly to feed Watson. Building a full search engine (crawler, indexer, ranker, API, web application) for the English language was a hell of an accomplishment but by the time Blekko was acquired Google was paying out tens of billions of dollars to people to send them and only them their search queries. For a service that nominally has to live on advertising revenue getting humans to use it was the only way to be net profitable, and you can't spend billions buying traffic and hope to make it back on advertising as the #3 search engine in the English speaking markets.
There are other ways to monetize search than advertising (look at Kagi, for example). Blekko missed that window though. (Too early; Google needed to get as crappy as it is today to make a spam-free search engine desirable.)
genewitch
PID techniques useful?
dsauerbrun
I'm a bit confused.
The lossy timeline solution basically means you skip updating the feed for some people who follow more than a reasonable number of accounts. I get that.
Seeing them get 96% improvements is insane; does that mean they have a ton of users following an unreasonable number of people, or do they just have a very low threshold for "reasonable"? I doubt it's the latter, since that would mean a lot of people missing updates.
How is it possible to get such massive improvements when you're only skipping a presumably small % of people per new post?
EDIT: nvm, I thought about it again. The issue is that a single user following millions of accounts will constantly be written to, which slows down the fan-out service when a celebrity posts, since you're going through many DB pages.
friendzis
When a system gets "overloaded", it typically enters a state of exponential performance degradation, i.e., it performs a self-DDoS.
> Seeing them get 96% improvements is insane
TFA is talking about P99 tail latencies. It does not sound too insane to reduce tail latencies by extraordinary margins. Remember, it's just a reshaping of the latency distribution; in this case, the pathological cases get dropped.
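As a sketch of how such a lossy fan-out might work (the 2,000 cutoff and the exact formula are assumptions for illustration, not Bluesky's published numbers): scale the per-write drop probability by how far past the limit a user's follow count is.

```python
import random

REASONABLE_LIMIT = 2000  # assumed cutoff; the article hedges on the real value

def drop_probability(follow_count):
    """0% at or below the limit; 50% at 2x the limit; 75% at 4x; etc."""
    if follow_count <= REASONABLE_LIMIT:
        return 0.0
    return 1.0 - REASONABLE_LIMIT / follow_count

def should_fan_out(follow_count, rng=random.random):
    # Each write to this user's timeline is independently dropped
    # with probability drop_probability(follow_count).
    return rng() >= drop_probability(follow_count)
```

Under this scheme, well-behaved users see no loss at all; the pathological accounts following tens of thousands absorb nearly all of the dropped writes, which is exactly a reshaping of the tail.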
Beretta_Vexee
> does that mean they have a ton of users following an unreasonable number of people
Look at the accounts of OnlyFans models, crypto influencers, etc. They follow thousands or even tens of thousands of accounts in the hope that we will follow them in return.
mapt
I don't see that accommodating this behavior is prosocial or technically desirable.
Can you think of a use case?
All sorts of bots want this sort of access, but whether there are legitimate reasons to grant it to them on a non-sharded basis is another question since a lot of these queries do not scale resources with O(n) even on a centralized server architecture.
tart-lemonade
Given enough time, you'll end up with a lot of legitimate users who follow a huge number of accounts but rarely interact with more than a handful, similar to how many long-time YouTubers have a very high subscriber:viewer ratio (that is, they have way more subscribers than you would expect given their average view count), and there's nothing inherently suspicious about it. People lose access to their accounts, make new accounts, die, get bored, or otherwise stop watching the content but never bother unsubscribing because the algorithm recognized this and stopped recommending the channel's uploads to them.
Bluesky doesn't have this problem yet because it's so young, so the outsized follow counts are mostly going to be from doomscrollers and outright malicious users, but even if it was exclusively malicious users, there is no perfect algorithm to identify them, much less do so before they start causing performance problems. Under those constraints, it makes sense to limit the potential blast radius and keep the site more usable for everyone.
marksomnian
From TFA:
> Generally, this can be dealt with via policy and moderation to prevent abusive users from causing outsized load on systems, but these processes take time and can be imperfect.
So it’s a case of the engineers accepting that, however hard they try to moderate, these sorts of cases will crop up and they may as well design their infrastructure to handle them.
aloha2436
> does that mean they have a ton of users following an unreasonable number of people
They do, there are groups of users on bluesky who follow inordinate numbers of other accounts to try and get follows back.
citrus1330
They were specifically looking at worst-case performance. P99 means 99th percentile, so they saw 96% improvement on the longest 1% of jobs.
rakoo
OK, I'm curious: since this strategy sacrifices consistency, does anyone have thoughts on something that is not full fan-out on reads or on writes?
Let's imagine something like this: instead of writing to every user's timeline, it is written once for each shard containing at least one follower. This caps the fan-out at write time to hundreds of shards. At read time, getting the content for a given users reads that hot slice and filters actual followers. It definitely has more load but
- the read is still colocated inside the shard, so latency remains low
- for mega-followers the page will not see older entries anyway
There are of course other considerations, but I'm curious about what the load for something like that would look like (and I don't have the data nor infrastructure to test it)
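A rough sketch of that proposal (all names, shard counts, and the hash choice are hypothetical): write once per shard that holds at least one follower, then filter to actual followers inside the reader's own shard.

```python
import zlib
from collections import defaultdict

NUM_SHARDS = 256  # illustrative

def shard_of(user_id):
    return zlib.crc32(user_id.encode()) % NUM_SHARDS

class ShardFanout:
    """Write once per follower-containing shard; filter at read time."""

    def __init__(self):
        self.followers = defaultdict(set)    # author -> set of followers
        self.shard_feed = defaultdict(list)  # (shard, author) -> posts

    def follow(self, follower, author):
        self.followers[author].add(follower)

    def post(self, author, post):
        # Fan-out is capped at one write per shard, not one per follower.
        for s in {shard_of(f) for f in self.followers[author]}:
            self.shard_feed[(s, author)].append(post)

    def timeline(self, user, follows):
        # The read stays colocated inside the user's shard; we merge the
        # slices of the authors this user actually follows.
        s = shard_of(user)
        out = []
        for author in follows:
            if user in self.followers[author]:
                out.extend(self.shard_feed[(s, author)])
        return out
```

The extra read cost is the per-author merge, which is the trade the comment is asking about.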
spoaceman7777
Hmm. Twitter/X appears to do this at quite a low number, as the "Following" tab is incredibly lossy (some users are permanently missing) at only 1,200 followed people.
It's insanely frustrating.
Hopefully you're adjusting the lossiness weighting and cut-off by whether a user is active at any particular time? Because otherwise, applying this rule with the cap set too low makes for a very bad UX in my experience x_x
VWWHFSfQ
> It's _insanely_ frustrating.
> at only 1,200 followed people.
I follow like, 50 people on bluesky. Who is following 1,200 people? What kind of value do you even get out of your feed?
peoplepostphew
1200 people is really nothing, especially if you have a job tangentially related to social media (journalists, for example). It's really simple: you are not the same type of user. You have 50 "acquaintances"; they have 1200 "sources".
The article is talking about people who have following/follower counts in the millions. Those are dozens of writes per second in one feed and a fan-out of potentially millions. Someone following 1200 people, even if every one of them actually posts once a day (most people do not), gets a rate of about 0.014 writes per second.
They should be background noise, irrelevant to the discussion. That level of work is within reasonable expectation. What they're pointing out is that Twitter is aggressively anti-perfectionist for no good technical reason - so there must be a business reason for it.
VWWHFSfQ
Why are you following 1,200 people? What is the point of your home feed? What are you trying to see?
throw10920
I can come up with 100 people I'd want to follow on Twitter, and I don't even have an account. Don't dismiss other people's use-cases if you don't have or understand them.
rconti
> Additionally, beyond this point, it is reasonable for us to not necessarily have a perfect chronology of everything posted by the many thousands of users they follow, but provide enough content that the Timeline always has something new.
While I'm fine with the solution, the wording of this sentence led me to believe that the solution was going to be imperfect chronology, not dropped posts in your feed.
jadbox
So, let's say I follow 4k people in the example and have a 50% drop rate. It seems a bit weird that if all (4k - 1) accounts I follow end up posting nothing in a day, that I STILL have a 50% chance that I won't see the 1 account that posts in a day. It seems to me that the algorithm should consider my feed's age (or the post freshness of my followers). Am I overthinking?
imrehg
This feels like an edge case.
The "reasonable limit" is likely set based on experimentation, and thus on how much people post on average and the load that generates (so the real number is unlikely to be exactly "2000", IMHO).
If you follow a lot of people, how likely is it that their posting pattern is so different from the average? The more people you follow, the less likely that is.
So while you can end up in such a situation in theory, it would have to be a very unusual (and rare) case.
brianolson
I think the 'law of large numbers' says that it's very unlikely for you to follow 4k and have _none_ of them posting. You could artificially construct a counter-example by finding 4k open but silent accounts, but that's silly.
The other workaround is: follow everyone. Write some code to get what you want out of the jetstream event feed. https://docs.bsky.app/blog/jetstream
kevincox
Yeah, this seems concerning to me. Maybe now, while the platform is new, this isn't much of an issue. But as accounts go inactive, people will naturally collect "dead" accounts that they are still following. On Facebook it isn't uncommon for old accounts of sociable people to naturally accumulate thousands of friends.
It seems that what they are trying to measure is "busy timelines", and it seems like they could measure that more directly. For example, what is the number of posts in the timeline over the last 24h? It should be fairly easy to use that as the metric for calculating the drop rate.
ultra-boss
Love reading these sorts of "technical problem + solution" pieces. The world does not need more content, in general, but it does need more of this kind of quality information sharing.
knallfrosch
Anyone following hundreds of thousands of users is obviously a bot account scraping content. I'd ban them and call it a day.
However, I do love reading about the technical challenge. I think Twitter has a special architecture for celebrities with millions of followers. Given Bluesky is a quasi-clone, I wonder why they did not follow in these footsteps.
psionides
You don't need to follow anyone (or even have an account) to scrape content… Someone following a huge amount of accounts usually wants to get a lot of followers quickly this way through follow-backs.
mikemitchelldev
Yes, and Starter Packs make this possible.
steveklabnik
> Given Bluesky is a quasi-clone, I wonder why they did not follow in these footsteps.
There are only six users with over a million followers, and none with two million yet.
I'm sure they'll get there.
culi
Maybe not hundreds of thousands but I'd follow anybody that looks remotely interesting and then primarily use customized feeds. E.g. if I wanna hear about union news, my personal irl network, etc I check that feed
ruined
if you want to scrape all the content, that's what the firehose is for, and it's allowed.
the only reason to mass-follow is for spam purposes.
Retr0id
This does assume that scrapers are smart, and often they're really not. They have infrastructure for scraping HTML from webpages at scale and that is the hammer they use for all nails. (e.g. Wikipedia has to fight off scraper traffic despite full archives being available as torrents, etc.)
In this case I agree though, they're all spammers and/or "clout farmers", or trying to make an account seem more authentic for future scams. They want to generate follow notifications in the hope that some will follow them back (and if they don't, they unfollow again after some interval).
sarchertech
100%. I ran a job board where we provided a nice machine readable XML feed of all of our jobs, but we had bots that insisted on using the standard search box. Searching by city using an alphabetized list.
Geographic search was the most expensive thing they could have done, and no matter what we did, we couldn't get them to use the XML feed.
I even tried returning a link to the feed when we detected a bot. No dice. They just kept working around the bot detection.
mikemitchelldev
BlueSky has starter packs that allow you to mass-follow at the click of a button. Join 10 starter packs in one day and you're following over 1,000 people. Sometimes following others is the only way to get people to engage with your content.
tshaddox
Or just enforce a maximum number of followed accounts.
ARandumGuy
No matter how high you set a maximum limit for interactions on social media (followers, friends, posts, etc), someone will reach the limit and complain about it. I can see why Bluesky would prefer a "soft limit", where going above the limit will degrade the experience. It gives more flexibility to adjust things later, and prevents obnoxious complaints from power users with outsized influence.
tshaddox
I’m skeptical that the people who would complain about that wouldn’t find something else to complain about if you resolved the first complaint. I’d recommend implementing product features that you think are reasonable and accepting the fact that you will get complaints from people who disagree.
DeepSeaTortoise
Potential solutions:
- Make it easy to systematically unfollow people (or degrade them to a different tier, see below, or sort them automatically into a different feed; maybe even allow automatic following of certain people, like your city's mayor or local ice cream parlors). Like based on recent activity, last engagement with a post, type of content (pictures, videos, links ...), on a schedule (e.g. follow for 3 years, follow until 2028), special status (family, friends, member of congress, member of city council, mayor...), number/ratio of common followers, regex expressions, recommendations by certain accounts, letter-to-word ratio, season, planetary alignment, weather, age, train departure time, side-chaining based on other accounts, force accounts to play Russian unfollow roulette, urgency to pee, healthcare CEO life expectancy derivative, ... or any combination of these.
- Allow different tiers of following someone. Like friends (never unfollow, always fetch updates), family (never unfollow, rate limit high-energy uncles), news (filter based on urgency or current topics of interest), politicians (highlight as untrustworthy, attach link to donation and board membership disclosure, attach term-limit and next election countdown), local businesses (hard rate limit, attach opening hours), bookmark (never unfollow, no updates), ... maybe multiple tiers in each category and allow those being followed to either temporarily boost their tier (or tiers of certain posts) or e.g. once per year.
- Allow people to exempt some of their posts from being dropped from followers' feeds. E.g. two per week and an additional 5 per month.
- Allow people to choose which followers should be given a higher priority when writing posts to their feeds.
cavisne
AWS has a cool general approach to this problem (one badly behaving user affecting others on their shard):
https://aws.amazon.com/builders-library/workload-isolation-u...
The basic idea is to assign each user to multiple shards, decreasing the chances of another user sharing all of their shards with the badly behaving user.
Fixing this issue as described in the article makes sense, but if they had done shuffle sharding in the first place, it would cover any new issues without affecting many other users.
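A minimal sketch of the shuffle-sharding idea from the AWS article (shard counts illustrative): each user gets a small, deterministic, pseudo-random subset of shards, so two users rarely share their entire subset and a noisy neighbor can only degrade a fraction of anyone else's capacity.

```python
import hashlib
import random

def shuffle_shard(user_id, num_shards=16, shards_per_user=4):
    """Deterministically pick a pseudo-random subset of shards for a user."""
    seed = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(range(num_shards), shards_per_user))
```

With 16 shards taken 4 at a time there are 1,820 possible subsets, so the chance that a random pair of users fully overlaps (and one can take the other fully down) is under 0.1%.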
artee_49
I think shuffle sharding is beneficial for read-only replica cases, not for write scenarios like this. You'd have to write to the primary and not to a "virtual node", right? Or am I understanding it incorrectly? I just read that article now.
ramblejam
Nice problem to have, though. Over on Nostr they're finding it a real struggle to get to the point where you're confident you won't miss replies to your own notes, let alone replies from other people in threads you haven't interacted with.
The current solution is for everyone to use the same few relays, which is basically a polite nod to Bluesky's architecture. The long-term solution is—well, it involves a lot of relay hint dropping, and a reliance on Japanese levels of acuity when it comes to picking up on hints (among clients). But (a) it's proving extremely slow going, and (b) it only aims to mitigate the "global as relates to me" problem.
sphars
When I go directly to a user's profile and see all their posts, sometimes one of their posts isn't in my timeline where it should be. I follow less than 100 users on Bluesky, but I guess this explains why I occasionally don't see a user's post in my timeline.
Lossy indeed.
Retr0id
If another user you follow reposted or replied to a post, it can affect its order in your following feed. You shouldn't be seeing any loss as described in the article from following only 100 users.
sphars
I've experienced it with "first-party" posts, not replies. A post wouldn't show in my timeline but would on the user's profile. This is on the official Android app, but there has been an update or two, so I'll have to double-check again.
Eric_WVGG
Are you using an app, website, or combination?
Various clients (I'm writing one) interpret the timeline differently, as a feed that shows literally everything could include things that most people would find undesirable or irrelevant (replies to strangers, replies to replies to replies, etc.).
sphars
I'm using the official Android app. There has been an update or two, so I'll have to confirm it's still happening.
I wonder why timelines aren't implemented as a hybrid gather-scatter choosing strategy depending on account popularity (a combination of fan-out to followers and a lazy fetch of popular followed accounts when follower's timeline is served).
When you have a celebrity account, instead of fanning out every message to millions of followers' timelines, it would be cheaper to do nothing when the celebrity posts, and later, when serving each follower's timeline, fetch the celebrity's posts and merge them in. When millions of followers do that, it will be a cheap read-only fetch from a hot cache.
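A toy sketch of that hybrid (the threshold and all names are illustrative, not any real implementation): ordinary accounts are fanned out at write time; celebrity posts are written once and merged in at read time.

```python
from itertools import chain
from collections import defaultdict

CELEB_THRESHOLD = 100_000  # assumed cutoff for the "celebrity" path

class HybridTimeline:
    def __init__(self, follower_counts):
        self.follower_counts = follower_counts
        self.pushed = defaultdict(list)        # fan-out on write (normal users)
        self.author_posts = defaultdict(list)  # pull on read (celebrities)
        self.followers = defaultdict(set)

    def follow(self, user, author):
        self.followers[author].add(user)

    def post(self, author, ts, text):
        if self.follower_counts.get(author, 0) >= CELEB_THRESHOLD:
            # Celebrity: a single write, merged lazily at read time.
            self.author_posts[author].append((ts, text))
        else:
            # Ordinary account: fan out to every follower's timeline.
            for f in self.followers[author]:
                self.pushed[f].append((ts, text))

    def timeline(self, user, followed_celebs):
        # Merge the pre-rendered timeline with lazily fetched celeb posts.
        entries = chain(self.pushed[user],
                        *(self.author_posts[c] for c in followed_celebs))
        return [text for _, text in sorted(entries, reverse=True)]
```

The open question raised upthread still applies: a user following thousands of celebrities moves the merge cost to read time, so a real system would cap or sample the pulled set.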