Decentralized Syndication – The Missing Internet Protocol
21 comments
· January 10, 2025
teddyh
Is he reinventing USENET netnews?
bb88
Yes and no. I think the primary issue is that I could never just create a new newsgroup back when Usenet was popular and get it to syndicate with other servers.
The other issue is who's going to host it? I need a port somehow (CGNAT be damned!).
hinkley
Spam started on Usenet. As did Internet censorship. You can’t just reinvent Usenet. Or we could all just use Usenet.
glenstein
While everyone is waiting for Atproto to proto, ActivityPub is already here. This is giving me "Sumerians look on in confusion as god creates world" vibes.
https://theonion.com/sumerians-look-on-in-confusion-as-god-c...
wmf
1. Domain names: good.
2. Proof of work time IDs as timestamps: This doesn't work. It's trivial to backdate posts just by picking an earlier ID. (I don't care about this topic personally but people are concerned about backdating not forward-dating.)
N. Decentralized instances should be able to host partial data: This is where I got lost. If everybody is hosting their own data, why is anything else needed?
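A minimal sketch of the backdating problem in point 2, assuming a simple hash-grinding scheme where the claimed time is part of the preimage (the exact construction in the article may differ). Nothing binds the claimed time to real time, so the proof verifies for any time the author picks:

```python
import hashlib
import time

DIFFICULTY = 4  # leading hex zeros required (assumed parameter)

def mint_id(payload: bytes, claimed_time: int) -> tuple[int, int]:
    """Grind a nonce so hash(claimed_time | nonce | payload) meets difficulty."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{claimed_time}|{nonce}|".encode() + payload).hexdigest()
        if h.startswith("0" * DIFFICULTY):
            return claimed_time, nonce
        nonce += 1

def verify(payload: bytes, claimed_time: int, nonce: int) -> bool:
    h = hashlib.sha256(f"{claimed_time}|{nonce}|".encode() + payload).hexdigest()
    return h.startswith("0" * DIFFICULTY)

# Nothing stops us from claiming a time a decade in the past:
backdated = int(time.time()) - 10 * 365 * 24 * 3600
t, n = mint_id(b"hello", backdated)
assert verify(b"hello", t, n)  # the "timestamp" checks out anyway
```

The work only proves effort was spent, not when; an honest-looking proof can carry any date the minter chooses.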
macawfish
Domain names are fine but they shouldn't be forced onto anyone. Nothing about DID or any other flexible and open decentralized naming/identity protocol will prevent anyone from using domain names if they want to.
hinkley
Time services can help with these sorts of things. They aren’t notarizing the message. You don’t trust the service to validate who wrote it or who sent it, you just trust that it saw these bytes at this time.
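A toy sketch of that kind of time service: it attests only "I saw these bytes at this time," nothing about authorship. For brevity this uses a shared HMAC key; a real service would sign receipts with a public key so anyone can verify them. All names here are hypothetical:

```python
import hashlib
import hmac
import json
import time

SERVICE_KEY = b"service-secret"  # hypothetical; stands in for the service's signing key

def timestamp(message: bytes) -> dict:
    """Service-side: issue a receipt binding a digest to the time it was seen."""
    receipt = {
        "digest": hashlib.sha256(message).hexdigest(),
        "seen_at": int(time.time()),
    }
    blob = json.dumps(receipt, sort_keys=True).encode()
    receipt["sig"] = hmac.new(SERVICE_KEY, blob, hashlib.sha256).hexdigest()
    return receipt

def check(message: bytes, receipt: dict) -> bool:
    """Reader-side: verify the receipt covers exactly these bytes."""
    blob = json.dumps({"digest": receipt["digest"], "seen_at": receipt["seen_at"]},
                      sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        receipt["sig"], hmac.new(SERVICE_KEY, blob, hashlib.sha256).hexdigest())
    return ok_sig and hashlib.sha256(message).hexdigest() == receipt["digest"]

r = timestamp(b"some post bytes")
assert check(b"some post bytes", r)   # bytes provably existed by r["seen_at"]
assert not check(b"tampered bytes", r)  # digest no longer matches
```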
catlifeonmars
Something that maintains a mapping between a signature+domain pair and the earliest-seen timestamp for that combination? At that point the time service becomes a viable aggregated index for readers to poll for updates. I think this also solves the problem of lowering the cost of participation: since the index would only store a small amount of data per post, and since indexes can be composed by the reader, it could scale cost-effectively.
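A minimal sketch of that index idea, with hypothetical names throughout: the first sighting of a (signature, domain) pair is recorded and later observations can't move it earlier, and readers poll a cheap per-domain view:

```python
from collections import defaultdict

class EarliestSeenIndex:
    """Maps (signature, domain) -> earliest-seen timestamp (sketch)."""

    def __init__(self):
        self.first_seen = {}               # (sig, domain) -> timestamp
        self.by_domain = defaultdict(list)  # domain -> [(timestamp, sig)]

    def observe(self, sig: str, domain: str, now: int) -> int:
        key = (sig, domain)
        if key not in self.first_seen:     # only the first sighting counts
            self.first_seen[key] = now
            self.by_domain[domain].append((now, sig))
        return self.first_seen[key]        # replays can't change the record

    def updates_since(self, domain: str, since: int):
        """Small per-post data; readers compose these across multiple indexes."""
        return [(t, s) for t, s in self.by_domain[domain] if t > since]

idx = EarliestSeenIndex()
idx.observe("sigA", "alice.example", 100)
idx.observe("sigA", "alice.example", 999)  # replay: earliest-seen stays 100
assert idx.first_seen[("sigA", "alice.example")] == 100
assert idx.updates_since("alice.example", 50) == [(100, "sigA")]
```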
evbogue
If the data is a signed hash, why does it need the domain name requirement? One can host self-authenticating content in many places.
And one can host many signing keys at a single domain.
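A sketch of the self-authenticating idea: when the content's ID is its hash, a reader can verify the bytes from any mirror without trusting a particular domain. The mirror names are hypothetical:

```python
import hashlib

# Two stand-in mirrors; any host could serve the same content-addressed bytes.
mirrors = {"mirror-a": {}, "mirror-b": {}}

def publish(content: bytes) -> str:
    """The ID is the hash of the content itself."""
    cid = hashlib.sha256(content).hexdigest()
    for store in mirrors.values():
        store[cid] = content
    return cid

def fetch(cid: str, mirror: str) -> bytes:
    """Verify against the ID, so the mirror needn't be trusted."""
    data = mirrors[mirror][cid]
    if hashlib.sha256(data).hexdigest() != cid:
        raise ValueError("content does not match its ID")
    return data

cid = publish(b"a post")
assert fetch(cid, "mirror-a") == b"a post"
assert fetch(cid, "mirror-b") == b"a post"  # same bytes from any host
```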
catlifeonmars
In the article, the main motivation for requiring a domain name is to raise the barrier to entry above "free" to mitigate spamming/abuse.
wmf
One person per domain is essentially proof of $10.
toomim
I am working on something like this. If you are, too, please contact me! toomim@gmail.com.
evbogue
I'm working on something like this too! I emailed you.
pfraze
Atproto supports deletes and partial syncs
hkt
https://en.wikipedia.org/wiki/Syndie was a decent attempt at this which is, I gather, still somewhat alive.
convolvatron
A lot of the use cases for this would have been covered by the protocol designs suggested by Floyd, Jacobson, and Zhang in https://www.icir.org/floyd/papers/adapt-web.pdf
but it came right at a time when the industry had kind of just stopped listening to that whole group, and it was built on multicast, which was a dying horse.
but if we had that facility as a widely implemented open standard, things would be much different and arguably much better today.
fiatjaf
Nostr is kind of what you're looking for.
Uptrenda
>Everybody has to host their own content
Yeah, this won't work. Like, at all. This idea has been tried over and over in various decentralized apps, and the problem is that as nodes go offline and online, links quickly break...
No offense, but this is a very half-assed post that glosses over one of the basic problems in the space. It's a problem that inspired research into DHTs and various attempts at decentralized storage systems, and most recently we're getting some interesting hybrid approaches that seem like they will actually work.
>Domain names should be decentralized IDs (DIDs)
This is a hard problem by itself. All the decentralized name systems I've seen suck. People currently try to use DHTs. I'm not sure a DHT can provide reliability, though, and since the name is the root of the entire system it needs to be 100% reliable. In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try to "decentralize" everything.
>Proof of work time IDs can be used as timestamps
Horribly inefficient for a social feed and orphans are going to screw you even more.
I think you've not thought about this very hard.
catlifeonmars
> In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try "decentralize" everything.
Not author, but that is what the global domain system is. There are a handful of root name servers that are baked into DNS resolvers.
I would love to have an RSS interface where I can republish articles to a number of my own feeds (selectively or automatically). Then I can follow some of my friends' republished feeds.
I feel like the "one feed" approach of most social platforms is not there to benefit users but to encourage doom-scrolling with FOMO. It would be a lot harder for them to capture so much of users' time and tolerance for ads if content were actually organized. But it seems to me that not much work would be needed to turn an RSS reader into a very productive social platform for sharing news and articles.
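A minimal sketch of the selective-republish idea using only the stdlib XML tools; a real reader would fetch live RSS over HTTP, and the feed content here is made up for illustration:

```python
import xml.etree.ElementTree as ET

def republish(source_rss: str, keep_titles: set, feed_title: str) -> str:
    """Copy only chosen items from a source feed into a new feed of my own."""
    src = ET.fromstring(source_rss)
    out = ET.Element("rss", version="2.0")
    chan = ET.SubElement(out, "channel")
    ET.SubElement(chan, "title").text = feed_title
    for item in src.iter("item"):
        if item.findtext("title") in keep_titles:
            chan.append(item)  # selective republish: the item moves as-is
    return ET.tostring(out, encoding="unicode")

# Hypothetical source feed from a friend:
SOURCE = """<rss version="2.0"><channel><title>friend</title>
<item><title>keep me</title><link>http://example.com/1</link></item>
<item><title>skip me</title><link>http://example.com/2</link></item>
</channel></rss>"""

feed = republish(SOURCE, {"keep me"}, "my picks")
assert "keep me" in feed and "skip me" not in feed
```

Chaining this across friends' feeds gives the "organized, multi-feed" reading experience the comment describes, without any central platform.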