
The Making of Community Notes (2024)


17 comments

·January 20, 2025

yodsanklai

It's a very interesting design space: how to design public forums where people can share ideas and collectively improve their knowledge.

There are very successful instances of that, for instance MathOverflow.

But I don't know what can be achieved when a good chunk of participants don't agree on some basic rules beforehand, such as logic and good faith. It's a bit like consensus with Byzantine failures; maybe there's an impossibility theorem here and large social networks should be limited to cat videos.
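The Byzantine analogy can be made a little concrete. None of this is from the article; it's just the classic Byzantine-fault-tolerance bound, sketched for illustration: n participants can reach consensus despite f arbitrarily faulty ("bad-faith") ones only if fewer than a third are faulty.

```python
def byzantine_consensus_possible(n: int, f: int) -> bool:
    """Classic BFT bound: consensus among n participants tolerates
    at most f Byzantine (arbitrarily faulty) ones when n >= 3f + 1."""
    return n >= 3 * f + 1

# A "good chunk" of bad-faith participants puts consensus provably out of reach:
print(byzantine_consensus_possible(100, 20))  # True: 20% faulty is tolerable
print(byzantine_consensus_possible(100, 40))  # False: 40% faulty exceeds the bound
```

So if "a good chunk" means more than a third, the pessimistic reading above isn't just a metaphor.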

gsf_emergency_2

Here's one vision:

HN, after it's been overwhelmed by Eternal Septembrists, open-sources its moderation code.

o7 uses the data to hunt for possibility theorems while lone humans herd diffusion generators to spray humanity with miniHN

In the meantime, lesser entities find better realtime algos to extract/compress novel information from random fora

randysalami

I’ve had an idea for a while called DoubleSign, except it’s for political beliefs. It resolves the agreement issue by indexing users with other users who believe the same things they do, e.g. socialists are grouped with socialists, fascists with fascists, etc., but at a more granular level.

By making the grouping transparent and obvious, unlike algorithmic echo chambers, people know where they are, can immediately reach consensus, and can organize for productive aims rather than bicker and infight. There are many more mechanics and concepts, but that’s the high level.
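The indexing idea as described can be sketched in a few lines. This is purely illustrative of the commenter's concept; "DoubleSign" is not a real system, and all names here are hypothetical:

```python
from collections import defaultdict

def index_users_by_beliefs(users):
    """Group users under every belief tag they self-declare, so each
    'index' is a transparent, opt-in group rather than an inferred
    echo chamber. users: dict of username -> set of belief tags."""
    indexes = defaultdict(set)
    for user, beliefs in users.items():
        for belief in beliefs:
            indexes[belief].add(user)
    return dict(indexes)

users = {
    "alice": {"socialist", "pro-transit"},
    "bob": {"socialist"},
    "carol": {"libertarian", "pro-transit"},
}
print(index_users_by_beliefs(users)["pro-transit"])  # {'alice', 'carol'}
```

Granularity comes from the tags themselves: the finer the belief labels, the smaller and more specific each index.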

patrickTs

This actually seems cool, but it would be difficult to exist as a tech company that explicitly organizes extremists. But yeah, it would probably lead to better actual discussion than something like Twitter, which seems to feed on the anger of its users.

randysalami

Yes, definitely less a company that exists to maximize profit and more a public service for the greater good. Platforming bad people isn’t ideal, but in my opinion it’s important, and those groups should look pathetic in comparison to the majority. I’d imagine memberships in the thousands while more popular beliefs have hundreds of thousands.

At the end of the day, they are our fellow countrymen and we all need to know where we stand. If they were somehow the majority, that doesn’t reflect poorly on the application but poorly on the society we live in. It’s like a mirror.

I also want to add that it exists to organize practical action. Within these indexes, there would be mechanisms to organize people politically in the real world, with direction and funding. Think of how many local elections go unopposed or have meager voter turnout. These elections could be displayed, and individual indexes could actually crowdfund and run candidates on their behalf. People who might never vote in these elections suddenly realize that 500 people in their small town believe the exact same thing they do (not just "go blue" or "go red") and can actually win an election. Their candidate is now elected and a direct member of their index, someone they can talk to and have advocate for them.

The big idea is that the bottom-up approach could eventually get senators or governors elected by users of the network if indexes become large enough, and potentially threaten the traditional two-party system once people realize they don’t stand alone in their distinct beliefs and can do something about it. Pretty cool and pretty powerful (:

derbOac

It's a useful piece, thanks for posting it.

Flagging of misinformation has become a controversial topic, but personally it strikes me as odd to do such a thing at all. Or rather, if there's a need for such a system, it suggests the platform has already failed, in that people are not able to use it discerningly, whether because of the posters, the readers, the system, or some interaction thereof. And if they are able to use it discerningly, why have the system?

I've posted this elsewhere but the nature of community notes makes it a bit murkier. If Platform A introduces misinformation flags, you can always compartmentalize it as the platform inserting itself into the conversation one way or another, but with something like community notes, you're left wondering at some level "what happened here?" in a way that's similar to upvotes or downvotes.

I'm not sure what to do with the point about people trusting community notes, either. Other studies I've seen cited elsewhere suggest a very high false-negative rate, so are the notes doing what they're supposed to? How are people interpreting the unflagged posts?
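For reference, the false-negative rate mentioned above is just the share of genuinely misleading posts that never receive a note. The numbers below are hypothetical, purely to show why a high rate undermines trust in *unflagged* posts:

```python
def false_negative_rate(unflagged_misleading: int, flagged_misleading: int) -> float:
    """FN rate = misleading posts left unflagged / all misleading posts."""
    return unflagged_misleading / (unflagged_misleading + flagged_misleading)

# Hypothetical: 900 misleading posts got no note, 100 were correctly noted.
rate = false_negative_rate(900, 100)
print(rate)  # 0.9 -> the absence of a note carries almost no signal
```

At a rate like that, readers who treat "no note" as "probably fine" are miscalibrated, which is the parent's worry about how unflagged posts get interpreted.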

The whole thing seems well-intended at some level but also missing the forest for the trees or something. I don't want community notes or misinformation flags, I just want discourse and I want to be able to ignore certain sources of posts and be more exposed to others.

It's an interesting idea, and I'm not even sure I'm objecting to its implementation. It's more that I feel there's an overconfidence in all of the misinformation-flagging methods, and this overconfidence may be taking attention away from more serious problems, like how people consume and evaluate information, or how information networks are being controlled.

antidamage

Depending on the subject, CN is frequently a second vector for disinformation now as well. Posts that are clearly disinformation, from the right account, almost seem to have their supporting CNs lined up in advance. It feels a little organised, or at least a bit pretend-stochastic.

Overall, CNs are still working, but if you're exposed to notes in the pre-"helpful" stage, it can be either frustrating or confusing.
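Some context on that "pre-helpful" stage: Community Notes' published, open-sourced scoring only promotes a note to "Helpful" when raters who usually disagree with each other both rate it helpful; until then it sits in "Needs More Ratings". The real algorithm uses matrix factorization over the rating matrix; the sketch below is a drastically simplified bridging rule, not the actual implementation:

```python
def note_status(ratings, threshold=0.5):
    """Toy bridging rule: a note becomes 'Helpful' only if a majority of
    raters in *each* viewpoint cluster rated it helpful. One-sided support,
    however lopsided, leaves it in the pre-helpful stage.
    ratings: list of (cluster_label, rated_helpful) pairs."""
    clusters = {}
    for cluster, helpful in ratings:
        clusters.setdefault(cluster, []).append(helpful)
    if all(sum(votes) / len(votes) > threshold for votes in clusters.values()):
        return "Helpful"
    return "Needs More Ratings"

# One-sided support is not enough:
print(note_status([("left", True), ("left", True), ("right", False)]))
# prints "Needs More Ratings"
# Cross-cluster agreement promotes the note:
print(note_status([("left", True), ("right", True)]))
# prints "Helpful"
```

This is also why organized one-sided campaigns mostly produce notes stuck in the frustrating limbo state rather than visibly "Helpful" ones.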

emmelaich

It's sort of inevitable that the more CNs are valued, the more people will attempt to misuse them.

I only ever rate about 30% of proposed community notes as "ok/agree".

hammock

Can you give a couple examples of CN that are “disinformation”?

_DeadFred_

Please link to studies when referenced. Otherwise it just seems like an appeal to authority, especially as your post is very against the topic discussed.

Alive-in-2025

A big reason we got to this point in the US is that repeating falsehoods over and over on, say, Fox News convinces a good number of people that they are true. We have social media too. The success of this strategy is notable. The propaganda capabilities of today exceed the original electronic propaganda schemes of the early 20th century.

Whether these propaganda methods are spreading correct or false information is a separate question. I think it's a question with an obvious answer, but I'm sure people will differ.

hammock

Can you give a couple of examples of falsehoods that are repeated over and over on Fox News that a good number of people are now convinced are true?

nxobject

Brilliant postmortem on the processes used to intentionally develop the seed of a quality idea into practice. However, I'll be interested to see where the authors' instincts lead them when a lot of energy is put into figuring out how to game the system, and once the cat-and-mouse game begins… thinking adversarially is very different.

yowayb

Haven't read the entire interview yet, but I have to say so far it has been very helpful.
