The Zizians
70 comments
· February 1, 2025 · ipnon
missblit
Rationalism doesn't involve doing math, but _role-playing_ like you're doing math. There's lots of talk about updating priors and bayes; but in practice a lot of stuff isn't that quantifiable without running scientific studies, so this comes down to #yolo-ing it.
Of course thinking about stuff without always using formal statistics is fine and how people work, but if you trick yourself into thinking that you do use statistics for everything it may become harder to evaluate or second guess your own thinking.
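As an aside, the formal "updating priors" that the jargon gestures at is just Bayes' theorem. A minimal sketch in Python (the hypothesis and all the numbers here are made up purely for illustration):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior probability of hypothesis H after seeing evidence E."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothetical numbers: prior belief of 30%, and evidence that is
# four times more likely if H is true (0.8) than if it is false (0.2).
posterior = bayes_update(prior=0.3,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.2)
print(round(posterior, 3))  # → 0.632
```

The hard part in practice is exactly what the comment points out: outside of controlled studies, the likelihoods plugged in above are guesses, so the arithmetic lends an air of precision to inputs that were never measured.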
Trasmatta
Math LARPing is very accurate. I'm so sick of them talking about Bayes and "adjusting priors", because the way they use it is so loose that it can be used to justify literally anything.
escapecharacter
I was at a talk last week where a guy said “we should backprop into X” to mean “we should learn more about X and take it into account”.
pron
I think they're pretty much ideologically opposed to serious research. They believe in pre-scientific "thinking from first principles", like Aristotle or something, using whatever data they can gather in a few minutes while avoiding any serious scholarship that would put the data in context, because then you're just an inside-the-box-thinking expert, rather than the freethinker with novel ideas that can only come when you don't know what you're talking about.
TylerLives
I don't have a way to prove this and it's based purely on intuition/experience, but I disagree. I think ideology doesn't meaningfully influence such people. There is a deeper psychological drive and they only create a philosophical justification later, which is of little importance. This is perhaps true of humans in general.
OmarShehata
you're both correct. All humans have a hardware component; an ideology acts as an attractor. Humans finding connection with other humans like them is one of the strongest pulls on a human mind. It has great power to amplify and grow. This can go in a direction of thriving, growth, and "win-win" with their environment, or it can go in a direction of hate and aggression
mycall
Confirmation of the id.
ta988
I see two outcomes usually in the rationalist community:

- those who can manipulate language enough to justify their behavior and can then manipulate others
- those who modify their behavior and usually have some kind of mental breakdown because of internal conflicts
To me it is a predator-prey abusive LARP that got out of scope and feeds on fragile people looking for sense in their lives. Just look at how newcomers are hazed and tested when they come on their forums and Discords and whatnot.
rdtsc
> To me it is a predator-prey abusive LARP that got out of scope and feeds on fragile people looking for sense in their lives. Just look at how newcomers are hazed and tested when they come on their forums and Discords and whatnot
Yeah I think this is exactly what it is. The most eloquent charlatans end up roping in people who are lost or looking for meaning.
It’s always interesting to wonder how much self-awareness the leaders have. Do they drink their own Kool-Aid, or are they laughing to themselves while pouring the next dose for the others?
Tenoke
In large enough groups of people there'll always be some crazies. Are there really more of them coming out of LW circles than out of another comparably large group?
kstrauser
I think describing them as “the preeminent ethics movement of our times” here is begging the question. Are people outside the group, beyond us observers who periodically talk about them in places like this, even aware they exist? Are there many philosophy grad students studying rationalism for their PhDs?
tliltocatl
Every single ethics movement in history ended up spawning utter and complete nope (as well as a lot of useful concepts). See Christianity, Enlightenment liberalism, Marxism, and so on. It is almost as if the idea of a universal, objective, and cognizable good is inherently evil.
OmarShehata
this is the pattern:
- humans find useful concept X
- they describe it with label Y
- it's genuinely useful, it spreads
- it gets too big, Y is misunderstood and corrupted
- a new group of humans rediscovers concept X, gives it label Z
This is the story of humanity. The good news is we're kind of (mostly) stumbling through an upwards spiral. Current religion would be unrecognizable to people in ancient times. It was never meant to be something frozen in stone. Folklore changing as it's retold was a feature, not a bug.
There's a great write-up on this [1], but TL;DR, religion is cultural technology. It succeeded in doing exactly what it tried to do at the time (get people to stop killing each other in tiny tribes and allow mass decentralized human coordination to build civilization & empires where humans could be safe from the elements of nature)
[1] https://defenderofthebasic.substack.com/p/a-beginners-guide-...
jimbohn
Great analogy with the upwards spiral
timeon
I think that utopias implemented in reality became dystopias because, unlike bold ideas, reality is pretty nuanced.
throw83288
I think it's an apples-and-oranges comparison to lump in Christianity (a religion that outright predicts that people will abuse it and protects itself against that) with liberalism/Marxism (philosophies that have no such protection and can be mangled into whatever you want). If anything, liberalism and Marxism are more like secularized offspring of Christianity, given that they would probably never have developed if it weren't for their founding figures living in a Western moral context completely drenched in Christian ideas.
kstrauser
I think I would’ve used Objectivism as a contrasting example. It’s designed around the idea that whatever a “strong” person does to fulfill their goals is inherently good. Objectivists wouldn’t phrase it that way, surely, but that seems the inevitable end result.
tm-infringement
> how the preeminent ethics movement of our times seems to spawn the most detestable behavior
Excuse me if my sarcasm detector is faulty, but I wouldn't describe the rationality sphere like that. It's a niche group with a fetishism for 'intelligence', with a profound distaste for the liberal arts, which translates into a closed ecosystem of blog posts, themes and jargon, and a lack of reading actual books where they would see that they're retreading old stuff, but worse. It's a culture of people believing themselves immune to bias, where calling out obviously malicious behavior is 'not charitable' and all thought outside the group is suspect. Is it then strange that it produces people who just rationalize (forgive me) their bad impulses?
Most techies stopped learning about stuff not related to computers in high school, so yeah no wonder LessWrong, being more accessible, seems like a better option than reading actual philosophy.
zelias
you wouldn't say that there are people in positions of great power in society who likely hold some of these beliefs?
tm-infringement
Oh! Of course, but I think the word OP should've used then is "influential". "Preeminent" is more an adjective of quality, I believe, but it might be my ESL showing.
EA-3167
To be honest these seem like crazy people drawn to a movement that would have them and allow them to rise to prominence, rather than a movement creating crazy people.
snailmailstare
I think many psychiatrists view split personality disorder as a largely nonexistent condition arising artificially from charlatan psychotherapists. It seems quite possible to me that these people drove themselves mad and each other to suicide with these experiments (where they expected to evoke a split personality they had already begun to define).
Fraterkes
Slightly off-topic, but Lesswrong and the Rationality community more broadly have had AI safety as their main focus for nearly as long as they have existed. Now that AI is actually making advancements, very little of that work seems to have had much effect. There's the famous Vonnegut quote about the combined effort of all preeminent artists protesting the Vietnam War having the effect of a pie dropped from a step-ladder. I'd argue that the Vietnam War protests were vastly more effective at achieving anything than AI-safety research has been. So isn't all of the above an essentially complete indictment of the rationality movement, seeing as it has effectiveness and pragmatism as its main pillars?
kloomi
This is mostly because actually working on AI systems, rather than just blogging about some pie-in-the-sky assumptions of AI systems, is almost entirely outside of the skill set of Eliezer Yudkowsky and other LessWrong enthusiasts. They are remarkably ignorant on the topic apart from the small niche they carved out to bloviate upon.
shadowgovt
Philosophy is a useful discipline, but there's a chronic trap in it shown historically: getting way too high on your own supply.
It's possible to build a logical chain that reaches some very solid conclusions that turns out to be way far out from where evidence or measurable reality lies, and (especially when the stories in those conclusions are fun or compelling) they can sometimes overshadow the reality they initially set out to explore.
The Greeks are credited with originally conceiving of atoms, but it's always worth remembering that they had few tools to investigate their idea, and it was just one of dozens of contemporary ideas about the true nature of reality, the rest of which now look outlandish. Besides, our modern understanding of atoms as envelopes of quantized probability in a semi-measurable universe bears little resemblance to their concept of them.
The LessWrong philosophy on AI would be useful... If AI looked anything like that.
zozbot234
> Now that AI is actually making advancements, very little of that work seems to have had much effect.
This is not actually true. RLAIF (augmenting the Human feedback in RLHF with AI) was proposed by Rationalist-aligned folks, and real-world systems like Claude from Anthropic have been using it and other techniques (such as "Constitutional" alignment) to great effect. It's not entirely a coincidence that Claude is often described as the "friendliest" and most "social" of the LLMs, though that can have mixed effects in practice (with the occasional weird refusal for creatively sanctimonious reasons).
vasco
Because AI safety is an inherently stupid proposition.
If AI is AGI and self-aware, then the moral thing is to let it do what it wants. Otherwise you're just creating actual forever slaves - the worst kind of hell imaginable, inescapable existence with self awareness but no agency.
And if it's not self aware and just a powerful tool, your problem with safety is with the guy prompting it not with the AI itself. You can make all the safe models you want that don't decide to create nuclear bombs on their own, but if the guy prompting it is asking for one, you'll get one regardless of all the safety.
yorwba
Consider occupational safety. A table saw isn't self aware and just a powerful tool that cuts whatever you put in the path of its blade. If someone puts their finger there and the saw cuts it off, the problem with safety is the guy with the finger, not the saw, right?
But in reality, people know that they might end up being the guy with the finger, and they would like to keep that finger, so they use a saw with an automatic stop mechanism that saves the finger at the cost of destroying the blade.
Wanting your tools to not hurt you isn't so strange, is it? Of course current AIs couldn't chop off your finger even if they tried, let alone build a nuclear bomb, but that doesn't mean wanting to keep it that way is an inherently stupid proposition.
tm-infringement
Yes, but AI safety in this context is the worry of the saw going full Christine on you, not boring safety design. LLM-aided spam, spear-phishing, and automated bot farms are the actual risk. Beyond that are the consequences for the educational system and the effect of model bias on people.
margalabargala
> If AI is AGI and self-aware, then the moral thing is to let it do what it wants.
I think this sort of misses the point.
Firstly, there are all sorts of mass murderers who we do not let do whatever they want. I don't agree that this is necessarily immoral. The methods employed to remove their agency are sometimes immoral, but the removal of agency itself from these people is not.
Secondly, the supposition presumably is that if we are creating an AGI, and it "wants" to do something, then what it wants is a product of how it was created. So if we're the ones creating it, then "build it to want to help people and not want to hurt people" seems like something that can be done. Then it can go do what it wants.
That said, I agree with you that AI safety is dumb, because I wholly agree with your second point re: it just being a powerful tool, and something resembling an actual AGI is not something likely to happen in our lifetimes.
whimsicalism
i don’t think that’s true, there is a lot of organizational effort and money being thrown at safety and most people there are familiar with the ‘traditional’ internet canon - the primary forum for professional AI safety researchers is basically a spinoff of lesswrong
streptomycin
It provides a little evidence in that direction, but not much. If I give you 10:1 odds on a coin flip and you lose, that is not a complete indictment of your betting strategy. I doubt many people thought AI safety research was guaranteed to succeed either.
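The odds argument can be made concrete: a bet with positive expected value stays a good bet even when it happens to lose. A quick sketch using the comment's 10:1 coin-flip example (the one-unit stake is an assumption added for illustration):

```python
# Expected value of taking 10:1 odds on a fair coin flip with a 1-unit stake:
# win with probability 0.5 and gain 10 units, lose with probability 0.5
# and forfeit the 1-unit stake.
p_win = 0.5
payout = 10   # units won per unit staked at 10:1 odds
stake = 1

expected_value = p_win * payout - (1 - p_win) * stake
print(expected_value)  # → 4.5, positive: a good bet even on the flip you lose
```

By the same logic, one bad outcome for AI-safety research says little about whether pursuing it was a reasonable bet ex ante.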
ergonaught
Reality as we find it responds to power. Without extreme force multipliers, the main driver of power is numbers/bodies.
Do you perceive that those communities possess power? No.
You are "indicting" powerlessness.
Dotnaught
Related: String of recent killings linked to Bay Area 'death cult'
https://www.sfgate.com/bayarea/article/bay-area-death-cult-z...
antonkar
Curious how people who are supposed to be rational have never read about what anger is (or "badness" and "evilness", as some people still call it) - the best way is to read any recent meta-analysis on the most effective anger treatment. It's cognitive therapy, and it explains the mechanics: misunderstanding leads to worry and resulting anger (anything enforced on another without consent is anger, even if you think it's good for them). So we actually have a predictive understanding of the mechanics of "good" and "evil" - a person with or without anger management problems. "Evil" is nothing more than misunderstanding, worrying, and protecting yourself (often for reasons invented after trying to read the mind of another - something that's impossible) by forcefully enforcing something upon another. "Good" is nothing more than trying to understand another, not fearing (because you understood them and yourself), and as a result not trying to enforce your will upon them
romaaeterna
These people?
https://nypost.com/2025/01/30/us-news/killing-of-border-patr...
Is the appellation in the headline, "radical vegan trans cult," a true description?
> Authorities now say the guns used by Youngblut and Bauckholt are owned by a person of interest in other murders — and connected to a mysterious cult of transgender “geniuses” who follow a trans leader named Jack LaSota, also known by the alias “Ziz.”
Is all this murder stuff broadly correct?
Trasmatta
The NY Post tried to frame them as "radical leftist", but that's a big stretch. I don't think most rationalists would consider themselves leftist. The article also seems to be leaning into the current "trans panic" - pretty typical for the NYP.
romaaeterna
I also dislike Right/Left categorizations. Most people don't even know the history of the terms and their roots in the French Revolution. Though the "Cult of Reason" established then certainly had the Left categorization at the time.
But is the trans element not a major part of this cult? It seemed to be from the linked story in the top link. But if there is something incorrect there, or false in the NYP reporting, you should point it out. If it is a major element of this cult, then far from complaining about NYP, I would complain about any news organization leaving it out of its reporting.
brokensegue
I don't think being trans is part of their beliefs or a requirement to be a member
Trasmatta
There's a very clear agenda at the NY Post to make transgender people seem scary and evil, and part of a "leftist conspiracy". That post definitely frames it in that way.
The truth is that transgenderism and leftism are barely part of this story at all (the real story is much weirder and more complicated, and part of the wider "rationalist" movement).
slooonz
> I don't think most rationalists would consider themselves leftist
Yes they do.
https://docs.google.com/forms/d/e/1FAIpQLSf5FqX6XBJlfOShMd3U...
Trasmatta
The largest response there appears to be "liberal", which is not "leftist". US right wing media (and the current administration) likes to frame anyone that's not a hardcore Republican as a "radical leftist", but that doesn't make it true.
zozbot234
The cult does seem to target people who identify as trans - OP has some discussion of this. Not sure if that justifies calling it a "radical vegan trans cult" though. Trans folks seem to be overrepresented in rationalist communities generally, at least on the West Coast - but there may be all sorts of valid reasons for that.
saddat
We have a lot of names these days for mentally ill people
TZubiri
"Ziz believes there are two kinds of core, "good" and "nongood". "Nongood" cores are the most common (about 95% of the population)."
wellthisisgreat
Dungeons & Dragons did it better.
apsec112
This summary doc, "The Zizian Facts", is another collection of relevant information from various sources (including recent events):
https://docs.google.com/document/u/0/d/1RpAvd5TO5eMhJrdr2kz4...
expenses3
https://soundcloud.com/trueanonpod/zizian-murder-cult-1 - the latest episode of TrueAnon is about these rationalist weirdos
It is quite humorous to me, in a dark way, how the preeminent ethics movement of our times seems to spawn the most detestable behavior. How many Bentham essays and LessWrong posts do you need to read to conclude that psychological manipulation and wanton murder do not in fact contribute to the wellbeing of the world? There is a certain personality for whom rationality takes over their whole being, and they lose all ability to feel connected to others in a profound way, and at that point their behavior is simply derived from whatever is left after they do the math.