Guess I'm a Rationalist Now
405 comments · June 19, 2025
contrarian1234
jandrese
They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
There is a term for this. "Getting stuck up your own butt." It wouldn't be so bad except that said people often take on an air of absolute superiority because they used "only logic" and in their heads they cannot be wrong. Many people end up thinking like this as teenagers or 20-somethings, and most will have someone in their life who smacks them over the head and tells them to stop being so foolish; but if you have enough money and the Internet, you can insulate yourself from that kind of oversight.
troyastorino
The overlap between the Effective Altruism community and the Rationalist community is extremely high. They’re largely the same people. Effective Altruism gained a lot of early attention on LessWrong, and the pessimistic focus on AI existential risk largely stems from an EA desire to avoid “temporal-discounting” bias. The reasoning is something like: if you accept that future people count just as much as current people, and that the number of future people vastly outweighs everyone alive today (or who has ever lived), then even small probabilities of catastrophic events wiping out humanity yield enormous negative expected value. Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
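Put as back-of-the-envelope arithmetic, the structure of that expected-value argument looks something like the sketch below (all numbers are made up purely for illustration, not anyone's actual estimates):

    # Illustrative-only numbers; what matters is the shape of the argument, not the values.
    future_people = 1e16     # assumed count of potential future people
    p_extinction = 1e-6      # assumed probability of an extinction-level event
    p_risk_reduced = 0.01    # assumed fraction of that risk an intervention removes

    # Expected number of future lives "saved" by the intervention:
    expected_value = future_people * p_extinction * p_risk_reduced
    print(f"{expected_value:.0e}")  # 1e+08, dwarfing most near-term causes on this accounting

Even with a tiny probability and a modest risk reduction, the assumed size of the future drives the product to an enormous number, which is how the prioritization argument gets its force.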
People in these communities are generally quite smart, and it’s seductive to reason in a purely logical, deductive way. There is real value in thinking rigorously and in making sure you’re not beholden to commonly held beliefs. But, like you said, reality is complex, and it’s really hard to pick initial premises that capture everything relevant. The insane conclusions they get to could be avoided by re-checking & revising premises, especially when the argument is going in a direction that clashes with history, real-world experience, or basic common sense.
dasil003
Intelligence and rational thought are useful, but like any strategy they have their tradeoffs and limitations. No amount of intelligence can overcome the chaos of long time horizons, especially when we're talking about human civilization. IMHO it's reasonable to pick a long-term problem/risk and focus on solving it. But it's pure hubris to think rationality will give you anything approaching high confidence about what the biggest problems and risks actually are on a 20-50 year time horizon, let alone 200-500 years or longer.
The whole reason we even have time to think this way is because we are at the peak of an industrial civilization that has created a level of abundance that allows a lot of people a lot of time to think. But the whole situation that we live in is not stable at all, "progress" could continue, or we could hit a peak and regress. As much as we can see a lot of long-term trajectories (eg. peak oil, global warming), we really have no idea what will be the triggers and inflection points that change the social fabric in ways that are unforeseeable and quickly invalidate whatever prior assumptions all that deep thinking was resting upon. I mean 50 years ago we thought overpopulation was the biggest risk, and that thinking has completely flipped even without a major trajectory change for industrial civilization in that time.
AnthonyMouse
> Therefore, nothing can produce greater positive expected value than preventing existential risks—so working to reduce these risks becomes the highest priority.
Incidentally, the flaw in this theory is in thinking you understand what all the existential risks are.
Suppose you clock "malicious AI" as a huge risk and then hamper AI, but it turns out the bigger risk is not doing space exploration, which AI would have accelerated, because something catastrophic yet already-inevitable is going to happen to the Earth in a few hundred years and if we're not sustainably multi-planetary by then it's all over.
The thing evolution teaches us is that diversity is a group survival trait. Anybody insisting "nobody anywhere should do X" is more likely to cause an ELE than prevent one.
jdmichal
I'm not familiar with any of these communities. Is there also a general bias towards one side between "the most important thing gets the *most* resources" and "the most important thing gets *all* the resources"? Or, in other words, the most important thing is the only important thing?
IMO it's fine to pick a favorite and devote extra resources to it. But that turns less fine when one also starts working to deprive everything else of any oxygen because it's not your favorite. (And I'm aware that this criticism applies to lots of communities.)
cassepipe
I read the whole tree of responses under this comment and I could only convince myself that when people have no arguments they try to make you look bad.
Most of the criticisms are just "But they think they are better than us!" and the rest is "But sometimes they are wrong!"
I don't know about the community and couldn't care less, but their writings have brought me some almost life-saving fresh air in how to think about the world. It is very sad to me to read so many falsely elaborate responses from supposedly intelligent people having their egos hurt, but in the end it reminds me why I like rationalists and I don't like most people.
trod1234
Most of the people talking aren't actually doing so from a rational perspective.
There are a lot of vested interests seeking to discredit rational thought, since it is the basis for Western philosophy, something that all communist nations seek to undermine and destroy through 5GW.
AI's low cost has allowed these cohorts to hit the Shannon limit, where no organization can occur.
TL;DR when you see this you know communications are compromised.
Collectivists and propagandists use a whole host of dirty tricks to shut down discussion with sophistry and worse. Baseless opinion matters little. If it's backed up by sound objective measure, it matters a lot.
You don't see anyone here actually doing that because the ones that did were voted off the island, the downvotes removed their posts from view, and it was done purposefully.
smus
Feels like "they are wrong and smug" is enough reason to dislike the movement
ajkjk
Here's a theory of what's happening, both with you here in this comment section and with the rationalists in general.
Humans are generally better at perceiving threats than they are at putting those threats into words. When something seems "dangerous" abstractly, they will come up with words for why---but those words don't necessarily reflect the actual threat, because the threat might be hard to describe. Nevertheless the valence of their response reflects their actual emotion on the subject.
In this case: the rationalists, to put it mildly, creep people out. There is something "insidious" about their philosophy. And this is not a delusion on the part of the people judging them: it really does threaten them, and likely for good reason. The explanation is something like "we extrapolate from the way that rationalists think and realize that their philosophy leads to dangerous conclusions." Some of these conclusions they have already made---like valuing people far away abstractly over people next door, by trying to quantify suffering and altruism like a math problem (or to place moral weight on animals over humans, or people in the future over people today). Other conclusions are just implied, waiting to be made later. But the human mind detects them anyway as implications of the way of thinking, and reacts accordingly: thinking like this is dangerous and should be argued against.
This extrapolation is hard to put into words, so everyone who tries to express their discomfort misses the target somewhat, and then, if you are the sort of person who only takes things literally, it sounds like they are all just attacking someone out of judgment or bitterness or something instead of for real reasons. But I can't emphasize this enough: their emotions are real, they're just failing to put them into words effectively. It's a skill issue. You will understand what's happening better if you understand that this is what's going on and then try to take their emotions seriously even if they are not communicating them very well.
So that's what's going on here. But I think I can also do a decent job of describing the actual problem that people have with the rationalist mindset. It's something like this:
Humans have an innate moral intuition that "personal" morality, the kind that takes care of themselves and their family and friends and community, is supposed to be sacrosanct: people are supposed to both practice it and protect the necessity of practicing it. We simply can't trust the world to be a safe place if people don't think of looking out for the people around them as a fundamental moral duty. And once those people are safe, protecting more people, such as a tribe or a nation or all of humanity or all of the planet, becomes permissible.
Sometimes people don't or can't practice this protection for various reasons, and that's fine; it's a local problem that can be solved locally. But it's very insidious to turn around and justify not practicing it: "actually it's better not to behave morally; it's better to allocate resources to people far away; it's better to dedicate ourselves to fighting nebulous threats like AI safety or other X-risks instead of our neighbors". It's fine to work on important far-away problems once local problems are solved, if that's what you want. But it can't take priority. But to work on global numbers-game problems instead of local problems, and to justify that with arguments, and to try to convince other people to also do that---that's dangerous as hell. It proves too much: it argues that humans at large ought to dismantle their personal moralities in favor of processing the world like a paperclip-maximizing robot. And that is exactly as dangerous as a paperclip-maximizing robot is. Just at a slower timescale.
(No surprise that this movement is popular among social outcasts, for whom local morality is going to feel less important, as well as economically-insulated well-to-do tech-nerd types who are less likely to be exposed to suffering in their immediate communities.)
Ironically, paperclip-maximizing robots are exactly the thing that the rationalists are so worried about. They are a group of people who missed, and then disavowed, and now advocate disavowing, this "personal" morality, and unsurprisingly they view the world through a lens that doesn't include personal morality, which means mostly being worried about problems of the same sort. But it provokes a strong negative reaction from everyone who thinks about the world in terms of that personal duty to safety.
noname120
> They remind me of the "Effective Altruism" crowd who get completely wound up in these hypothetical logical thought exercises and end up coming to insane conclusions that they feel trapped in because they got there using pure logic. Not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value.
Do you have examples of that? I have a different perception, most of the EAs I've met are very grounded and sharp.
For example the most recent issue of their newsletter: https://us8.campaign-archive.com/?e=7023019c13&u=52b028e7f79...
I'm not sure where there are any “hypothetical logical thought exercises” that “end up coming to insane conclusions” in there.
For the first part where you say “not realizing that their initial conditions were highly artificial so any conclusion they reach is only of academic value” this is quite the opposite of my experience with them. They are very receptive to criticism and reconsider their point of view in reaction to that.
They are generally well-aware of the limits of data-driven initiatives and the dangers of indulging in purely abstract thinking that can lead to conclusions that indeed don't make sense.
TimTheTinker
> their initial conditions were highly artificial
There has to be (or ought to be) a name for this kind of epistemological fallacy, where in pursuit of truth, the pursuit of logical sophistication and soundness between starting assumptions (or first principles) and conclusions becomes functionally way more important than carefully evaluating and thoughtfully choosing the right starting assumptions (and being willing to change them when they are found to be inconsistent with sound observation and interpretation).
nyeah
Yes, there's a name for it. They're dumbasses.
“[...] Clevinger was one of those people with lots of intelligence and no brains, and everyone knew it except those who soon found it out. In short, he was a dope." - Joseph Heller, Catch-22 https://www.goodreads.com/quotes/7522733-in-short-clevinger-...
OtherShrezzing
They _are_ the effective altruism crowd.
HPsquared
People who confuse the map for the territory.
not_your_mentat
The notion that our moral obligation somehow demands we reduce the suffering of wild animals in an ecosystem, living their lives as they have done since predation evolved and as they will do long after humans have ceased to be, is such a wild misunderstanding of who we are and what we are and what the universe is. I love my Bay Area friends. To quote the great Gwen Stefani, “This sh!t is bananas.”
UncleOxidant
> They remind me of the "Effective Altruism" crowd
Isn't there a lot of overlap between the two groups?
I recently read a great book that examines these various groups and their commonality: More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity by Adam Becker. Highly recommended.
mitthrowaway2
> They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is".
Aren't these the people who started the trend of writing things like "epistemic status: mostly speculation" on their blog posts? And writing essays about the dangers of overconfidence? And measuring how often their predictions turn out wrong? And maintaining webpages titled "list of things I was wrong about"?
Are you sure you're not painting this group with an overly-broad brush?
Certhas
I think this is a valid point. But to some degree both can be true. I often felt when reading some of these types of texts: Wait a second, there is a wealth of thinking on these topics out there; you are not at all situating all your elaborate thinking in a broader context. And there absolutely is willingness to be challenged, and (maybe less so) a willingness to be wrong. But there also is an arrogance that "we are the ones thinking about this rationally, and we will figure this out". As if people hadn't been thinking and discussing and (verbally and literally) fighting over all sorts of adjacent and similar topics in philosophy and sociology and anthropology and ... clubs and seminars forever. And importantly, maybe there also isn't as much taste for understanding the limits of vigorous discussion and rational deduction. Adorno and Horkheimer posit a dialectic of rationality and enlightenment; Habermas tries to rebuild rational discourse by analyzing its preconditions. Yet for all the vigorous intellectualism of the rationalists, none of that ever seems to feature even in passing (maybe I have simply missed it...).
And I have definitely encountered "if you just listen to me properly you will understand that I am right, because I have derived my conclusions rationally" in in-person interactions.
On balance, I'd rather have some arrogance and willingness to be debated and be wrong over a timid need to defer to centuries of established thought, though. The people I've met in person I've always been happy to hang out with and talk to.
mitthrowaway2
That's a fair point. Speaking only for myself, I think I fail to understand why it's important to situate philosophical discussions in the context of all the previous philosophers who have expressed related ideas, rather than simply discussing the ideas in isolation.
I remember as a child coming to the same "if reality is a deception, at least I must exist to be deceived" conclusion that Descartes did, well before I had heard of Descartes. (I don't think this makes me special, it's just a natural conclusion anyone will reach if they ponder the subject). I think it's harmless for me to discuss that idea in public without someone saying "you need to read Descartes before you can talk about this".
I also find my personal ethics are strongly aligned with what Kant espoused. But most people I talk to are not academic philosophers and have not read Kant, so when I want to explain my morals, I am better off explaining the ideas themselves than talking about Kant, which would be a distraction anyway because I didn't learn them from Kant; we just arrived at the same conclusions. If I'm talking with a philosopher I can just say "I'm a Kantian" as shorthand, but that's really just jargon for people who already know what I'm talking about.
I also think that while it would be unusual for someone to (for example) write a guide to understanding relativity without once mentioning Einstein, it also wouldn't be a fundamental flaw.
(But I agree there's certainly no excuse for someone asserting that they're right because they're rational!)
voidhorse
You're spot on here, and I think this is probably also why they appeal to programmers and people in software.
I find a lot of people in software have an insufferable tendency to simply ignore entire bodies of prior art, prior research, etc. outside of maybe computer science (and even that can be rare), and yet they act as though they are the most studied participants in the subject, proudly proclaiming their "genius insights" that are essentially restatements of basic facts in any given field that they would have learned if they just bothered to, you know, actually do research and put aside their egos for half a second to wonder if maybe the eons of human activity prior to their precious existence might have led to some decent knowledge.
bakuninsbart
Weirdly enough, both can be true. I was tangentially involved in EA in the early days, and have some friends who were more involved. Lots of interesting, really cool stuff going on, but there was always latent insecurity paired with overconfidence and elitism as is typical in young nerd circles.
When big money got involved, the tone shifted a lot. One phrase that really stuck with me is "exceptional talent". Everyone in EA was suddenly talking about finding, involving, hiring exceptional talent at a time where there was more than enough money going around to give some to us mediocre people as well.
In the case of EA in particular, circlejerks lead to idiotic ideas even when paired with rationalist rhetoric, so they bought mansions for team building (how else are you getting exceptional talent), praised crypto (because they are funding the best and brightest), and started caring a lot about shrimp welfare (no one else does).
salynchnew
> caring a lot about shrimp welfare (no one else does).
Ah. I guess they are working out ecology through first principles?
I feel like a lot of the criticism of EA and rationalism does boil down to some kind of general criticism of naivete and entitlement, which... is probably true when applied to lots of people, regardless of whether they espouse these ideas or not.
It's also easier to criticize obviously doomed/misguided efforts at making the world a better place than to think deeply about how many of the pressing modern day problems (environmental issues, extinction, human suffering, etc.) also seem to be completely intractable, when analyzed in terms of the average individual's ability to take action. I think some criticism of EA or rationalism is also a reaction to a creeping unspoken consensus that "things are only going to get worse" in the future.
mitthrowaway2
I don't think this validates the criticism that "they don't really ever show a sense of[...] maybe I'm wrong".
I think that sentence would be a fair description of certain individuals in the EA community, especially SBF, but that is not the same thing as saying that rationalists don't ever express epistemic uncertainty, when on average they spend more words on that than just about any other group I can think of.
ToValueFunfetti
>they bought mansions for team building
They bought one mansion to host fundraisers with the super-rich, which I believe is an important correction. You might disagree with that reasoning as well, but it's definitely not as described.
gjm11
> both can be true
Yes! It can be true both that rationalists tend, more than almost any other group, to admit and try to take account of their uncertainty about things they say and that it's fun to dunk on them for being arrogant and always assuming they're 100% right!
hiddencost
They're behind Anthropic and were behind OpenAI being a nonprofit. They're behind the friendly AI movement and effective altruism.
They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").
They're responsible for moving all the AI safety funding away from disparate impact measures to "save us from skynet" fantasies.
mitthrowaway2
I don't see how this is a response to what I wrote. Can you explain?
NoGravitas
I've always seen the breathless Singularitarian worrying about AI Alignment as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman, from algorithmic bias, to policy-washing, to energy costs and acceleration of wealth concentration. It also leads to so-called longtermism - discounting the benefits of solving current real problems and focusing entirely on solving a hypothetical one that you think will someday make them all irrelevant.
tuveson
My feeling has been that it's a lot of people who work on B2B SaaS and are sad they never got the chance to work on the Manhattan Project. Be around the smartest people in your field. Contribute something significant (but dangerous! And we need to talk about it!) to humanity. But yeah, computer science in the 21st century has not turned out to be as interesting as that. Maybe just as important! But Jeff Bezos important, not Richard Feynman important.
thom
The Singularitarians were breathlessly worrying 20+ years ago, when AI was absolute dogshit - Eliezer once stated that Doug Lenat was incautious in launching Eurisko because it could've gone through a hard takeoff. I don't think it's just an act to launder their evil plans, none of which at the time worked.
salynchnew
Yeah, people were generally terrified of this stuff back before you could make money off of it.
NoMoreNicksLeft
> ...as a smokescreen to distract people from thinking clearly about the more pedestrian hazards of AI that isn't self-improving or superhuman,
Anything that can't be self-improving or superhuman almost certainly isn't worthy of the moniker "AI". A true AI will be born into a world that has already unlocked the principles of intelligence. Humans in that world would be capable themselves of improving AI (slowly), but the AI itself will (presumably) run on silicon and be a quick thinker. It will be able to self-improve, rapidly at first, and then more rapidly as its increased intelligence allows for even quicker rates of improvement. And if not superhuman initially, it would soon become so.
We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
danans
> We don't even have anything resembling real AI at the moment. Generative models are probably some blind alley.
I think that the OP's point was that it doesn't matter whether it's "real AI" or not. Even if it's just a glorified auto-correct system, it's one that has the clear potential to overturn our information/communication systems and our assumptions about individuals' economic value.
philipov
yep, the biggest threat posed by AI comes from the capitalists who want to own it.
impossiblefork
I actually think the people developing AI might well not get rich off it.
Instead, unless there's a single winner, we will probably see the knowledge of how to train big LLMs and make them perform well diffuse throughout a large pool of AI researchers, with the hardware to train models reasonably close to the SotA becoming quite accessible.
I think the people who will benefit will be the owners of ordinary but hard-to-dislodge software firms, maybe those that have a hardware component. Maybe firms like Apple, maybe car manufacturers. Pure software firms might end up having AI assisted programmers as competitors instead, pushing margins down.
This is of course pretty speculative, and it's not reality yet, since firms like Cursor etc. have high valuations, but I think this is what you'd get from the probable pressure if it keeps getting better.
parpfish
Or the propagandists that use it
felipeerias
The problem with trying to reason everything from first principles is that most things didn't actually come about that way.
Both our biology and other complex human affairs like societies and cultures evolved organically over long periods of time, responding to their environments and their competitors, building bit by bit, sometimes with an explicit goal but often without one.
One can learn a lot from unicellular organisms, but probably won't be able to reason from them all the way to an elephant. At best, if we are lucky, we can reason back from the elephant.
ImaCake
>The problem with trying to reason everything from first principles is that most things didn't actually come about that way.
This is true for science and rationalism itself. Part of the problem is that "being rational" is a social fashion or fad. Science is immensely useful because it produces real results, but we don't really do it for a rational reason - we do it for reasons of cultural and social pressures.
We would get further with rationalism if we remembered or maybe admitted that we do it for reasons that make sense only in a complex social world.
baxtr
Yes, and if you read Popper that’s exactly how he defined rationality / the scientific method: to solve problems of life.
lsp
A lot of people really need to be reminded of this.
I originally came to this critique via Heidegger, who argues that enlightenment thinking essentially forgets / obscures Being itself, a specific mode of which you experience at this very moment as you read this comment, which is really the basis of everything that we know, including science, technology, and rationality. It seems important to recover and deepen this understanding if we are to have any hope of managing science and technology in a way that is actually beneficial to humans.
loose-cannon
Reducibility is usually a goal of intellectual pursuits? I don't see that as a fault.
nyeah
Ok. A lot of things are very 'reducible' but information is lost. You can't extend back from the reduction to the original domain.
Reduce a computer's behavior to its hardware design, state of RAM, and physical laws. All those voltages make no sense until you come up with the idea of stored instructions, division of the bits into some kind of memory space, etc. You may say, you can predict the future of the RAM. And that's true. But if you can't read the messages the computer prints out, then you're still doing circuits, not software.
Is that reductionist approach providing valuable insight? YES! Is it the whole picture? No.
This warning isn't new, and it's very mainstream. https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_differen...
nyrikki
'Reducibility' is a property that, if present, makes problems tractable or possibly practical.
What you are mentioning is called western reductionism by some.
In the western world it does map to Plato etc, but it is also a problem if you believe everything is reducible.
Under the assumption that all models are wrong, but some are useful, it helps you find useful models.
If you consider Laplacian determinism as a proxy for reductionism, Cantor diagonalization and the standard model of QM are counterexamples.
Russell's paradox is another lens into the limits of Plato, which the PEM assumption is based on.
Those common a priori assumptions have value, but are assumptions which may not hold for any particular problem.
jltsiren
"Reductionist" is usually used as an insult. Many people engaged in intellectual pursuits believe that reductionism is not a useful approach to studying various topics. You may argue otherwise, but then you are on a slippery slope towards politics and culture wars.
colordrops
What the person you are replying to is saying is that some things are not reducible, i.e. the vast array of complexity and detail is all relevant.
Avicebron
Yeah, the "rational" part always seemed a smokescreen for the ability to produce and ingest their own and their associates' methane gases.
I get it, I enjoyed being told I'm a super genius always right quantum physicist mathematician by the girls at Stanford too. But holy hell man, have some class, maybe consider there's more good to be done in rural Indiana getting some dirt under those nails..
shermantanktop
The meta with these people is “my brilliance comes with an ego that others must cater to.”
I find it sadly hilarious to watch academic types fight over meaningless scraps of recognition like toddlers wrestling for a toy.
That said, I enjoy some of the rationalist blog content and find it thoughtful, up to the point where they bravely allow their chain of reasoning to justify antisocial ideas.
dkarl
It's a conflict as old as time. What do you do when an argument leads to an unexpected conclusion? I think there are two good responses: "There's something going on here, so let's dig into it," or, "There's something going on here, but I'm not going to make time to dig into it." Both equally valid.
In real life, the conversation too often ends up being, "This has to be wrong, and you're an obnoxious nerd for bothering me with it," versus, "You don't understand my argument, so I am smarter, and my conclusions are brilliantly subversive."
bilbo0s
Might kind of point to real-life people having too much of what is now called "rationality", and very little of what used to be called "wisdom"?
Cthulhu_
It feels like a shield of sorts, "I am a rationalist therefore my opinion has no emotional load, it's just facts bro how dare you get upset at me telling xyz is such-and-such you are being irrational do your own research"
but I don't know enough about it, I'm just trolling.
iNic
Every community has a long list of etiquettes, rules and shared knowledge that is assumed and generally not spelled out explicitly. One of the core assumptions of the rationalist community is that every statement has uncertainty unless you explicitly spell out that you are certain! This came about as a matter of practicality, as it would be inconvenient to preempt every other sentence with "I'm uncertain about this". Many discussions you will see have the flavor of "strong opinions, lightly held" for this reason.
hiAndrewQuinn
>Maybe it's actually going to be rather benign and more boring than expected
Maybe, but generally speaking, if I think people are playing around with technology which a lot of smart people think might end humanity as we know it, I would want them to stop until we are really sure it won't. Like, "less than a one in a million chance" sure.
Those are big stakes. I would have opposed the Manhattan Project on the same principle had I been born 100 years earlier, when people were worried the bomb might ignite the world's atmosphere. I oppose a lot of gain-of-function virus research today too.
That's not a point you have to be a rationalist to defend. I don't consider myself one, and I wasn't convinced by them of this - I was convinced by Nick Bostrom's book Superintelligence, which lays out his case with most of the assumptions he brings to the table laid bare. Way more in the style of Euclid or Hobbes than ... whatever that is.
Above all I suspect that the Internet rationalists are basically a 30 year long campaign of "any publicity is good publicity" when it comes to existential risk from superintelligence, and for what it's worth, it seems to have worked. I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
s1mplicissimus
> I don't hear people dismiss these risks very often as "You've just been reading too many science fiction novels" these days, which would have been the default response back in the 90s or 2000s.
I've recently stumbled across the theory that "it's gonna go away, just keep your head down" is the crisis response that has been taught to the generation that lived through the cold war, so that's how they act. That bit was in regards to climate change, but I can easily see it apply to AI as well (even though I personally believe that the whole "AI eat world" arc is only so popular due to marketing efforts of the corresponding industry)
hiAndrewQuinn
It's possible, but I think that's just a general human response when you feel like you're trapped between a rock and a hard place.
I don't buy the marketing angle, because it doesn't actually make sense to me. Fear draws eyeballs, sure, but it just seems otherwise nakedly counterproductive, like a burger chain advertising itself on the brutality of its factory farms.
socalgal2
Do you think opposing the Manhattan Project would have led to a better world?
Note: my assumption is not that the bomb would not have been developed, only that by opposing the Manhattan Project the USA would not have developed it first.
hiAndrewQuinn
My answer is yes, with low-moderate certainty. I still think the USA would have developed it first, and I think this is what is suggested to us by the GDP trends of the US versus basically everywhere else post-WW2.
Take this all with more than a few grains of salt. I am by no means an expert in this territory. But I don't shy away from thinking about something just because I start out sounding like an idiot. Also take into account this is post-hoc, and 1940 Manhattan Project me would obviously have had much, much less information to work with about how things actually panned out. My answer to this question should be seen as separate to the question of whether I think dodging the Manhattan Project would have been a good bet, so to speak.
Most historians agree that Japan was going to lose one way or another by that point in the war. Truman argued that dropping the bomb killed fewer people in Japan than continuing, which I agree with, but that's a relatively small factor in the calculation.
The much bigger factor is that the success of the Manhattan Project as an ultimate existence proof for the possibility of such weaponry almost certainly galvanized the Soviet Union to get on the path of building it themselves much more aggressively. A Cold War where one side takes substantially longer to get to nukes is mostly an obvious x-risk win. Counterfactual worlds can never be seen with certainty, but it wouldn't surprise me if the mere existence proof led the USSR to actually create their own atomic weapons a decade faster than they would have otherwise, by e.g. motivating Stalin to actually care about what all those eggheads were up to (much to the terror of said eggheads).
This is a bad argument to advance when we're arguing about e.g. the invention of calculus, which as you'll recall was coinvented in at least 2 places (Newton with fluxions, Leibniz with infinitesimals I think), but calculus was the kind of thing that could be invented by one smart guy in his home office. It's a much more believable one when the only actors who could have made it were huge state-sponsored laboratories in the US and the USSR.
If you buy that, that's 5 to 10 extra years the US would have had in order to do something like the Manhattan Project, but in much more controlled, peace-time environments. The atmosphere-ignition prior would have been stamped out pretty quickly by later calculations of physicists to the contrary, and after that research would have gotten back to full steam ahead. I think the counterfactual US would have gotten onto the atom bomb in the early 1950s at the absolute latest with the talent they had in an MP-less world. Just with much greater safety protocols, and without the Russians learning of it in such blatant fashion. Our abilities to detect such weapons being developed elsewhere would likely have also stayed far ahead of the Russians. You could easily imagine a situation where the Russians finally create a weapon in 1960 that was almost as powerful as what we had cooked up by 1950.
Then you're more or less back to an old-fashioned deterrence model, with the twist that the Russians don't actually know exactly how powerful the weapons the US has developed are. This is an absolute good: You can always choose to reveal just a lower bound of how powerful your side is, if you think you need to, or you can choose to remain totally cloaked in darkness. If you buy the narrative that the US were "the good guys" (I do!) and wouldn't risk armaggedon just because they had the upper hand, then this seems like it can only make the future arc of the (already shorter) Cold War all the safer.
I am assuming Gorbachev or someone still called this whole circus off around the late 80s-early 90s. Gotta trim the butterfly effect somewhere.
resters
Not meaning to be too direct, but you are misinterpreting a lot about rationalists.
In my view, rationalists are often "Bayesian" in that they are constantly looking for updates to their model. Consider that the default approach for most humans is to believe a variety of things and to feel indignant if someone holds differing views (the adage never discuss religion or politics). If one adopts the perspective that their own views might be wrong, one must find a balance between confidently acting on a belief and being open to the belief being overturned or debunked (by experience, by argument, etc.).
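For concreteness, a minimal sketch of that updating process, with a made-up prior and likelihoods purely for illustration:

    def update(prior, p_evidence_if_true, p_evidence_if_false):
        """Bayes' rule: return P(belief | evidence)."""
        numerator = prior * p_evidence_if_true
        return numerator / (numerator + (1 - prior) * p_evidence_if_false)

    belief = 0.5  # start undecided
    for _ in range(3):  # three observations, each 4x as likely if the belief is true
        belief = update(belief, 0.8, 0.2)
    print(round(belief, 3))  # ~0.985: confident, but never certain

The belief is held as a probability that moves with evidence rather than as a fixed position to be defended, which is the posture being described.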
Most rationalists I've met enjoy the process of updating or discarding beliefs in favor of ones they consider more correct. But to be fair to one's own prior attempts at rationality, one should try reasonably hard to defend one's current beliefs so that they can be fully and soundly replaced if necessary, without leaving any doubt that they were insufficiently supported, etc.
To many people (the kind of people who never discuss religion or politics) all this is very uncomfortable and reveals that rationalists are egotistical and lacking in humility. Nothing could be further from the truth. It takes tremendous humility to assume that one's own beliefs are quite possibly wrong. The very name of Eliezer's blog "Less Wrong" makes this humility quite clear. Scott Alexander is also very open with his priors and known biases / foci, and I view his writing as primarily focusing on big picture epistemological patterns that most people end up overlooking because most people are busy, etc.
One final note about the AI-dystopianism common among rationalists -- we really don't know yet what the outcome will be. I personally am a big fan of AI, but we as humans do not remotely understand the social/linguistic/memetic environment well enough to know for sure how AI will impact our society and culture. My guess is that it will amplify rather than mitigate differences in innate intelligence in humans, but that's a tangent.
I think to some, the rationalist movement feels like historical "logical positivist" movements that were reductionist and socially Darwinian. While it is obvious to me that the rationalist movement is nothing of the sort, some people view the word "rationalist" as itself full of the implication that self-proclaimed rationalists consider themselves superior at reasoning. In fact they simply employ a heuristic for considering their own rationality over time and attempting to maximize it -- this includes listening to "gut feelings" and hunches, etc., in case you didn't realize.
ajkjk
not to be too cynical here, but I would say that the most-apt description of the rationalists is that they are people who would say they are constantly looking for updates to their models. But that they are not necessarily doing it appreciably more than anyone else is. They will do it freely on unimportant things---they tend to be smart people who view the world intellectually and so they are free to toss or keep factual beliefs about things, of which they have many, with little fanfare, and sure, they get points for that. But they are as rooted in their moral beliefs as anybody else is. Maybe more than other people since they have such a strong intellectual edifice that justifies not changing their minds, because they believe that their beliefs follow from nearly irrefutable calculations.
matthewdgreen
My impression is that many rationalists enjoy believing that they update their beliefs, but in practice they're human and just as attached to preconceived notions as anyone else. But if you go around telling everyone that updating is your super-power, you're going to be a lot less humble about your own failures to do so.
If you want to see how human and tribal rationalists are, go criticize the movement as an outsider. Or try to write a mildly critical NYT piece about them and watch how they react.
thom
Yes, I've never met anyone who stated they have "strong opinions, weakly held" who wasn't A) some kind of arsehole and B) lying.
mathattack
Logic is an awesome tool that took us from Greek philosophers to the gates on our computers. The challenge with pure rationalism is checking the first principles that the thinking comes from. Logic can lead you astray if the principles are wrong, or you miss the complexity along the way.
On the missing first principles, look at Aristotle. One of history's greatest logicians came to many false conclusions.
On missing complexity, note that Natural Selection came from empirical analysis rather than first principles thinking. (It could have come from the latter, but was too complex) [1]
This doesn't discount logic, it just highlights that answers should always come with provisional humility.
And I'm still a superfan of Scott Aaronson.
[1] https://www.wired.com/story/aristotle-was-wrong-very-wrong-b...
kragen
The ‘rationalist’ group being discussed here aren't Cartesian rationalists, who dismissed empiricism; rather, they're Bayesian empiricists. Bayesian probability turns out to be precisely the unique extension of Boolean logic to continuous real probability that Aristotle (nominally an empiricist!) was lacking. (I think they call themselves “rationalists” because of the ideal of a “rational Bayesian agent” in economics.)
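A small illustration of that "extension of Boolean logic" claim (a toy sketch with arbitrary numbers, not a proof of the uniqueness result): when the probabilities involved are pinned to 0 or 1, Bayes' rule reproduces ordinary deductive inference, and fractional values interpolate between the two.

    def posterior(prior, p_e_if_h, p_e_if_not_h):
        """P(H | E) by Bayes' rule."""
        num = prior * p_e_if_h
        return num / (num + (1 - prior) * p_e_if_not_h)

    # A hypothesis already held with certainty cannot be moved by any evidence:
    print(posterior(1.0, 0.3, 0.9))  # 1.0
    # Evidence that is impossible unless H holds forces H, exactly as in Boolean logic:
    print(posterior(0.5, 0.7, 0.0))  # 1.0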
However, they have a slogan, “One does not simply reason over the joint conditional probability distribution of the universe.” Which is to say, AIXI is uncomputable, and even AIXI can only reason over computable probability distributions!
edwardbernays
Logic is the study of what is true, and also what is provable.
In the most ideal circumstances, these are the same. Logic has been decomposed into model theory (the study of what is true) and proof theory (the study of what is provable). So much of modern day rationalism is unmoored proof theory. Many of them would do well to read Kant's "The Critique of Pure Reason."
Unfortunately, in the very complex systems we often deal with, what is true may not be provable and many things which are provable may not be true. This is why it's equally as important to hone your skills of discernment, and practice reckoning as well as reasoning. I think of it as hearing "a ring of truth," but this is obviously unfalsifiable and I must remain skeptical against myself when I believe I hear this. It should be a guide toward deeper investigation, not the final destination.
Many people are led astray by thinking. It is seductive. It should be more commonly said that thinking is but a conscious stumbling block on the way to unconscious perfection.
eth0up
>provisional humility.
I hope this becomes the first ever meme with some value. We need a cult... of Provisional Humility.
Must. Increase. The. pH
jrm4
Yup, can't stress the word "tool" enough.
It's a "tool," it's a not a "magic window into absolute truth."
Tools can be good for a job, or bad. Carry on.
gooseus
I've never thought ill of Scott Aaronson and have often admired him and his work when I stumble across it.
However, reading this article about all these people at their "Galt's Gulch", I thought — "oh, I guess he's a rhinoceros now"
https://en.wikipedia.org/wiki/Rhinoceros_(play)
Here's a bad joke for you all — What's the difference between a "rationalist" and "rationalizer"? Only the incentives.
NoGravitas
I have always considered Scott Aaronson the least bad of the big-name rationalists. Which makes it slightly funny that he didn't realize he was one until Scott Siskind told him he was.
wizzwizz4
Reminds me of Simone de Beauvoir and feminism. She wrote the book on (early) feminism, yet didn't consider herself a feminist until much later.
dcminter
Upvote for the play link - that's interesting and I hadn't heard of it before. Worthy of a top-level post IMO.
t_mann
> “You’re [X]?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
> “Yes,” I replied, not bothering to correct the “physicist” part.
Didn't read much beyond that part. He'll fit right in with the rationalist crowd...
johnfn
To be honest, if I encountered Scott Aaronson in the wild I would probably react the same way. The guy is super smart and thoughtful, and can write more coherently about quantum computing than anyone else I'm aware of.
NooneAtAll3
if only he stayed silent on politics...
simianparrot
No actual person talks like that - and if they really did, they've taken on the role of a fictional character. Which says a lot about the clientele either way.
I skimmed a bit here and there after that but this comes off as plain grandiosity. Even the title is a line you can imagine a hollywood character speaking out loud as they look into the camera, before giving a smug smirk.
FeteCommuniste
I assumed that the stuff in quotes was a summary of the general gist of the conversations he had, not a word for word quote.
riffraff
I don't think GP objects to the literalness, as much as to the "I am known for always being right and I acknowledge it", which comes off as... not humble.
kragen
Why would you comment on the post if you stopped reading near its beginning? How could your comments on it conceivably be of any value? It sounds like you're engaging in precisely the kind of shallow dismissal the site guidelines prohibit.
JohnMakin
Aren't you doing the same thing?
kragen
No, I read the comment in full, analyzed its reasoning quality, elaborated on the self-undermining epistemological implications of its content, and then related that to the epistemic and discourse norms we aspire to here. My dismissal of it is anything but shallow, though I am of course open to hearing counterarguments, which you have fallen short of offering.
junon
I got to that part, thought it was a joke, and then... it wasn't.
Stopped reading thereafter. Nobody speaking like this will have anything I want to hear.
joenot443
Scott's done a lot of really excellent blogging in the past. Truthfully, I think you risk depriving yourself of great writing if you're willing to write off an author because you didn't like one sentence.
GRRM has famously written some pretty awkward sentences, but it'd be a shame if someone turned down his work for that alone.
derangedHorse
Is it not a joke? I’m pretty sure it was.
lcnPylGDnU4H9OF
It doesn’t really read like a joke, but maybe. Regardless, I guess I can at least be another voice saying it didn’t land. It reads like someone literally said that to him verbatim and he literally replied with a simple, “Yes.” (That said, while it seems charitable to assume it was a joke, that doesn’t mean it’s wrong to assume so.)
myko
I laughed, definitely read that way to me
IshKebab
I think the fact that we aren't sure says a lot!
alphan0n
If that was a joke, all of it is.
*Guess I’m a rationalist now.
dcminter
Also...
> they gave off some (not all) of the vibes of a cult
...after describing his visit with an atmosphere that sounds extremely cult-like.
jcranmer
The podcast Behind the Bastards described Rationalism not as a cult but as the fertile soil that is perfect for growing cults, leading to the development of cults like the Zizians (both the Rationalists and the Zizians are at pains to emphasize their mutual hostility to one another, but if you're not part of either movement, it's pretty clear how Rationalism can lead to something like the Zizians).
ARandumGuy
At least one cult originates from the Rationalist movement, the Zizians [1]. A cult that straight up murdered at least four people. And while the Zizian belief system is certainly more extreme than mainstream Rationalist beliefs, it's not that much more extreme.
For more info, the Behind the Bastards podcast [2] did a pretty good series on how the Zizians sprung up out of the Bay area Rationalist scene. I'd highly recommend giving it a listen if you want a non-rationalist perspective on the Rationalist movement.
[1]: https://en.wikipedia.org/wiki/Zizians [2]: https://www.iheart.com/podcast/105-behind-the-bastards-29236...
cubefox
> At least one cult originates from the Rationalist movement, the Zizians [1]. A cult that straight up murdered at least four people. And while the Zizian belief system is certainly more extreme than mainstream Rationalist beliefs, it's not that much more extreme.
Ziz did not consider himself ("herself") a rationalist, and mainstream rationalism is of course not extreme at all. They didn't murder any people. They are merely people interested in AI and philosophy and math.
wizzwizz4
No, Guru Eliezer Yudkowsky wrote an essay about how people asking "This isn’t a cult, is it?" bugs him, so it's fine actually. https://www.readthesequences.com/Cultish-Countercultishness
NoGravitas
Hank Hill: Are y'all with the cult?
Cult member: It's not a cult! It's an organization that promotes love and..
Hank Hill: This is it.
dcminter
Extreme eagerness to disavow accusations of cultishness ... doth the lady protest too much perhaps? My hobby is occasionally compared to a cult. The typical reaction of an adherent to this accusation is generally "Heh, yeah, totally a cult."
Edit: Oh, but you call him "Guru" ... so on reflection you were probably (?) making the same point... (whoosh, sorry).
James_K
I made it to “liberal zionist” before quitting.
samuel
I'm currently reading Yudkowsky's "Rationality: From AI to Zombies". Not my first try, since the book is just a collection of blog posts and I found it a bit hard to swallow due to its repetitiveness, so I gave up after the first 50 "chapters" the first time I tried. Now I'm enjoying it way more, probably because I'm more interested in the topic now.
For those who haven't delved (ha!) into his work or have been put off by the cultish looks, I have to say that he's genuinely onto something. There are a lot of practical ideas that are pretty useful for everyday thinking ("Belief in Belief", "Emergence", "Generalizing from fiction", etc...).
For example, I recall being in lot of arguments that are purely "semantical" in nature. You seem to disagree about something but it's just that both sides aren't really referring to the same phenomenon. The source of the disagreement is just using the same word for different, but related, "objects". This is something that seems obvious, but the kind of thing you only realize in retrospect, and I think I'm much better equipped now to be aware of it in real time.
I recommend giving it a try.
Bjartr
Yeah, the whole community side to rationality is, at best, questionable.
But the tools of thought that the literature describes are invaluable with one very important caveat.
The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It is an incredibly easy mistake to make. To make effective use of the tools, you need to become more humble than before you were using them or you just turn into an asshole who can't be reasoned with.
If you're saying "well actually, I'm right" more often than "oh wow, maybe I'm wrong", you've failed as a rationalist.
wannabebarista
This reminds me of undergrad philosophy courses. After the intro logic/critical thinking course, some students can't resist seeing affirming-the-consequent and post hoc fallacies everywhere (even if more are imagined than not).
wizzwizz4
Chapter 67. https://www.readthesequences.com/Knowing-About-Biases-Can-Hu... (And since it's in the book, and people know about it, obviously they're not doing it themselves.)
FeepingCreature
Also the Valley of Bad Rationality tag. https://www.lesswrong.com/w/valley-of-bad-rationality
the_af
> The moment you think something like "I am more correct than this other person because I am a rationalist" is the moment you fail as a rationalist.
It's very telling that some of them went full "false modesty" by naming sites like "LessWrong", when you just know they actually mean "MoreRight".
And in reality, it's just a bunch of "grown teenagers" posting their pet theories online and thinking themselves "big thinkers".
mariusor
> you just know they actually mean "MoreRight".
I'm not affiliated with the rationalist community, but I always interpreted "Less Wrong" as word-play on how "being right" is an absolute binary: you can either be right or not be right, while "being wrong" can cover a very large gradient.
I expect the community wanted to emphasize how people employing the specific kind of Bayesian iterative reasoning they were proselytizing would arrive at slightly lesser degrees of wrong than the other kinds that "normal" people would use.
If I'm right, your assertion wouldn't be totally inaccurate, but I think it might be missing the actual point.
greener_grass
I think there is an arbitrage going on where STEM types who lack background in philosophy, literature, history are super impressed by basic ideas from those subjects being presented to them by stealth.
Not saying this is you, but these topics have been discussed for thousands of years, so it should at least be surprising that Yudkowsky is breaking new ground.
sixo
To the Stem-enlightened mind, the classical understanding and pedagogy of such ideas is underwhelming, vague, and riddled with language-game problems, compared to the precision a mathematically-rooted idea has.
They're rederiving all this stuff not out of obstinacy, but because they prefer it. I don't really identify with rationalism per se, but I'm with them on this--the humanities are over-cooked, and a humanities education tends to be a tedious slog through outmoded ideas divorced from reality
biofox
If you contextualise the outmoded ideas as part of the Great Conversation [1], and the story of how we reached our current understanding, rather than objective statements of fact, then they become a lot more valuable and worthy of study.
FeepingCreature
In AI finetuning, there's a theory that the model already contains the right ideas and skills, and the finetuning just raises them to prominence. Similarly in philosophic pedagogy, there's huge value in taking ideas that are correct but unintuitive and maybe have 30% buy-in and saying "actually, this is obviously correct, also here's an analysis of why you wouldn't believe it anyway and how you have to think to become able to believe it". That's most of what the Sequences are: they take from every field of philosophy the ideas that are actually correct, and say "okay actually, we don't need to debate this anymore, this just seems to be the truth because so-and-so." (Though the comments section vociferously disagrees.)
And it turns out if you do this, you can discard 90% of philosophy as historical detritus. You're still taking ideas from philosophy, but which ideas matters, and how you present them matters. The massive advantage of the Sequences is they have justified and well-defended confidence where appropriate. And if you manage to pick the right answers again and again, you get a system that actually hangs together, and IMO it's to philosophy's detriment that it doesn't do this itself much more aggressively.
For instance, 60% of philosophers are compatibilists. Compatibilism is really obviously correct. "What are you complaining about, that's a majority, isn't that good?" What is wrong with those 40% though? If you're in those 40%, what arguments may convince you? Repeat to taste.
elt895
Are there other philosophy- or history-grounded sources that are comparable? If so, I'd love some recommendations. Yudkowsky and others have their problems, but their texts make interesting points, are relatively easy to read and understand, and you can clearly see which real issues they're addressing. From my experience, alternatives tend to fall into two categories: 1. Genuine classical philosophy, which is usually incredibly hard to read; after 50 pages I have no idea what the author is even talking about anymore. 2. Basically self-help books that take one or a very few ideas and repeat them ad nauseam for 200 pages.
wannabebarista
Likely the best resource for learning about philosophy is the Stanford Encyclopedia of Philosophy [0]. It's meant to provide a rigorous starting point for learning about a topic, where 1. you won't get bogged down in a giant tome on your first approach and 2. you have references for further reading.
Obviously, the SEP isn't perfect, but it's a great place to start. There's also the Internet Encyclopedia of Philosophy [1]; however, I find its articles to be more hit or miss.
NoGravitas
I don't know if there's anything like a comprehensive high-level guide to philosophy that's any good, though of course there are college textbooks. If you want real/academic philosophy that's just more readable, I might suggest Eugene Thacker's "The Horror of Philosophy" series (starting with "In The Dust Of This Planet"), especially if you are a horror fan already.
ashwinsundar
I don't have an answer here either, but after suffering through the first few chapters of HPMOR, I've found that Yudk and other tech-bros posing as philosophers are basically like leaky, dumbed-down abstractions over core philosophical ideas. Just go to the source and read about utilitarianism and deontology directly. Yudk is like the Wix of web development: sure, you can build websites, but you're not going to be a proper web developer unless you learn HTML, CSS and JavaScript. Worst of all, crappy abstractions train you in some actively bad patterns that are hard to unlearn.
It's almost offensive - are technologists so incapable of understanding philosophy that Yudk has to reduce it down to the least common denominator they are all familiar with - some fantasy world we read about as children?
samuel
I don't claim that his work is original (the AI-related material probably is, but that's only tangentially related to rationalism), but it's clearly presented and practical.
And, BTW, I could just be ignorant in a lot of these topics, I take no offense in that. Still I think most people can learn something from an unprejudiced reading.
HDThoreaun
Rationalism largely rejects continental philosophy in favor of a more analytic approach. Yes these ideas are not new, but they’re not really the mainstream stuff you’d see in philosophy, literature, or history studies. You’d have to seek out these classes specifically to find them.
TimorousBestie
They largely reject analytic philosophy as well. Austin and Whitehead are roughly as detestable to a Rationalist as Foucault and Marx.
Carlyle, Chesterton and Thoreau are about the limit of their philosophical knowledge base.
bnjms
I think you’re mostly right.
But also that it isn’t what the Yudkowsky is (was?) trying to do with it. I think he’s trying to distill useful tools which increase baseline rationality. Religions have this. It’s what the original philosophers are missing. (At least as taught, happy to hear counter examples)
ashwinsundar
I think I'd rather subscribe to an actual religion than listen to these weird rationalist types who seem to have solved the problem that is "everything". At least there is some interesting history to learn about with religion.
hiAndrewQuinn
If you're in it just to figure out the core argument for why artificial intelligence is dangerous, please consider reading the first few chapters of Nick Bostrom's Superintelligence instead. You'll get a lot more bang for your buck that way.
quickthrowman
Your time would probably be better spent reading his magnum opus, Harry Potter and the Methods of Rationality.
NoGravitas
Probably the most useful book ever written about topics adjacent to capital-R Rationalism is "Neoreaction, A Basilisk: Essays on and Around the Alt-Right" [1], by Elizabeth Sandifer. Though the topic of the book is nominally the Alt-Right, a lot more of it is about the capital-R Rationalist communities and individuals that incubated the neoreactionary movement that is currently dominant in US politics. It's probably the best book to read for understanding how we got politically and intellectually from where we were in 2010, to where we are now.
https://www.goodreads.com/book/show/41198053-neoreaction-a-b...
FeepingCreature
If you want a book on the rationalists that's not a smear dictated by a person who is banned from their Wikipedia page for massive NPOV violations, I hear Chivers' The AI Does Not Hate You (published in the US as The Rationalist's Guide to the Galaxy) is good.
(Disclaimer: Chivers kinda likes us, so if you like one book you'll probably dislike the other.)
kragen
Thanks for the recommendation! I hadn't heard about the book.
mananaysiempre
That book, IMO, reads very much like a smear attempt, and not one done with a good understanding of the target.
The premise, with an attempt to tie capital-R Rationalists to the neoreactionaries through a sort of guilt by association, is frankly weird: Scott Alexander is well known among the former as essentially the only prominent figure who takes the latter seriously—seriously enough, that is, to write a large as-well-stated-as-possible survey[1] followed by a humongous point-by-point refutation[2,3]; whereas the "cult leader" of the rationalists, Yudkowsky, is on record as despising neoreactionaries to the point of refusing to discuss their views. (As for recent events, Alexander wrote a scathing review of Yarvin's involvement in Trumpist politics[4] whose main thrust is that Yarvin has betrayed basically everything he once advocated for.)
The story of the book’s conception also severely strains an assumption of good faith[5]: the author, Elizabeth Sandifer, explicitly says it was to a large extent inspired, sourced, and edited by David Gerard, a prominent contributor to RationalWiki and r/SneerClub (the “sneerers” mentioned in TFA) and Wikipedia administrator who after years of edit-warring got topic-banned from editing articles about Scott Alexander (Scott Siskind) for conflict of interest and defamation[6] (including adding links to the book as a source for statements on Wikipedia about links between rationalists and neoreaction). Elizabeth Sandifer herself got banned for doxxing a Wikipedia editor during Gerard's earlier edit war at the time of Manning's gender transition, for which Gerard was also sanctioned[7].
[1] https://slatestarcodex.com/2013/03/03/reactionary-philosophy...
[2] https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...
[3] https://slatestarcodex.com/2013/10/24/some-preliminary-respo...
[4] https://www.astralcodexten.com/p/moldbug-sold-out
[5] https://www.tracingwoodgrains.com/p/reliable-sources-how-wik...
[6] https://en.wikipedia.org/wiki/Wikipedia:Administrators%27_no...
[7] https://en.wikipedia.org/wiki/Wikipedia:Arbitration/Requests...
bargainbin
Never ceases to amaze me that the people who are clever enough to always be right are never clever enough to see how they look like complete wankers when telling everyone how they’re always right.
cogman10
> clever enough to always be right
Oh, see here's the secret. Lots of people THINK they are always right. Nobody is.
The problem is you can read a lot of books, study a lot of philosophy, practice a lot of debate. None of that will cause you to be right when you are wrong. It will, however, make it easier for you to sell your wrong position to others. It also makes it easier for you to fool yourself and others into believing you're uniquely clever.
KolibriFly
Sometimes the meta-skill of how you come across while being right is just as important as the correctness itself…
gadders
It's a coping mechanism for autists, mainly.
falcor84
I don't see how that's any more "wanker" than this famous saying by Socrates; Western thought is wankers all the way down.
> Although I do not suppose that either of us knows anything really beautiful and good, I am better off than he is – for he knows nothing, and thinks he knows. I neither know nor think I know.
tptacek
Well that was a whole thing. I especially liked the existential threat of Cade Metz. But ultimately, I think the great oracle of Chicago got this whole thing right when he said:
-Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself. I quote John Lennon, "I don't believe in Beatles, I just believe in me." Good point there. After all, he was the walrus. I could be the walrus. I'd still have to bum rides off people.
dragonwriter
> Ism's in my opinion are not good. A person should not believe in an -ism, he should believe in himself
There's an -ism for that.
Actually, a few different ones depending on the exact angle you look at it from: solipsism, narcissism, ...
djoldman
Just to confirm, this is about:
https://en.wikipedia.org/wiki/Rationalist_community
and not:
https://en.wikipedia.org/wiki/Rationalism
right?
FeepingCreature
Absolutely everybody names it wrong. The movement is called rationality or "LessWrong-style rationality", explicitly to differentiate it from rationalism the philosophy; rationality is actually in the empirical tradition.
But the words are too close together, so this is about as lost a battle as "hacker".
thomasjudge
Along these lines I am sort of skimming articles/blogs/websites about Lightcone, LessWrong, etc, and I am still struggling with the question...what do they DO?
Mond_
Look, it's just an internet community of people who write blog posts and discuss their interests on web forums.
Asking "What do they do?" is like asking "What do Hackernewsers do?"
It's not exactly a coherent question. Rationalists are a somewhat tighter group, but in the end the point stands. They write and discuss their common interests, e.g. the progress of AI, psychiatry stuff, bayesianism, thought experiments, etc.
FeepingCreature
Twenty years or so ago, Eliezer Yudkowsky, a former proto-accelerationist, realized that superintelligence was probably coming, was deeply unsafe, and that we should do something about that. Because he had a very hard time convincing people of this (to him obvious) fact, he first wrote a very good blog about human reason, philosophy and AI, in order to fix whatever was going wrong in people's heads that caused them not to understand that superintelligence was coming and so on. The group of people who read, commented on and contributed to this blog are called the rationalists.
(You're hearing about them now because these days it looks a lot more plausible than in 2007 that Eliezer was right about superintelligence, so the group of people who've beat the drum about this for over a decade now form the natural nexus around which the current iteration of project "we should do something about unsafe superintelligence" is congealing.)
nathcd
Some of the comments here remind me of online commentary about some place called "the orange site". Always wondered who they were talking about...
IlikeKitties
I was once interested in a woman who was really into the effective altruism/rationalism crowd. I went to a few meetings with her, but my inner contrarian didn't like it.
Took me a few years to realize how cultish it all felt, and that I'm somewhat happy my edgy atheist contrarian personality overrode my dick's thinking with that crowd.
bikamonki
https://en.wikipedia.org/wiki/Rationalist_community
"In particular, several women in the community have made allegations of sexual misconduct, including abuse and harassment, which they describe as pervasive and condoned."
There's weird sex stuff; logically, it's a cult.
The article made me think more deeply about what rubs me the wrong way about the whole movement.
I think there is some inherent tension between being "rational" about things and trying to reason about things from first principles, and the general absolutist tone of the community. The people involved all seem very... full of themselves? They don't really ever show a sense of "hey, I've got a thought, maybe I haven't considered all angles to it, maybe I'm wrong - but here it is". They're the type of people who would be embarrassed not to have an opinion on a topic or to say "I don't know".
In the pre-AI days this was sort of tolerable, but since then, the frothing-at-the-mouth conviction that the end of the world is coming just shows a real lack of humility and a lack of acknowledgment that maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected.