The Rationality Community Sucks

Added 2023-06-21, Modified 2023-06-21

Why I view the rationalist community as ACTIVELY HARMFUL


Epistemic status: angry and sarcastic rant about why I view the rationalist community as harmful.

Comparing Down

Standard Rationalist dinner conversation: Bob is talking about the dumb people he met during his physics PhD in a fruitless attempt to address his insecurity, someone else mentions a case of out-group irrationality, everyone laughs.

A LessWrong post is brought up and everyone pretends to be productive for a while, even though the opportunity cost makes the whole discussion net negative.

Joe brings up a new AI policy idea he's been toying with in an attempt to boost his self-importance by pretending anybody will implement it. Everyone plays along because it makes them feel important too.

STOP.

THIS ONLY MAKES YOU WEAKER.

INSANE IS STILL INSANE EVEN IF YOU'RE LESS INSANE THAN OTHERS.

Why would you discuss people failing when you could discuss people succeeding? Maybe the latter group has things you could copy? No? Too busy bonding with your polycule?

Groupthink

"Let's get all the people who hate groupthink together, then they'll avoid groupthink even more"

You fucking dolts. Rationality is about winning, not about magically staying sane while you're fucking your coworkers in a group house paid for by the same organization half of you work at.

The standard, tried-and-true method of avoiding groupthink is getting feedback from different groups and avoiding identifying with any one group. This is what actually trying looks like, not reading endless Yudkowsky in the hope of absorbing some magical thinking patterns that make you immune to millions of years of evolution.

By avoiding identifying with a group I mean actually, on the intuitive level, not considering yourself an <X>. Many rationalists claim to have "fundamental disagreements" with the ideology despite their obvious intuitive identification.

Sucking at Stuff

Compared to researchers at top labs, Rationalists are terrible at machine learning. Many (most?) alignment researchers haven't done real ML research. And if you stop doing mental gymnastics, it seems like figuring out how to make safe AI would require an actually high level of skill with... AI.

...and they don't realize this. I've had several frustrating conversations that go something like this:

Me: "I really need to get better at ML in order to solve alignment. I've done very little beyond the basics and don't read many papers or integrate that knowledge into practice"

Them: "You seem pretty good at ML. You've implemented transformers and autograd right? what else is there to know"

Me: "On an absolute level this absolutely sucks. I couldn't do research at a major lab, and I'm nowhere near diminishing returns, if anything I'm getting increasing returns for now"

Them: "shrugs seems like continuing to fumble around doing direct research is better"

BREAKING NEWS: ALIGNMENT SOLVED BY TEEN IN A RECENT LESSWRONG POST. THE 500 PAGE GOOGLE DOC WITH NO MATH OR CODE SHOCKED EXPERTS THE WORLD OVER

"It was so easy! We just had to make my schemes more complicated!" - Paul Christiano

(Okay, that was a bit of an outtake. Back to the, uh, serious stuff.)

"Systematized Winning"

Now, this may be a new idea for rationalists, but I think there's some kind of core structure common among highly capable agents, and that imitating this core structure can lead to better thinking and action. A "core of consequentialism" if you will.

If you actually view rationality as systematized winning, you should learn it by copying people who are systematically winning. The standard successful people in your domain. Not by reading a bunch of theoretical blog posts.

Maybe you should have friends

A generalization of "reverse any advice you hear" is to avoid social groups where people's flaws are correlated with your own.

If you struggle with implementing your genius ideas in the real world, maybe you shouldn't hang out with a group where saving the world is done through philosophy blog posts.