SneerClub

1190 readers
4 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago

Via reddit's SneerClub. Thanks, u/aiworldism.

I have called LW a cult incubator for a while now, and while the term has not caught on, it's nice to see more reporting on the problem that LW makes you more likely to join a cult.

https://www.aipanic.news/p/the-rationality-trap is the original link, for the people who don't like archive.is. I used the archive because I don't like Substack and want to discourage its use.


despite the error of postulating that Luigi was there for the CEO's coincidental bullet-involved collision, when he was at bible study with me, and you


I used to think that psychiatry-blogging was Scott Alexander's most useful/least harmful writing, because it's his profession and an underserved topic. But he has his agenda of preaching race pseudoscience and 1920s-style eugenics, and he has written in some ethical grey areas, like stating a named friend's diagnosis and desired course of treatment. He is in a community where many people tell themselves that their substance use is medicinal and want prescriptions. Someone on SneerClub thinks he mixed up psychosis and schizophrenia in a recent post.

If you are in a registered profession like psychiatry, it can be risky to comment casually on your colleagues. Regardless: has anyone with relevant qualifications ever commented on his psychiatry blogging and whether it is a good representation of the state of knowledge?

submitted 1 week ago* (last edited 1 week ago) by CinnasVerses@awful.systems to c/sneerclub@awful.systems

Bad people who spend too long on social media call normies NPCs, as in video-game non-player characters who follow a closed behavioural loop. Wikipedia says this slur was popular with the Twitter far right in October 2018. Two years before that, Maciej Ceglowski warned:

I've even seen people in the so-called rationalist community refer to people who they don't think are effective as ‘Non Player Characters’, or NPCs, a term borrowed from video games. This is a horrible way to look at the world.

Sometime in 2016, an anonymous coward on 4chan wrote:

I have a theory that there are only a fixed quantity of souls on planet Earth that cycle continuously through reincarnation. However, since the human growth rate is so severe, the soulless extra walking flesh piles around us are NPC’s (sic), or ultimate normalfags, who autonomously follow group think and social trends in order to appear convincingly human.

Kotaku says that this post was rediscovered by the far right in 2018.

Scott Alexander's novel Unsong has an angel tell a human character that there was a shortage of divine light for creating souls so "I THOUGHT I WOULD SOLVE THE MORAL CRISIS AND THE RESOURCE ALLOCATION PROBLEM SIMULTANEOUSLY BY REMOVING THE SOULS FROM PEOPLE IN NORTHEAST AFRICA SO THEY STOPPED HAVING CONSCIOUS EXPERIENCES." He posted that chapter in August 2016 (unsongbook.com). Was he reading or posting on 4chan?

Did any posts on LessWrong use this insult before August 2016?

Edit: In HPMOR by Eliezer Yudkowsky (written in 2009 and 2010), rationalist Harry Potter calls people who don't do what he tells them NPCs. I don't think Yud's Harry says they have no souls, but he has contempt for them.


Apparently we get a shout-out? Sharing this brings me no joy, and I am sorry for inflicting it upon you.


It might as well be my own hand on the madman’s lever—and yet, while I grieve for all innocents, my soul is at peace, insofar as it’s ever been at peace about anything.

Psychopath.


This Bruenig follow-up to his recent drubbing of Kelsey Piper was entertaining, but it got me thinking about just what she is now gesturing at.

She contends that cash welfare does not really help much. She presents a few recent studies showing null results for cognitive and health outcomes. She doesn’t present an explicit framework for evaluating whether a particular welfare policy is good, but implicitly adopts an evaluative framework that says welfare programs can be deemed good or bad by looking at the extent to which they promote human capital and related indicators.

I argue that we should look to the more traditional goals of the welfare state: eradicating class difference and social alienation, reducing inequality and leveling living standards, compressing and smoothing income and consumption, providing workers and individuals refuge and independence from coercion by reducing economic dependence on the labor market and the family, among other things.

Now, the frame Piper used was relatively banal in the neoliberal era. Everything was about "equality of opportunity, not outcome." But wait a minute, isn't Piper in an IQ-obsessed cult? I thought genetic differences determine people's human capital, and that she was one of the good ones who says "yes, and" we should throw a few bones at the dullards for their misfortune. She's also a market fundamentalist who presumably understands that her preferred political-economic arrangements lead to ever greater pre-transfer inequality.

When you start with a left hereditarian and take away their commitment to welfare, because in certain RCTs it doesn't change people's human capital enough (a thing they believe is mostly immutable), what does that make her?


this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who's actively being a drain on the site

here's his reaction to sneerclub (specifically me - thanks, Oliver!) calling LessOnline a "wordy racist fest":

A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don't need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.

He gets us! He really gets us!


Excerpt:

ZMD: Yeah, that was actually my second question here. I was a little bit disappointed by the article, but the audio commentary was kind of worse. You open the audio commentary with:

"We have arrived at a moment when many in Silicon Valley are saying that artificial intelligence will soon match the powers of the human brain, even though we have no hard evidence that will happen. It's an argument based on faith."

End quote. And just, these people have written hundreds of thousands of words carefully arguing why they think powerful AI is possible and plausibly coming soon.

CM: That's an argument.

ZMD: Right.

CM: It's an argument.

ZMD: Right.

CM: We don't know how to get there.

ZMD: Right.

CM: We do not—we don't know—

ZMD: But do you understand the difference between "uncertain probabilistic argument" and "leap of faith"? Like these are different things.

CM: I didn't say that. People need to understand that we don't know how to get there. There are trend lines that people see. There are arguments that people make. But we don't know how to get there. And people are saying it's going to happen in a year or two, when they don't know how to get there. There's a gap.

ZMD: Yes.

CM: And boiling this down in straightforward language for people, that's my job.

ZMD: Yeah, so I think we agree that we don't know how to get there. There are these arguments, and, you know, you might disagree with those arguments, and that's fine. You might quote relevant experts who disagree, and that's fine. You might think these people are being dishonest or self-deluding, and that's fine. But to call it "an argument based on faith" is different from those three things. What is your response to that?

CM: I've given my response.

ZMD: It doesn't seem like a very ...

CM: We're just saying the same thing.


As found by @gerikson here, more from the anti-anti-TESCREAL crowd: how the antis are actually REPRESENTATIONALism. Ottokar expanded on their idea in a blog post.

Original link.

I have not read the bigger blog post yet, btw; I just assumed it would be sneerable and posted it here for everyone's amusement. Learn about your own true motives today. (This could be a troll, of course; boy, does he drop a lot of names and think that is enough to link things.)

E: alternative title: Ideological Turing Test, a critical failure


The rats would love to have a coherent position on the Palestinian genocide, but there's just no one from the polycule who's written about it in their nerd blogs, so they're all going to have to continue rejecting all evidence.

After being admonished by Paul Graham for baselessly questioning the veracity of a Palestinian child saying goodbye to their dying father, Yud writes:

Why do you believe that any of this is true? Serious question. I haven't been able to find any blog with two serious nerds fighting it out, each side says the other side's stuff is all fake, and each side has compelling instances of other-side stuff being fake.

Aella, too, finds this all very confusing, due to the low IQ of everyone with an opinion:

I do really wish someone smart and good at critical thinking would sit down and invest a lot of research into which claims by both sides are accurate and which are propaganda. This would be so good for the world

Nathan Young is also Spartacus, and none of your mean taunts to "open a newspaper" will change that:

I respect Eliezer’s public confusion here. I too am confused and struggling to find good sources to understand Gaza.

Nor does people yelling at me make me believe them more.

The important thing is that they are actually very open minded and unbiased and will figure this all out someday when the mass graves are exhumed.


Original title: "What we talk about when we talk about risk". The article explains medical risk and why the polygenic embryo selection people think about it the wrong way. Includes a mention of one of our Scotts (you know the one). Non-archived link: https://theinfinitesimal.substack.com/p/what-we-talk-about-when-we-talk-about


A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.


Since Harry Potter and the Methods of Rationality is apparently still a thing, I figured I'd spend a few minutes before fediverse monster-movie night to collect relevant links:

And a question dug up from one of those old threads: OK, so, Yud poured a lot of himself into writing HPMoR. It took time, and he obviously believed he was doing something important — and he was writing autobiography, in big ways and small. This leads me to wonder: has he said anything about Rowling, you know, turning out to be a garbage human?
