
SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]


[–] lurker@awful.systems 3 points 6 hours ago (2 children)

the key assumption here is that these “superbabies” will naturally hold the “correct” moral values, which they will then program into a superintelligent AI system, which will in turn elevate humanity into a golden age where we get to live in a techno-utopia amongst the stars.

which is pretty weird and has some uncomfortable implications

smart people are still capable of being pieces of shit. Eliezer’s whole “we need to focus everything on augmenting human intelligence” thing pretty much glosses over this. It only takes one group of superbabies/augmented-intelligence humans getting into some fascist shit for the whole plan to blow up in his face.

[–] Architeuthis@awful.systems 2 points 5 hours ago* (last edited 5 hours ago) (1 children)

Rationalism, among other things, is supposed to cure you of being a piece of shit; in fact, it's such a flawless epistemic hack that it's a common belief among them that it's impossible for two sufficiently rationalist individuals to disagree, as in come to different conclusions from the same assumptions. So if you think some rat influencer is full of shit, it could in fact be you who hasn't yet attained the appropriate level of ~~thetans~~ rationalism.

So yeah they'll just have the superbabies read the sequences and Harry Potter fan fiction.

Also, any talk of alignment should be seen in light of them being mostly OK with humanity going poof by this time next week, as long as the stage is set for whatever technological facsimile of consciousness they deem reasonably human-like to inherit the cosmos.

[–] lurker@awful.systems 1 points 4 hours ago

considering Yud’s previous comments on nuking data centres and bombing Wuhan, I wouldn’t be surprised if he’s cool with smart fascists programming their values into an AI and controlling it because “at least not all of humanity is dead, and there are humans living amongst the stars in a utopia!”