lurker

joined 1 day ago
[–] lurker@awful.systems 1 points 16 minutes ago

kinda depressing seeing people fall for Yud’s shtick without realising all the other bullshit behind it. thankfully people in the comment section are more skeptical

[–] lurker@awful.systems 1 points 26 minutes ago (1 children)

considering his confidence in an AI apocalypse I heavily doubt it

[–] lurker@awful.systems 1 points 3 hours ago* (last edited 3 hours ago)

feels like a good enough place to dump my other observations of this book’s reviews

- it’s currently sitting at a 3.99 on Goodreads, with 4K+ ratings and 757 reviews

- higher on Amazon with a 4.5, though fewer reviews, only 313 (i could’ve sworn it was 800 earlier but whatever)

- it received several high-profile endorsements, all listed on the Wikipedia page. only 7 of the endorsers work in the compsci field, and only one of them’s an AI expert (Yoshua Bengio)

[–] lurker@awful.systems 1 points 4 hours ago

considering Yud’s previous comments on nuking data centres and bombing Wuhan, I wouldn’t be surprised if he’s cool with smart fascists programming their values into an AI and controlling it because “at least not all of humanity is dead, and there are humans living amongst the stars in a utopia!”

[–] lurker@awful.systems 6 points 5 hours ago

the new hit anime coming real soon to a simulation near you

[–] lurker@awful.systems 2 points 5 hours ago

“Tellingly, although the authors acknowledge at the start of the book that LLMs seem “shallow,” they do not ever mention hallucinations, the most significant problem that LLMs face”

christ, it’s that bad?

 

I searched for “eugenics” on yud’s xcancel (i will never use twitter, fuck you elongated muskrat) because I was bored, and got flashbanged by this gem. yud, genuinely, what are you talking about

[–] lurker@awful.systems 2 points 6 hours ago

that post got way funnier with Eliezer’s recent twitter post about how “EAs developing more complex opinions on AI other than ‘it’ll kill everyone’ is a net negative and cancelled out all the good they ever did”

[–] lurker@awful.systems 1 points 6 hours ago

“whatever is really inside ChatGPT”

…is this part of the reason why Documenting AGI’s video on “AI scientists think there’s a monster inside ChatGPT” happened? cause it very much feels like it

[–] lurker@awful.systems 3 points 6 hours ago (2 children)

the key assumption here is that these “superbabies” will naturally hold the “correct” moral values, which they will then program into a superintelligent AI system, which will in turn elevate humanity into a golden age where we get to live in a techno-utopia amongst the stars.

which is pretty weird and has some uncomfortable implications

smart people are still capable of being pieces of shit. Eliezer’s whole “we need to focus everything on augmenting human intelligence” thing pretty much glosses over this. It only takes one group of superbabies/augmented intelligence humans getting into some fascist shit for this to blow up in his face.

[–] lurker@awful.systems 1 points 6 hours ago

late reply, but yes, Eliezer has avoided hard dates because “predictions are hard”

the closest he’s gotten is his standing bet with Bryan Caplan that it’ll happen before 2030 (when I looked into this bet, Eliezer himself said he made it so he could “exploit Bryan’s amazing bet-winning ability and my amazing bet-losing ability” to ensure AGI doesn’t wipe everyone out before 2030). he said in a 2024 interview that if you put a gun to his head and forced him to give probabilities, “it would look closer to 5 years than 50” (unhelpfully vague, since it puts the ballpark at anywhere from about 2 to 27 years), but did say in a more recent interview that he thinks 20 years is starting to push it (possible, but he doesn’t think so)

So basically, no hard dates, just “sooner rather than later” vagueness