I was wondering why Eliezer picked chess of all things in his latest "parable". Even among the lesswrong community, chess playing as a useful analogy for general intelligence has been picked apart. But seeing that this is recent half-assed lesswrong research, that would explain the renewed interest in it.
scruiser
Yud: “Woe is me, a child who was lied to!”
He really can't let that one go, it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated gifted child vibes keep leaking into other weird places. (Like Project Lawful, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character's chemistry and math lectures.)
Of course this is a meandering plug for his book!
Yup, now that he has a book out he's going to keep referring back to it, and it's being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the sequences were free and all online.)
Is that… an incel shape-rotator reference?
I think "shape-rotator" has generally permeated rationalist lingo as a term for a certain kind of math aptitude; I wasn't aware the term had ties to the incel community. (But it wouldn't surprise me that much.)
I couldn't even make it through this one, he just kept repeating himself with the most absurd parody strawman he could manage.
This isn't the only obnoxiously heavy-handed "parable" he's written recently: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius
Even the lesswrongers are kind of questioning the point:
I enjoyed this, but don't think there are many people left who can be convinced by Ayn-Rand length explanatory dialogues in a science-fiction guise who aren't already on board with the argument.
A dialogue that references Stanislaw Lem's Cyberiad, no less. But honestly Lem was a lot more terse and concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).
Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the "who reads this stuff" bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse and it continues in that manner until the end.
Who does he think he's convincing? Numerous skeptical lesswrong posts have described why general intelligence is not like chess-playing and world-conquering/optimizing is not like a chess game. Even among his core audience this parable isn't convincing. But instead he's stuck on repeating poor analogies (and getting details wrong about the thing he is using for analogies; he messed up some details about chess playing!).
Eh, cuck is kind of the right-winger's word, it's tied to their inceldom and their mix of moral-panic and fetishization of minorities' sexualities.
“You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” he is reported to have told Altman at a dinner party in late 2023. “You need to take this more seriously.” Altman “tried not to roll his eyes,” according to Wall Street Journal reporter Keach Hagey.
I wonder exactly when this was. The attempted ousting of Sam Altman was November 17, 2023. So either this warning was timely (but something Sam already had the pieces in place to make a counterplay against), or a bit too late (as Sam had just beaten an attempt by the true believers to oust him).
Sam Altman has proved adept at keeping the plates spinning and wheedling his way through various deals, but I agree with the common sentiment here that his underlying product just doesn't work well enough, or in a unique/proprietary enough way, for him to actually build a profitable company out of it. Pivot-to-AI and Ed Zitron have a guess of 2027 for the plates to come crashing down, but with an IPO on the way to infuse more cash into OpenAI I wouldn't be that surprised if he delays the bubble pop all the way to 2030, and personally gets away cleanly with no legal liability for it and some stock sales lining his pockets.
“I’m sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was,” he said. (Translation: It’s complicated.)
Why do these people have the urge to talk like this? Does it make themselves feel smarter? Do they think it makes them look smart to other people? Are they so caught up in their field they can't code switch to normal person talk?
Remember when a bunch of people poured their life savings into GameStop and started a financial doomsday cult once they lost everything? That will happen again if OpenAI goes public.
I've seen redditors on /r/singularity planning on buying OpenAI stock if it goes public. And judging by Tesla, cultists buying meme stock can keep up their fanaticism through quite a lot.
It seems like a complicated but repeatable formula: Start a non-profit dedicated to some technology; leverage the charity status for influence, tax avoidance, PR, and recruiting true believers in the initial stages; then make a bunch of financial deals conditional on your non-profit converting to for-profit; then claim you need to convert to for-profit or your organization will collapse!
Although I'm not sure how repeatable it is without the "too big to fail" threat of loss of business to state AGs. OTOH, states often bend the rules to gain (or even just avoid losing) embarrassingly few jobs, so IDK.
i’ve listened to his podcast, i’ve read his articles, he is pretty up front about what his day job is and that he is a disappointed fanboy for tech. the dots are 1/1000th of an inch apart.
For comparison, I've only read Ed's articles, not listened to his podcasts, and I was unaware of his PR business. This doesn't make me think his criticisms are wrong, but it does make me concerned he's overlooked critiquing and analyzing some aspects of the GenAI industry because of his connections to them.
This week's South Park makes fun of prediction markets! Hanson and the rationalists can be proud their idea has gone mainstream enough to be made fun of. The episode actually does a good job highlighting some of the issues with the whole concept: the twisted incentives, the insider trading, and the way it fails to actually create good predictions (as opposed to just vibes and degenerate gambling).
and the person who made up the "math pets" allegation claimed no such source
I was about to point out that I think this is the second time he claimed math pets had absolutely no basis in reality (and someone countered with a source that forced him to walk it back), but I double-checked the posting date and this is the example I was already thinking of. Also, we have supporting sources that didn't say as much directly but implied it heavily: https://www.reddit.com/r/SneerClub/comments/42iv09/a_yudkowsky_blast_from_the_past_his_okcupid/ or like, the entire first two thirds of the plot of Planecrash!
BlueMonday has had a tendency to go off with a half-assed understanding of the actual facts and details. Each individual instance wasn't ban-worthy, but collectively I can see why it merited a temp ban. (I hope/assume it's not a permanent ban; is there a way to see?)