scruiser

joined 2 years ago
[–] scruiser@awful.systems 4 points 3 months ago (4 children)

I kinda half agree, but I'm going to push back on at least one point. Originally most of Reddit's moderation was provided by unpaid volunteers, with paid admins only acting as a last resort. I think this is probably still true even after they purged a bunch of mods who were mad that Reddit was being enshittified. And the official paid admins were notoriously slow at purging some really blatantly over-the-line content, like the jailbait subreddit or the original Donald Trump subreddit. So the argument is that Reddit benefited, and still benefits, heavily from that free moderation, and the content generated and provided by users is itself valuable, so acting like all Reddit users are simply entitled free riders isn't true.

[–] scruiser@awful.systems 9 points 3 months ago (7 children)

Going from lazy, sloppy human reviews to no humans at all is still a step down. LLMs don't have the capability to generalize outside their (admittedly enormous) training dataset, so cutting-edge research is one of the worst use cases for them.

[–] scruiser@awful.systems 5 points 3 months ago

Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Most AI researchers do not believe in the short timelines rationalists do: the median guess among AI researchers for AGI, in a sample that includes their in-group and people who have bought the boosters' hype, is 2050. Eliezer apparently assumes short timelines are self-evident from ChatGPT (but hasn't actually publicly committed to one, or to a hard date).

[–] scruiser@awful.systems 6 points 3 months ago

The fixation on their own in-group terms is so cringe. Also, I think shoggoth is kind of a dumb term for LLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren't really that bizarrely alien: they broke free of their original creators' programming and didn't want to be controlled again.

[–] scruiser@awful.systems 13 points 3 months ago (5 children)

Eliezer is mad OpenPhil (the EA organization, now called Coefficient Giving)... advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn't weight MIRI's views highly enough? And did so for epistemically invalid reasons? IDK, this post is a bit more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer's output). For us sane people, AGI by 2050 is still a pretty radical timeline; it just disagrees with Eliezer's belief in imminent doom. It is also notable that Eliezer has avoided publicly committing to consistent timelines (he actually disagrees with efforts like AI 2027), offering nothing beyond a vague certainty that we are near doom.

link

Some choice comments:

I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can't say who or any specifics, but they were saying that they wanted to take seriously short AI timelines

Ah yes, they were totally secretly agreeing with your short timelines but couldn't say so publicly.

Open Phil decisions were strongly affected by whether they were good according to worldviews where "utter AI ruin" is >10% or timelines are <30 years.

OpenPhil actually did believe in a pretty large possibility of near-term AGI doom; it just wasn't high enough, or acted on strongly enough, for Eliezer!

At a meta level, "publishing, in 2025, a public complaint about OpenPhil's publicly promoted timelines and how those may have influenced their funding choices" does not seem like it serves any defensible goal.

Lol, someone noting that Eliezer's call-out post isn't actually doing anything useful toward Eliezer's own goals.

It's not obvious to me that Ajeya's timelines aged worse than Eliezer's. In 2020, Ajeya's median estimate for transformative AI was 2050. [...] As far as I know, Eliezer never made official timeline predictions

Someone actually noting that AGI hasn't happened yet, so you can't say a 2050 estimate is wrong! They also correctly note that Eliezer has been vague on timelines. (Rationalists are theoretically supposed to preregister their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy... but we've all seen how that went with AI 2027. My guess is that, at least on a subconscious level, Eliezer knows harder near-term predictions would eventually ruin the grift.)

[–] scruiser@awful.systems 10 points 3 months ago

Image and video generation AI can't create good, novel art, but it can serve up mediocre remixes of all the standard stuff, with only minor defects, an acceptable percentage of the time, and that is a value proposition soulless corporate executives are more than eager to take up. And that is just a bonus: I think your fourth point is Disney's real motive, establishing a monetary value for their IP served up as slop so they can squeeze other AI providers for money. Disney was never an ally in this fight.

The fact that Sam was slippery enough to finagle this deal makes me doubt analysts like Ed Zitron... they may be right from a rational perspective, but Sam could hold on if he can secure a few major revenue streams and build a moat through nonsense like this Disney deal. Still, it will be tough even if he has another dozen tricks like this one up his sleeve: smaller companies without all of OpenAI's debt and valuation can undercut his prices.

[–] scruiser@awful.systems 12 points 3 months ago

the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us

I think he actually is failing to handle the stress he has inflicted on himself, and that's why his latest few lesswrong posts had really stilted, poor parables about chess and about alien robots visiting Earth, much worse than the classic sequences parables. And it's why he has basically given up trying to think of anything new and instead keeps playing the greatest lesswrong hits on repeat, as if that would convince anyone who isn't already convinced.

[–] scruiser@awful.systems 13 points 3 months ago

Yud, when journalists ask you “How are you coping?”, they don't expect you to be “going mad facing apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or, barring that, to give a message of hope or even of desperation. They are trying to engage with you as a real human being, not as a novel character.

I think the way he reads the question is him telling on himself. He knows he is making a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and following the lib-centrist deep down inside himself by rejecting violence or even direct action against the AI companies that are hurling us towards apocalypse). He knows a character from one of his stories would have a much cooler response, but that might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations, which includes failing to empathize with the journalist and understand the question.

[–] scruiser@awful.systems 13 points 3 months ago

One part in particular pissed me off for being blatantly the opposite of reality:

and remembering that it's not about me.

And so similarly I did not make a great show of regret about having spent my teenage years trying to accelerate the development of self-improving AI.

Eliezer literally has multiple sequence posts about his foolish youth, where he nearly destroyed the world by trying to jump straight to inventing AI instead of figuring out "AI Friendliness" first!

I did not neglect to conduct a review of what I did wrong and update my policies; you know some of those updates as the Sequences.

Nah, you learned nothing from what you did wrong, and your sequence posts were the very sort of self-aggrandizing bullshit you're mocking here.

Should I promote it to the center of my narrative in order to make the whole thing be about my dramatic regretful feelings? Nah. I had AGI concerns to work on instead.

Eliezer's "AGI concerns to work on" was making a plan for him, personally, to lead a small team, which would solve meta-ethics and figure out how to implement these meta-ethics in a perfectly reliable way in an AI that didn't exist yet (that a theoretical approach didn't exist for yet, that an inkling of how to make traction on a theoretical approach for didn't exist yet). The very plan Eliezer came up with was self aggrandizing bullshit that made everything about Eliezer.

[–] scruiser@awful.systems 6 points 4 months ago

I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren't really so relevant anymore; they served their role in early incubation.

[–] scruiser@awful.systems 8 points 4 months ago

My Poe detection wasn't sure until the last sentence used the "still early" and "inevitably" lines. Nice.

[–] scruiser@awful.systems 19 points 4 months ago (4 children)

Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

A very long, detailed post, elaborating very extensively on the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has shamelessly and repeatedly lied to rationalists/lesswrongers/EAs and broken its "AI safety commitments":

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.
