scruiser

joined 2 years ago
[–] scruiser@awful.systems 16 points 1 month ago (2 children)

It makes total sense if you think markets are magic and thus prediction markets are more magic and also you can decentralize all of society into anarcho-libertarian resolution methods!

[–] scruiser@awful.systems 5 points 1 month ago

I'm not sure I even want to give Elon that much? Like the lesswrong website is less annoying than twitter!

[–] scruiser@awful.systems 12 points 1 month ago

Very ‘ideological Turing test’ failure levels.

Yeah, his rationale is something something "threats" something something "decision theory", which has the obvious but insane implication that you should actually ignore all protests (even peaceful protests that meet his lib centrist ideals of what protests ought to be), because that is giving in to the protestors' "threats" (i.e. minor inconveniences, at least in the case of lib-brained protests) and thus incentivizing them to threaten you in the first place.

he tosses the animal rights people (partially) under the bus for no reason. EA animal rights will love that.

He's been like this for a while, basically assuming that obviously animals don't have qualia and obviously you are stupid and don't understand neurology/philosophy if you think otherwise. No, he never explained the details behind his certainty about this.

[–] scruiser@awful.systems 20 points 1 month ago* (last edited 1 month ago) (6 children)

I haven't looked into the Zizians in a ton of detail even now, among other reasons because I do not think attention should be a reward for crime.

And it doesn't occur to him to look into the Zizians in order to understand how cults keep springing up from the group he is a major thought leader in? If it was just one cult, I would sort of understand the desire to just shut one's eyes (though it certainly wouldn't be a truth-seeking desire), but they are something like the third cult (or the 5th or 6th if we count broadly cult-adjacent groups), and that's not counting the entire rationalist project as a cult. (For full-on religious cults we have Leverage Research and the rationalist-Buddhist cult; for high-demand groups we have the Vassarites, Dragon Army's group home, and a few other sketchy group living situations (Nonlinear comes to mind).)

Also, have an xcancel link, because screw Elon and some of the comments are calling Eliezer out on stuff: https://xcancel.com/allTheYud/status/1989825897483194583#m

Funny sneer in the replies:

I read the Sequences and all I got was this lousy thread about the glomarization of Eliezer Yudkowsky's BDSM practices

Serious sneer in the replies:

this seems like a good time to point folks towards my articles titled "That Time Eliezer Yudkowsky recommended a really creepy sci-fi book to his audience and called it SFW" and "That Time Eliezer Yudkowsky Wrote A Really Creepy Rationalist Sci-fi Story and called it PG-13"

[–] scruiser@awful.systems 8 points 1 month ago (2 children)

Elon is widely known to be a strong engineer, as well as a strong designer

This is just so idiotic I don't know what made-up world Habryka lives in. Between blowing up a launch pad, the numerous insane design and engineering choices of the Cybertruck, all the animals slaughtered by Neuralink, and the outages and technical problems of Twitter, you might have hoped that the idea of Elon Musk as a strong engineer or designer would be firmly relegated to the dustbin of the early 2010s, when out-of-the-loop people could still buy the image his PR firms were selling. I guess Musk cultists and lesswrongers have more overlap than I realized (I knew there was some, but I didn't realize it was that common).

[–] scruiser@awful.systems 3 points 1 month ago

Even taking their story at face value:

  • It seems like they are hyping up LLM agents operating a bunch of scripts?

  • It indicates that their safety measures don't work

  • Anthropic will read your logs, so you don't have any privacy, confidentiality, or security when using their LLM, but they will only find any problems months after the fact (this happened in June according to Anthropic, but they didn't catch it until September).

If it’s a Chinese state actor … why are they using Claude Code? Why not Chinese chatbots like DeepSeek or Qwen? Those chatbots code just about as well as Claude. Anthropic do not address this really obvious question.

  • Exactly. There are also a bunch of open source models hackers could use for a marginal (if any) tradeoff in performance, with the benefit that they could run locally, so their entire effort isn't dependent on hardware outside their control, in the hands of someone who will shut them down if they check the logs.

You are not going to get a chatbot to reliably automate a long attack chain.

  • I don't actually find it that implausible that someone managed to direct a bunch of scripts with an LLM? It won't be reliable, but if you can do a much greater volume of attacks, maybe that makes up for the unreliability?

But yeah, the whole thing might be BS, or at least a bad exaggeration from Anthropic; they don't really precisely list what their sources and evidence are vs. what is inference (guesses) from that evidence. For instance, suppose a hacker tried to set up hacking LLM bots and they mostly failed, wasted API calls, and hallucinated a bunch of shit: if Anthropic just read the logs from their end and didn't do the legwork of contacting the people who had allegedly been hacked, they might "mistakenly" (a mistake that just so happens to hype up their product) think the logs represent successful hacks.

[–] scruiser@awful.systems 12 points 1 month ago (2 children)

This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly;

This. Reddit isn't exactly mainstream common knowledge per se, but I still find it encouraging and indicative that the common sense perspective is winning out: whenever I see the topic of lesswrong or AI Doom come up on unrelated subreddits, I'll see a bunch of top-upvoted comments mentioning the cult spin-offs, or that the main thinker's biggest achievement is Harry Potter fanfic, or Roko's Basilisk, or any of the other easily comprehensible indicators that these are not serious thinkers with legitimate thoughts.

[–] scruiser@awful.systems 8 points 1 month ago

Another ironic point... Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and as a means of making their God AI serve their whims, not for anything practical). A lack of interpretability is a major problem (like an irl problem, not just a scifi Skynet problem) in ML: you can have models with racism or other bias buried in them and not be able to tell except by manually experimenting with your model on data from outside the training set. But Sam Altman has turned it from a problem into a humblebrag intended to imply their LLM is so powerful and mysterious it's bordering on AGI.
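
To be concrete about the kind of manual experimenting I mean, here's a toy sketch (entirely made-up data and model, not any lab's actual workflow): train a classifier on data where a sensitive attribute leaks into the labels, and the baked-in bias only becomes visible when you probe the trained model with counterfactual inputs it wasn't trained on.

```python
# Toy illustration: bias hidden in a model, revealed only by counterfactual probing.
# All data and features here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
sensitive = rng.integers(0, 2, n)            # stand-in for a protected attribute
skill = rng.normal(0, 1, n)                  # the feature we actually care about
# Biased historical labels: outcome depends on skill AND the sensitive attribute
y = (skill + 0.8 * sensitive + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, sensitive])
model = LogisticRegression().fit(X, y)

# Counterfactual probe: same "skill" values, flip only the sensitive attribute
probe_skill = np.linspace(-2, 2, 9)
probe_a = np.column_stack([probe_skill, np.zeros_like(probe_skill)])
probe_b = np.column_stack([probe_skill, np.ones_like(probe_skill)])
gap = model.predict_proba(probe_b)[:, 1] - model.predict_proba(probe_a)[:, 1]
print("prediction gap from flipping the sensitive feature:", gap.round(2))
# Ordinary accuracy metrics on the (equally biased) training data won't flag this;
# it only shows up when you deliberately probe outside the training distribution.
```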

[–] scruiser@awful.systems 13 points 1 month ago* (last edited 1 month ago)

A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can't resist glazing him, even in the context of a blog post on not being too deferential:

Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI

Another lesswronger pushes back on that and is highly upvoted (even among the doomers who think Eliezer is a genius, most still think he screwed up by inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w

The OP gets mad because this is off topic from what they wanted to talk about (they still don't acknowledge the irony).

A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person who went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse

And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo

No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least us sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)

[–] scruiser@awful.systems 7 points 1 month ago

Thanks for the information. I won't speculate further.

[–] scruiser@awful.systems 7 points 1 month ago* (last edited 1 month ago) (3 children)

Thanks!

So it wasn't even their random hot takes, it was reporting someone? (My guess would be reporting froztbyte's criticism, which I agree has been valid, if a bit harsh in tone.)

[–] scruiser@awful.systems 2 points 1 month ago* (last edited 1 month ago)

Some legitimate academic papers and essays have served as fuel for the AI hype and for less legitimate follow-up research, but the clearest examples that come to mind would be either "The Bitter Lesson" essay or one of the "scaling law" papers (I guess Chinchilla scaling in particular?), not "Attention is All You Need". (Hyperscaling LLMs, and the bubble fueling it, is motivated by the idea that you can just throw more and more training data at bigger and bigger models.) And I wouldn't blame the author(s) for that alone.
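
For reference, here's my rough paraphrase of the Chinchilla-style recipe the hyperscalers ran with, using the commonly quoted approximations of C ≈ 6·N·D training FLOPs and roughly 20 training tokens per parameter (the paper fits its constants empirically, so treat these numbers as ballpark, not gospel):

```python
# Back-of-the-envelope sketch of a Chinchilla-style compute-optimal allocation.
# Approximations (widely quoted, not exact): C ~= 6 * N * D training FLOPs,
# and roughly 20 training tokens per parameter at the compute-optimal point.
def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    # C ~= 6 * N * D and D ~= tokens_per_param * N
    # => N ~= sqrt(C / (6 * tokens_per_param)), D ~= tokens_per_param * N
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a 1e24 FLOP training run comes out around ~90B parameters, ~1.8T tokens
n, d = chinchilla_optimal(1e24)
print(f"~{n:.2e} parameters, ~{d:.2e} training tokens")
```

The point being: under this recipe, the "optimal" model and dataset both just keep growing with the compute budget, which is exactly the "more data, bigger models" treadmill the bubble is built on.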
