scruiser

joined 2 years ago
[–] scruiser@awful.systems 6 points 2 months ago

I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren't really that relevant anymore; they served their role in early incubation.

[–] scruiser@awful.systems 8 points 2 months ago

My Poe detection wasn't sure until the last sentence used the "still early" and "inevitably" lines. Nice.

[–] scruiser@awful.systems 19 points 2 months ago (4 children)

Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

A very long, detailed post elaborating very extensively on the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denial that doesn't really engage with the fact that Anthropic has lied and broken "AI safety commitments" to rationalists/lesswrongers/EAs shamelessly and repeatedly:

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.

[–] scruiser@awful.systems 6 points 2 months ago

even assuming sufficient computation power, storage space, and knowledge of physics and neurology

but sufficiently detailed simulation is something we have no reason to think is impossible.

So, I actually agree with you broadly on the abstract principle, but I've increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct...

  • We don't have the neurology knowledge to do a neural-level simulation, and actually simulating all the neural features properly in full detail would be extremely computationally expensive, well beyond the biggest supercomputers we have now, and "Moore's law" (scare quotes deliberate) has been slowing down enough that I don't think we'll get there (see the back-of-envelope sketch after this list).

  • A simulation from the physics level up is even more out of reach in terms of computational power required.
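
Here's that back-of-envelope sketch for the neural-level case; every number below is a loose assumption of mine for illustration, not an established figure:

```python
# Rough cost of a "full detail" neural-level simulation running in real time.
# Every figure here is a loose assumption for illustration only.
neurons = 8.6e10               # ~86 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses each
timestep_hz = 1e4              # 0.1 ms timesteps for decent biophysical fidelity (assumed)
flops_per_synapse_step = 1e2   # detailed (non point-neuron) synapse model (guessed)

total_flops = neurons * synapses_per_neuron * timestep_hz * flops_per_synapse_step
print(f"~{total_flops:.0e} flops to run in real time")                # ~9e+20
print(f"vs ~2e18 for an El Capitan-class machine: {total_flops / 2e18:.0f}x short")
```

Shuffle any of those assumptions an order of magnitude either way and you're still talking about tying up the largest machines on Earth just to run one brain at (maybe) real time.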

As you say:

I think there would be other, more efficient means well before we get to that point

We really really don't have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won't be able to do it that much more "efficiently" in the first place...

Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know

[–] scruiser@awful.systems 5 points 2 months ago

It's not infinite! If you take my cherry-picked estimate of the computational power of the human brain, you'll see we're just one more round of scaling away from matching it, and then we're sure to have AGI ~~and make our shareholders immense profits~~! Just one more scaling, bro!

[–] scruiser@awful.systems 17 points 2 months ago (5 children)

Continuation of the lesswrong drama I posted about recently:

https://www.lesswrong.com/posts/HbkNAyAoa4gCnuzwa/wei-dai-s-shortform?commentId=nMaWdu727wh8ukGms

Did you know that post authors can moderate their own comments section? Someone disagreeing with you too much but getting upvoted? You can ban them from responding to your post (but not block them entirely???)! And, the cherry on top of this questionable moderation "feature": guess why it was implemented? Eliezer Yudkowsky was mad about highly upvoted comments responding to his post that he felt didn't get him or didn't deserve that, so instead of asking moderators to block on a case-by-case basis (or, acausal God forbid, considering that maybe the communication problem was on his end), he asked for a modification to the lesswrong forums to enable authors to ban people (and delete the offending replies!!!) from their posts! It's such a bizarre forum moderation choice, but I guess habryka knew who the real leader is and had it implemented.

Eliezer himself is called to weigh in:

It's indeed the case that I haven't been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it'd had Twitter's options for turning off commenting entirely.

So yes, I suppose that people could go ahead and make this decision without me. I haven't been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.

Uh, considering his recent twitter post... this sure is something. Also, "it does not feel like the system is set up to make that seem like a sympathetic decision to the audience": no shit, Sherlock. Deleting a highly upvoted reply because it feels like too much effort to respond to is in fact going to make people unsympathetic (at the least).

[–] scruiser@awful.systems 7 points 2 months ago (1 children)

So one point I have to disagree with.

More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

There are a lot of ways to try to quantify the human brain's computational power, including storage (as this article focuses on, though I think it's the wrong measure), operations per second, number of neural weights, etc. Obviously it isn't literally a computer, and neuroscience still has a long way to go, so the estimates you can get are spread over something like 5 orders of magnitude (I've seen arguments from 10^13 flops to 10^18 or even higher, and flops is of course the wrong way to look at the brain anyway). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones; the bigger supercomputing clusters, like El Capitan for example, are in the 10^18 range. My own guess would be at the higher end, like 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so that compute is being used really efficiently. Like one talk I went to in grad school that stuck with me: the eyeball's microsaccades are basically acting as a frequency filter on visual input, so the signal has already been processed in a clever and efficient way before it even reaches the brain, and none of that is captured in any naive flop estimate!

AI boosters picked estimates of human brain power that would put it within range of just one more round of scaling, as part of their marketing. Likewise for number of neurons/synapses. The human brain has 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to be where they peaked on number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at around 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. And even accepting that premise, the biggest model was still like 1/10th the size needed to match a human brain (and they may have lacked the data to even train it right).
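
To put those orders of magnitude side by side, a quick sketch (all figures are the rough, contested estimates quoted above; the El Capitan number is my assumption of roughly its published peak):

```python
# Back-of-envelope comparison; every number is a rough, contested estimate, not a measurement.
brain_flops_low = 1e13     # low-end estimate of human brain "compute"
brain_flops_high = 1e18    # high-end estimate (my guess is closer to this)
el_capitan_flops = 1.7e18  # assumed rough peak of an El Capitan-class cluster

synapses = 1e14            # ~100 trillion synapses in a human brain
gpt45_params = 1e13        # rumored ~10 trillion parameters (unconfirmed)

print(f"datacenter vs low brain estimate:   {el_capitan_flops / brain_flops_low:.0e}x ahead")
print(f"datacenter vs high brain estimate:  {el_capitan_flops / brain_flops_high:.1f}x ahead")
print(f"synapses vs rumored GPT-4.5 params: {synapses / gpt45_params:.0f}x short")
```

Pick the low estimate and the datacenters look five orders of magnitude ahead; pick the high estimate and they've barely drawn level, with the parameter count still an order of magnitude short of the synapse count.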

So yeah, it's a minor factual issue and the overall points are good; I just thought I would point it out, because this is one of the facts the AI boosters distort to make it look like they are getting close to human-level.

[–] scruiser@awful.systems 5 points 3 months ago

I found those quotes searching xcancel for Eliezer Yudkowsky.

[–] scruiser@awful.systems 17 points 3 months ago (2 children)

It makes total sense if you think markets are magic and thus prediction markets are more magic and also you can decentralize all society into anarcholibertarian resolution methods!

[–] scruiser@awful.systems 5 points 3 months ago

I'm not sure I even want to give Elon that much? Like the lesswrong website is less annoying than twitter!

[–] scruiser@awful.systems 13 points 3 months ago

Very ‘ideological turing test’ failure levels.

Yeah, his rationale is something something "threats" something something "decision theory", which has the obvious but insane implication that you should actually ignore all protests (even peaceful protests that meet his lib centrist ideals of what protests ought to be), because that would be giving in to the protestors' "threats" (i.e. minor inconveniences, at least in the case of lib-brained protests) and thus incentivizing them to threaten you in the first place.

he tosses the animal rights people (partially) under the bus for no reason. EA animal rights will love that.

He's been like this a while, basically assuming that obviously animals don't have qualia and obviously you are stupid and don't understand neurology/philosophy if you think otherwise. No, he did not even explain any details of his certainty about this.

[–] scruiser@awful.systems 21 points 3 months ago* (last edited 3 months ago) (6 children)

I haven't looked into the Zizians in a ton of detail even now, among other reasons because I do not think attention should be a reward for crime.

And it doesn't occur to him to look into the Zizians in order to understand how cults keep springing up from the group he is a major thought leader in? Like, if it were just one cult, I would sort of understand the desire to just shut one's eyes (though it certainly wouldn't be a truth-seeking desire), but they are something like the third cult (or 5th or 6th if we are counting broadly cult-adjacent groups), and this is not counting the entire rationalist project as a cult. (For full-on religious cults we have Leverage Research and the rationalist-Buddhist cult; for high-demand groups we have the Vassarites, Dragon Army's group home, and a few other sketchy group living situations (Nonlinear comes to mind).)

Also, have an xcancel link, because screw Elon and some of the comments are calling Eliezer out on stuff: https://xcancel.com/allTheYud/status/1989825897483194583#m

Funny sneer in the replies:

I read the Sequences and all I got was this lousy thread about the glomarization of Eliezer Yudkowsky's BDSM practices

Serious sneer in the replies:

this seems like a good time to point folks towards my articles titled "That Time Eliezer Yudkowsky recommended a really creepy sci-fi book to his audience and called it SFW" and "That Time Eliezer Yudkowsky Wrote A Really Creepy Rationalist Sci-fi Story and called it PG-13"
