scruiser

joined 2 years ago
[–] scruiser@awful.systems 10 points 1 week ago

Image and video generation AI can't create good, novel art, but it can serve up mediocre remixes of all the standard stuff with only minor defects an acceptable percentage of the time, and that is a value proposition soulless corporate executives are more than eager to take up. And that is just a bonus; I think your fourth point is Disney's real motive: establish a monetary value for their IP served up as slop, so they can squeeze other AI providers for money. Disney was never an ally in this fight.

The fact that Sam was slippery enough to finagle this deal makes me doubt analysts like Ed Zitron... they may be right from a rational perspective, but if Sam can secure a few major revenue streams and build a moat through nonsense like this Disney deal... still, it will be tough even if he has another dozen tricks like this one up his sleeve; smaller companies without all the debts and valuation of OpenAI can undercut his prices.

[–] scruiser@awful.systems 10 points 1 week ago

the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us

I think he actually is failing to handle the stress he has inflicted on himself, and that's why his latest few lesswrong posts had really stilted, poor parables about chess and about alien robots visiting Earth that were much worse than the classic sequences parables. And why he has basically given up trying to think of anything new and instead keeps playing the greatest lesswrong hits on repeat, as if that would convince anyone who isn't already convinced.

[–] scruiser@awful.systems 10 points 1 week ago

Yud, when journalists ask you “How are you coping?”, they don't expect you to be “going mad facing apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or, barring that, to give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.

I think the way he reads the question is telling on himself. He knows he is sort of doing a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and continuing to follow the lib-centrist deep down inside himself and rejecting violence or even direct action against the AI companies that are hurling us towards an apocalypse). He knows a character from one of his stories would have a much cooler response, but it might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations that include failing to empathize with the journalist and understand the question.

[–] scruiser@awful.systems 10 points 1 week ago

One part in particular pissed me off for being blatantly the opposite of reality:

and remembering that it's not about me.

And so similarly I did not make a great show of regret about having spent my teenage years trying to accelerate the development of self-improving AI.

Eliezer literally has multiple sequence posts about his foolish youth, where he nearly destroyed the world trying to jump straight to inventing AI instead of figuring out "AI Friendliness" first!

I did not neglect to conduct a review of what I did wrong and update my policies; you know some of those updates as the Sequences.

Nah, you learned nothing from what you did wrong, and your sequence posts were the very sort of self-aggrandizing bullshit you're mocking here.

Should I promote it to the center of my narrative in order to make the whole thing be about my dramatic regretful feelings? Nah. I had AGI concerns to work on instead.

Eliezer's "AGI concerns to work on" was making a plan for him, personally, to lead a small team, which would solve meta-ethics and figure out how to implement these meta-ethics in a perfectly reliable way in an AI that didn't exist yet (that a theoretical approach didn't exist for yet, that an inkling of how to make traction on a theoretical approach for didn't exist yet). The very plan Eliezer came up with was self aggrandizing bullshit that made everything about Eliezer.

[–] scruiser@awful.systems 5 points 2 weeks ago

I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren't really so relevant anymore; they served their role in early incubation.

[–] scruiser@awful.systems 8 points 2 weeks ago

My Poe detection wasn't sure until the last sentence used the "still early" and "inevitably" lines. Nice.

[–] scruiser@awful.systems 18 points 2 weeks ago (4 children)

Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

A very long, detailed post, elaborating very extensively on the many ways Anthropic has played the AI doomers, promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has lied and broken "AI safety commitments" to rationalists/lesswrongers/EAs shamelessly and repeatedly:

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.

[–] scruiser@awful.systems 6 points 1 month ago

even assuming sufficient computation power, storage space, and knowledge of physics and neurology

but sufficiently detailed simulation is something we have no reason to think is impossible.

So, I actually agree broadly with you on the abstract principle, but I've increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct...

  • We don't have the neurology knowledge to do a neural-level simulation, and it would be extremely computationally expensive to actually simulate all the neural features properly in full detail, well beyond the biggest supercomputers we have now; and "Moore's law" (scare quotes deliberate) has been slowing down such that I don't think we'll get there.

  • A simulation from the physics level up is even more out of reach in terms of computational power required.

As you say:

I think there would be other, more efficient means well before we get to that point

We really really don't have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won't be able to do it that much more "efficiently" in the first place...

Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
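For a sense of what "thermodynamic limit" means here, a Landauer-style back-of-envelope sketch (my own illustration with assumed round numbers, not taken from the linked post):

```python
# Rough Landauer-limit estimate (my own illustration, not from the linked post).
# Landauer's principle: erasing one bit costs at least k_B * T * ln(2) joules.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # approximate body temperature, K
brain_power = 20.0   # rough brain power budget, W

energy_per_bit = k_B * T * math.log(2)            # ~3e-21 J per irreversible bit operation
max_bit_ops_per_sec = brain_power / energy_per_bit

print(f"Landauer cost per bit: {energy_per_bit:.2e} J")
print(f"Upper bound on irreversible bit ops/s at ~20 W: {max_bit_ops_per_sec:.2e}")
# ~7e21 bit operations per second: the linked post's argument is that the brain's
# useful compute sits within an order of magnitude or two of bounds like this one.
```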

[–] scruiser@awful.systems 5 points 1 month ago

It's not infinite! If you take my cherry-picked estimate of the computational power of the human brain, you'll see we're just one more round of scaling away from matching the human brain, and then we're sure to have AGI ~~and make our shareholders immense profits~~! Just one more scaling, bro!

[–] scruiser@awful.systems 17 points 1 month ago (5 children)

Continuation of the lesswrong drama I posted about recently:

https://www.lesswrong.com/posts/HbkNAyAoa4gCnuzwa/wei-dai-s-shortform?commentId=nMaWdu727wh8ukGms

Did you know that post authors can moderate their own comment sections? Someone disagreeing with you too much but getting upvoted? You can ban them from responding to your posts (but not block them entirely???)! And, the cherry on top of this questionable moderation "feature": guess why it was implemented? Eliezer Yudkowsky was mad about highly upvoted comments responding to his post that he felt didn't get him or didn't deserve the upvotes, so instead of asking moderators to block on a case-by-case basis (or, acausal God forbid, considering whether the communication problem was on his end), he asked for a modification to the lesswrong forums to enable authors to ban people from their posts (and delete the offending replies!!!)! It's such a bizarre forum moderation choice, but I guess habryka knew who the real leader is and had it implemented.

Eliezer himself is called to weigh in:

It's indeed the case that I haven't been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it'd had Twitter's options for turning off commenting entirely.

So yes, I suppose that people could go ahead and make this decision without me. I haven't been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.

Uh, considering his recent Twitter post... this sure is something. Also, "it does not feel like the system is set up to make that seem like a sympathetic decision to the audience": no shit, Sherlock; deleting a highly upvoted reply because it feels like too much effort to respond to is in fact going to make people unsympathetic (at the least).

[–] scruiser@awful.systems 7 points 1 month ago (1 children)

So, one point I have to disagree with:

More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.

There are a lot of ways to try to quantify the human brain's computational power: storage (as this article focuses on, but I think it's the wrong measure), operations, number of neural weights, etc. Obviously it isn't literally a computer and neuroscience still has a long way to go, so the estimates you can get are spread over something like 5 orders of magnitude (I've seen arguments from 10^13 FLOPS to 10^18 or even higher, and FLOPS is of course the wrong way to look at the brain). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The bigger supercomputing clusters, like El Capitan for example, are in the 10^18 range. My own guess would be at the higher end, like 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so that the compute is being used really, really efficiently. Like one talk I went to in grad school that stuck with me: the eyeball's microsaccades are basically acting as a frequency filter on visual input, so literally before the visual signal has even gotten to the brain, the information has already been processed in a clever and efficient way that isn't captured in any naive FLOP estimate!

AI boosters picked estimates of human brain power that would put it within range of just one more round of scaling, as part of their marketing. Likewise for the number of neurons/synapses. The human brain has 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to mark the peak in number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at around 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. But even accepting that premise, the biggest model was still like 1/10th the size it would need to be to match a human brain (and they may have lacked the data to even train it right).

So yeah, minor factual issue, overall points are good; I just thought I would point it out, because this particular fact is one distorted by the AI boosters to make it look like they are getting close to human level.
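To make the orders of magnitude concrete, here is a rough comparison sketch using the round numbers above (my own back-of-envelope figures and assumptions, not precise measurements):

```python
# Rough orders-of-magnitude comparison using the round numbers cited above.
# All figures are loose estimates or rumors, not precise measurements.

brain_synapses = 100e12      # ~100 trillion synapses
gpt45_params_est = 10e12     # rumored ~10 trillion parameters (unconfirmed)

brain_flops_low = 1e13       # low-end estimate of brain "compute"
brain_flops_high = 1e18      # high-end estimate
el_capitan_flops = 1.7e18    # El Capitan HPL benchmark, roughly 1.7 exaFLOPS

print(f"Synapses vs. parameters: {brain_synapses / gpt45_params_est:.0f}x")
print(f"Spread between brain estimates: {brain_flops_high / brain_flops_low:.0e}x")
print(f"El Capitan vs. high-end brain estimate: {el_capitan_flops / brain_flops_high:.1f}x")
# The point: depending on which estimate a booster cherry-picks, current hardware is
# either already "past the brain" or still several orders of magnitude short.
```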

[–] scruiser@awful.systems 5 points 1 month ago

I found those quotes by searching xcancel for Eliezer Yudkowsky.

 

So seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out, even among people who already mostly agree with him, a lot of them were hoping he would make their case better than he has (either because they aren't as convinced as he is, or they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before deep learning took off, and in fact was mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note this is a major discrepancy coming from someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit; it sounds like they don't understand why existential doom is actually promoted (as a distraction and source of crit-hype). They also note the 8 GPUs thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the sequences (and scattered across other places like Arbital), but still not as well structured as it could be, or stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPU max!) with no intermediate steps or goals or options might not be the best.

 

I found a neat essay discussing the history of Doug Lenat, Eurisko, and Cyc here. The essay is pretty cool: Doug Lenat made one of the largest and most systematic efforts to make Good Old Fashioned Symbolic AI reach AGI through sheer volume and detail of expert system entries. It didn't work (obviously), but what's interesting (especially in contrast to LLMs) is that Doug made his business, Cycorp, actually profitable and actually producing useful products, in the form of custom-built expert systems for various customers over the decades, with a steady level of employees and effort spent (as opposed to LLM companies sucking up massive VC capital to generate crappy products that will probably go bust).

This sparked memories of lesswrong discussion of Eurisko... which leads to some choice sneerable classic lines.

In a sequence classic, Eliezer discusses Eurisko. Having read an essay explaining Eurisko more clearly, I find a lot of Eliezer's discussion seems a lot emptier now.

To the best of my inexhaustive knowledge, EURISKO may still be the most sophisticated self-improving AI ever built - in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. EURISKO was applied in domains ranging from the Traveller war game (EURISKO became champion without having ever before fought a human) to VLSI circuit design.

This line is classic Eliezer Dunning-Kruger arrogance. The lessons from Cyc were used in useful expert systems, and the effort of building the expert systems was used to continue to advance Cyc, so I would call Doug really successful actually, much more successful than many AGI efforts (including Eliezer's). And it didn't depend on endless VC funding or hype cycles.

EURISKO used "heuristics" to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. E.g. EURISKO started with the heuristic "investigate extreme cases" but moved on to "investigate cases close to extremes". The heuristics were written in RLL, which stands for Representation Language Language. According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves without always just breaking, that consumed most of the conceptual effort in creating EURISKO.

...

EURISKO lacked what I called "insight" - that is, the type of abstract knowledge that lets humans fly through the search space. And so its recursive access to its own heuristics proved to be for nought. Unless, y'know, you're counting becoming world champion at Traveller without ever previously playing a human, as some sort of accomplishment.

Eliezer simultaneously mocks Doug's big achievements but exaggerates this one. The detailed essay I linked at the beginning actually explains this properly: Traveller's rules inadvertently encouraged a narrow, degenerate (in the mathematical sense) strategy. The second-place finisher actually found the same broken strategy Doug (using Eurisko) did; Doug just did it slightly better because he had gamed it out more and included a few ship designs that countered the opponent doing the same broken strategy. It was a nice feat of a human leveraging a computer to mathematically explore a game; it wasn't an AI independently exploring a game.

Another lesswronger brings up Eurisko here. Eliezer is of course worried:

This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.

And yes, Eliezer actually is worried that a 1970s dead end in AI might lead to FOOM and AGI doom. In response to a comment here:

Are you really afraid that AI is so easy that it's a very short distance between "ooh, cool" and "oh, shit"?

Eliezer responds:

Depends how cool. I don't know the space of self-modifying programs very well. Anything cooler than anything that's been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it'd go to "oh, shit" one day after a sequence of "ooh, cools" and I don't know how long that sequence is.

Fearmongering back in 2008 even before he had given up and gone full doomer.

And this reminds me: Eliezer did not actually predict which paths would lead to better AI. In 2008 he was pretty convinced neural networks were not a path to AGI.

Not to mention that neural networks have also been "failing" (i.e., not yet succeeding) to produce real AI for 30 years now. I don't think this particular raw fact licenses any conclusions in particular. But at least don't tell me it's still the new revolutionary idea in AI.

Apparently it took all the way until AlphaGo (sometime 2015 to 2017) for Eliezer to start to realize he was wrong. (He never made a major post about changing his mind; I had to reconstruct this process and estimate the date from other lesswrongers discussing it and from noticing small comments from him here and there.) Of course, even as late as 2017, MIRI was still neglecting neural networks to focus on abstract frameworks like "Highly Reliable Agent Design".

So yeah. Puts things into context, doesn't it?

Bonus: One of Doug's last papers, which lists out a lot of lessons LLMs could take from Cyc and expert systems. You might recognize the co-author, Gary Marcus, from one of the LLM-critical blogs: https://garymarcus.substack.com/

 

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there is a large splinter heresy of accelerationists who want AGI as soon as possible and aren't worried about this at all (we still make fun of them because what they want would result in some cyberpunk dystopian shit in the process of trying to reach it). However, even the accelerationists don't want Chinese AGI, because insert standard sinophobic rhetoric about how they hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI.

This is a long-running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year, up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news, China actually has no chance at competing in AI (this was posted before DeepSeek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because it isn't possible in the first place.

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness Essays make sure to get their Yellow Peril fearmongering on! Because clearly China is the threat to freedom and the authoritarian power (pay no attention to the techbro techno-fascists).

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on having an AGI race with China, but not because they are correcting the sinophobia or the delusion that LLMs are a path to AGI; rather because it will potentially lead to an unaligned or improperly aligned AGI.
  • And of course, AI 2027 features a race with China that either the US can win with an AGI slowdown (and an evil AGI puppeting China) or both lose to the AGI menace. Featuring "legions of CCP spies":

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly: Why Should I Assume CCP AGI is Worse Than USG AGI? Judging by the upvoted comments, the lesswrong orthodoxy that all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments.

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

 

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up on my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of getting at "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and the author, the author deflects and blames the "left" elitists. (I put "left" in quote marks because the author apparently thinks establishment Democrats are actually leftist; I fucking wish.)

An illustrative quote (of Scott's, which the author agrees with):

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I do think the establishment Democrats bear a major piece of the blame, in that their status quo neoliberalism has been rejected by the public while the Democratic establishment refuses to consider genuinely leftist ideas, but that isn't the point this author is actually going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't actually have a valid point, if anything the opposite of one.

In case my angry, disjointed summary leaves you in any doubt that the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference, the SSC discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tl;dr: the author is trying to shift the blame for Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

 

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested by racists. The post doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full count of 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and the open discourse of ideas posed by banning racists, etc.).

 

This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.
