scruiser

joined 2 years ago
[–] scruiser@awful.systems 12 points 1 year ago (3 children)

The sequence of links hopefully lays things out well enough for normies? I think it does, but I've been aware of the scene since the mid 2010s, so I'm not the audience that needs it. I can almost feel sympathy for Sam dealing with all the doomers, except he uses the doom and hype to market OpenAI and he lied a bunch, so not really. And I can almost feel sympathy for the board, getting lied to and outmaneuvered by a sociopathic CEO, but they're a bunch of doomers from the sound of it, so, eh. I would say they deserve each other; it's the rest of the world that doesn't deserve them (from the teacher dealing with the LLM slop plugged into homework, to the website admin fending off scrapers, to legitimate ML researchers getting the attention sucked away while another AI winter starts to loom, to the machine cultist not saving a retirement fund and having panic attacks over the upcoming salvation or doom).

[–] scruiser@awful.systems 14 points 1 year ago* (last edited 1 year ago)

As to cryonics... for both LLM doomers and accelerationists, they have no need for a frozen purgatory when the techno-rapture is just a few years around the corner.

As for the rest of the shiny futuristic dreams, they have given way to ugly practical realities:

  • no magic nootropics, just Scott telling people to take Adderall and other rationalists telling people to microdose on LSD

  • no low-hanging fruit in terms of gene editing (as epistaxis pointed out over on reddit), so they're left with eugenics and GeneSmith's insanity

  • no Drexler nanotech, so they are left hoping (or fearing) the god-AI can figure it out (which is also a problem for ever reviving cryonically frozen people)

  • no exocortex, just overpriced Google Glasses and a hallucinating LLM "assistant"

  • no neural jacks (or neural lace or whatever the cyberpunk term for them is), just Elon murdering a bunch of lab animals and offering (temporary) hope to paralyzed people

The future is here, and it’s subpar compared to the early 2000s fantasies. But hey, you can rip off Ghibli’s style for your shitty fanfic projects, so there are a few upsides.

[–] scruiser@awful.systems 5 points 1 year ago* (last edited 1 year ago) (1 children)

Even without the sci-fi nonsense, the political elements of the story also feel absurd: the current administration staying on top of the situation, making reasoned (if not correct) responses, and keeping things secret feels implausible given current events. It kind of shows the political biases of the authors that they can manage to imagine the Trump administration acting so normally or competently. Oh, and the hyper-competent Chinese spies (and the Chinese having no chance at catching up without them) feel like another one of the authors' biases coming through.

[–] scruiser@awful.systems 7 points 1 year ago

Bonus: a recent comment is skeptical:

well, how do I play democracy with AI? It’s already 2025

[–] scruiser@awful.systems 9 points 1 year ago

We're already behind schedule, we're supposed to have AI agents in two months (actually we were supposed to have them in 2022, but ignore the failed bits of earlier prophecy in favor of the parts you can see success for)!

[–] scruiser@awful.systems 11 points 1 year ago* (last edited 1 year ago) (2 children)

He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.

His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far

My own scoring:

The first prompt programming libraries start to develop, along with the first bureaucracies.

I don't think any sane programmer or scientist would credit the current "prompt engineering" "skill set" as comparable to programming libraries, and AI agents still aren't what he was predicting for 2022.

Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.

There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.

Revenue is high enough to recoup training costs within a year or so.

Hahahaha, no... they are still losing money per customer, never mind recouping training costs.

Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice

The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people that don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.

The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.

They also try to contrive scenarios

Emphasis on the word "contrive"

The age of the AI assistant has finally dawned.

So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful, if narrow-use-case, apps by 2022-2024, so we are already off target for this prediction.

I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the Kool-Aid will buy it.

[–] scruiser@awful.systems 9 points 1 year ago (4 children)

I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term "0-2 paradigm shifts," so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if it's been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).

[–] scruiser@awful.systems 10 points 1 year ago* (last edited 1 year ago) (9 children)

Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?

Committing to a hard timeline at least means making fun of them and explaining how stupid they are to laymen will be a lot easier in two years. I doubt the complete failure of this timeline will actually shake the true believers though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer so they will be able to retroactively reinterpret their predictions as correct.

[–] scruiser@awful.systems 5 points 1 year ago

I can already imagine the lesswronger response: Something something bad comparison between neural nets and biological neurons, something something bad comparison with how the brain processes pain that fails at neuroscience, something something more rhetorical patter, in conclusion: but achkshually what if the neural network does feel pain.

They know just enough neuroscience to use it for bad comparisons and hyping up their ML approaches but not enough to actually draw any legitimate conclusions.

[–] scruiser@awful.systems 6 points 1 year ago (1 children)

Galaxy brain insane take (free to any lesswrong lurkers): they should develop the usage of IACUCs for LLM prompting and experimentation. This is proof lesswrong needs more biologists! Lesswrong regularly repurposes comp sci and hacker lingo and methods in inane ways (I swear, if I see the term "red-teaming" one more time), and biological science has plenty of terminology to steal and repurpose that they haven't touched yet.

[–] scruiser@awful.systems 6 points 1 year ago* (last edited 1 year ago)

Yeah, there might be something like that going on causing the "screaming". Lesswrong, in its better moments (in between chatbot anthropomorphizing), does occasionally figure out the mechanics of cool LLM glitches (before it goes back to wacky doom speculation inspired by those glitches), but there isn't any effort to do that here.

[–] scruiser@awful.systems 5 points 1 year ago

I agree. There is intent going into the prompt fondler's efforts to prompt the genAI; it's just not very well-developed intent, and it is using the laziest, shallowest method possible to express itself.
