this post was submitted on 17 Aug 2025
22 points (100.0% liked)

TechTakes

2118 readers
210 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 3) 50 comments
[–] gerikson@awful.systems 10 points 3 days ago (7 children)

TIL some rats have started a literal monastery to try to defeat the robot god with good ole religion (well, Zen buddhism)

here's a mildly critical view that apparently still believes the approach has legs

https://www.lesswrong.com/posts/ENCNHyNEgvz9oo9rr/briefly-on-maple-and-the-broader-community

I note in passing that there seems to be a mild upsurge in religious-friendly posts on LW lately.

[–] Soyweiser@awful.systems 10 points 3 days ago* (last edited 3 days ago)

That opening has strong 'omg what the fuck happened there, and why are you still friends with these people if it is that bad' vibes.

Lots of ex-Maplers I've talked to are variously angry, some (newly) traumatized, confused, etc., but the vibe has generally been "gosh it's fucking complicated"

A lot of people told me they were mad, but I got the vibe it was complicated. lol what...

The product of monasteries is saints, at least in small quantities.

Wrong, it is beer.

I haven't seen anti-safetyist arguments that actually address the technical claims made by Eliezer etc.

I agree with him there. But hard to make arguments against something which doesn't exist. ;)

it will take some very unusual kind of virtue and skill

Love how we went from 'you need to learn rationality, and be aware of your biases' to 'you need to have virtue'. And ignore the screaming in the background, that is just the academics who studied ethics doing their normal thing again.

Anyway the rest gets pretty dark pretty quickly, and I just see red flags (for people who believe in data this devolves very quickly into just going 'people are prob better off due to this, the new trauma doesn't count because of preexisting conditions, consent and trauma always happening'). And this was the article they wrote not wanting to harm the project.

Wait one more remark:

Furthermore I think that probably with the exception of the one actual AI researcher there, people at Maple basically don't understand what AI is

Hahahaha, perfect.

It also reminds me of the interview with Metz where they got mad Metz used religious terms. (you know, last week).

[–] fnix@awful.systems 9 points 3 days ago (1 children)

well, Zen buddhism

Yeah, this is the Valley after all. Some have used Buddhism as a building block for constructing “metarationality”.

[–] istewart@awful.systems 8 points 3 days ago (1 children)

I have a degree of appreciation for Chapman because he was willing to more-or-less call out Yuddite rationalism as a failure and start to gently guide people away from it. But I also came to the conclusion that his whole project has never fully escaped the self-aggrandizement/self-importance inherent to the rats. That ultimately leads to the performative humility and "radical acceptance" that make so many attempts at appropriating non-Western religions to US culture ring completely hollow.

Broadly, the whole TPOT/post-rationality/meta-rationality thing still stinks like a bunch of people who thought advanced degrees and/or advanced technical skills would earn them a lot more compensation and social status than they actually ended up with, and are still dead-set on getting all that by hook or by crook.

[–] scruiser@awful.systems 5 points 3 days ago

I hadn't heard of MAPLE before, is it tied to lesswrong? From the focus on AI it's at least adjacent to it... so I'll add that to the list of cults lesswrong is responsible for. So all in all, we've got the Zizians, Leverage Research, and now Maple for proper cults, and stuff like Dragon Army and Michael Vassar's groupies for "high demand" groups. It really is a cult incubator.

[–] gerikson@awful.systems 8 points 3 days ago (1 children)

Michael Hiltzik in LATimes: "Say farewell to the AI bubble, and get ready for the crash"

https://www.latimes.com/business/story/2025-08-20/say-farewell-to-the-ai-bubble-and-get-ready-for-the-crash

https://archive.ph/2025.08.20-113134/https://www.latimes.com/business/story/2025-08-20/say-farewell-to-the-ai-bubble-and-get-ready-for-the-crash

Fun quote:

The rest of [AI 2027], mapping a course to late 2027 when an AI agent “finally understands its own cognition,” is so loopily over the top that I wondered whether it wasn’t meant as a parody of excessive AI hype. I asked its creators if that was so, but haven’t received a reply.

[–] blakestacey@awful.systems 8 points 3 days ago

And because it's the LA Times, there's a chatbot slop section at the bottom to provide false balance.

[–] o7___o7@awful.systems 8 points 3 days ago* (last edited 3 days ago) (1 children)

I'm enjoying the mood today. We're all looking for what the next Big Dumb Thing will be that we'll be dunking on next year, like we're browsing the dessert menu at a fancy restaurant.

[–] BlueMonday1984@awful.systems 7 points 3 days ago

On top of that, there's clear signs that we've grown quite an audience from dunking on AI. Ed Zitron reached 70k subscribers just a couple weeks ago, and Pivot to AI is at nearly 9k on YouTube.

If and when the next Big Dumb Thing comes along, chances are we're gonna have a headstart against the hucksters.

[–] BlueMonday1984@awful.systems 8 points 4 days ago (2 children)
[–] YourNetworkIsHaunted@awful.systems 11 points 4 days ago* (last edited 4 days ago) (1 children)

Even if they aren't actively relying on each other here I would assume that we're reaching a stage where all of the competing LLMs are using basically the entire Internet as their training data, and while there is going to be some difference based on the reinforcement learning process there's still going to be a lot of convergence there.

[–] BlueMonday1984@awful.systems 8 points 4 days ago

Plus, there's the hefty amount of AI slop that's been shat onto the Internet over the years, along with active attempts to sabotage LLM datasets through tarpits like Iocaine and Nepenthes, and media-poisoning tools like Glaze and Nightshade.

So, if and when model collapse fully sets in, it's gonna hit all of them at once. Given that freshly trained LLMs are gonna be effectively stillborn, if ChatGPT et al. collapse, it'll likely kill LLMs as a tech for at least the next ten years.

[–] o7___o7@awful.systems 9 points 3 days ago (1 children)

The true meaning of "pooping back and forth forever"

[–] swlabr@awful.systems 5 points 3 days ago (1 children)

sick reference. I don’t even know how I knew this.

[–] o7___o7@awful.systems 6 points 3 days ago

heh, I'm too basic to make a deep cut like that, I just went through a Cards Against Humanity phase in grad school!

[–] BlueMonday1984@awful.systems 12 points 4 days ago (1 children)
[–] Soyweiser@awful.systems 6 points 4 days ago (1 children)

Well if the bubble pops he will have to pivot to people who pivot. (That is what is going to suck when the bubble pops, so many people are going to lose their jobs, and I fear a lot of the people holding the bags are not the ones who really should be punished the most (really hope not a lot of pension funds bought in). The stock market was a mistake.)

[–] BlueMonday1984@awful.systems 6 points 4 days ago (1 children)

I imagine it'll be a pretty lucrative pivot - the public's ravenous to see AI bros and hypesters get humiliated, and Zitron can provide that in spades.

Plus, he'll have a major headstart on whatever bubble the hucksters attempt to inflate next.

[–] Soyweiser@awful.systems 6 points 4 days ago (1 children)
[–] BlueMonday1984@awful.systems 5 points 3 days ago (1 children)

Y'know, I was predicting at least a few years without a tech bubble, but I guess I was dead wrong on that. Part of me suspects the hucksters are gonna fail to inflate a quantum bubble this time around, though.

[–] blakestacey@awful.systems 9 points 3 days ago (2 children)

Quantum computing is still too far out from having even a niche industrial application, let alone something you can sell to middle managers the world over. Anybody who day-traded could get into Bitcoin; millions of people can type questions at a chatbot. Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I'm doubtful.

[–] o7___o7@awful.systems 6 points 3 days ago

Quantum computing isn't just hard, it's hadamard

[–] BlueMonday1984@awful.systems 6 points 3 days ago (3 children)

Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.

By my guess, no. AI earned its investor/VC dollars by providing bosses and CEOs alike a cudgel to use against labour, either by deskilling workers, degrading their work conditions, or killing their jobs outright.

Quantum doesn't really have that - the only Big Claim™ I know it has going for it is its supposed ability to break pre-existing encryption clean in half, but that's near-certainly gonna be useless for hypebuilding.

[–] BlueMonday1984@awful.systems 12 points 4 days ago (1 children)

New Atlantic article regarding AI, titled "AI Is a Mass-Delusion Event". It's primarily about the author's feelings of confusion and anxiety about the general clusterfuck that is the bubble.

[–] istewart@awful.systems 9 points 4 days ago

better, or equivalent to, a mass defecation event?
