this post was submitted on 28 Sep 2025
27 points (96.6% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
[–] gerikson@awful.systems 11 points 1 month ago (6 children)

Some Rat content got shared on HN, and the rats there are surprised and outraged that not everyone shares their deathly fear of the AI god:

https://news.ycombinator.com/item?id=45451971

"Stop bringing up Roko's Basilisk!!!" they sputter https://news.ycombinator.com/item?id=45452426

"The usual suspects are very very worried!!!" - https://news.ycombinator.com/item?id=45452348 (username 'reducesuffering checks out!)

``Think for at least 5 seconds before typing.'' - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

[–] dgerard@awful.systems 11 points 1 month ago* (last edited 1 month ago) (3 children)

https://news.ycombinator.com/item?id=45453386

nobody mentioned this particular incident, dude just threw it into the discussion himself

[–] sc_griffith@awful.systems 10 points 1 month ago

incredible how he rushes to assure us that this was "a really hot 17 year old"

[–] corbin@awful.systems 9 points 1 month ago (1 children)

The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
  2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
  3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

Ignoring that IQ doesn't really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

Frankly I wish that they'd understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.

[–] BigMuffN69@awful.systems 11 points 1 month ago (1 children)

https://scottaaronson.blog/?p=9183

Quantum scoot is quantum spooked 😱 after GPT-5 manages to solve a subproblem for him (after multiple attempts), and thanks the powers that be for his tenure!

… even though GPT-5 probably generated the answer via web search

[–] lagrangeinterpolator@awful.systems 10 points 1 month ago* (last edited 1 month ago) (1 children)

After seeing this, I reminded myself that I've seen this type of thing happen before. Over the past half year, so many programmers enthusiastically embraced vibe coding after seeing one or two impressive results when trying it out for themselves. We all know how that is going right now. Baldur Bjarnason had some great essays (1, 2) about the dangers of relying on self-experimentation when judging something, especially if you're already predisposed to believing it. It's like a mark believing in a psychic after the psychic throws out a couple dozen vague statements and one of them happens to match something meaningful, once the mark interprets it for him.

Edit: Accidentally hit reply too early.

[–] BigMuffN69@awful.systems 9 points 1 month ago (3 children)

You'd think he would maybe, idk, search around to see if this was a known formula before making such a bombastic statement…

[–] rook@awful.systems 11 points 1 month ago (1 children)

In a move that is not in any way ominous, and everyone involved has carefully thought through all the consequences, there’s a sora-generated video of sam altman shoplifting gpus that’s apparently quite popular right now.

https://bsky.app/profile/drewharwell.com/post/3m23ob342h22a

(no embed because safari on ipad is weird about downloading or linking video)

[–] fasterandworse@awful.systems 11 points 1 month ago (3 children)

Microsoft launches ‘vibe working’ in Excel and Word

A new Office Agent in Copilot chat, powered by Anthropic models, is also launching today that can create PowerPoint presentations and Word documents from a “vibe working” chatbot.

[–] Jayjader@jlai.lu 11 points 1 month ago

McKinsey about to slash its own headcount after slashing everyone else’s

[–] blakestacey@awful.systems 10 points 1 month ago

While most bullish outlooks are premised on economic reacceleration, it’s difficult to ignore the market’s reliance on AI capex. In market-pricing terms, we believe we’re closer to the seventh inning than the first, and several developments indicate we may be entering the later phases of the boom. First, AI hyperscaler free-cash-flow growth has turned negative. Second, price competition in the “monopoly-feeder businesses” seems to be accelerating. Finally, recent deal-making smacks of speculation and vendor-financing strategies of old.

https://www.morganstanley.com/pub/content/dam/mscampaign/wealth-management/wmir-assets/gic-weekly.pdf

[–] nfultz@awful.systems 9 points 1 month ago

Newsom signed the AI Bill (https://www.nbcnews.com/tech/tech-news/ai-law-california-ca-companies-regulation-newsom-rcna234562), but it looks like they took out the private right of action that was in last year's version he vetoed, so it's basically defanged.

I do predict even more compliance pop-ups in the near future though.

[–] gerikson@awful.systems 9 points 1 month ago (4 children)

Guys according to LW you’re reading Omelas all wrong (just like Le Guin was wrong)

https://www.lesswrong.com/posts/n83HssLfFicx3JnKT/omelas-is-perfectly-misread

[–] corbin@awful.systems 11 points 1 month ago (1 children)

Choice sneer from the comments:

Omelas: how we talk about utopia [by Big Joel, a patient and straightforward YouTube humanist,] [has a] pretty much identical thesis, does this count?

Another solid one which aligns with my local knowledge:

It's also about literal child molesters living in Salem, Oregon.

The story is meant to be given to high schoolers to challenge their ethics, and in that sense we should read it with the following meta-narrative: imagine that one is a high schooler in Omelas and is learning about The Plight and The Child for the first time, and then realize that one is a high schooler in Salem learning about local history. It's not intended for libertarian gotchas because it wasn't written in a philosophical style; it's a narrative that conveys a mood and an ethical framing.

[–] gerikson@awful.systems 11 points 1 month ago

One of the many annoying traits of rationalists is their tendency to backproject classic pieces of literature onto their chosen worldview.
