this post was submitted on 07 Sep 2025
22 points (95.8% liked)

TechTakes

2163 readers
80 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
sorted by: hot top controversial new old
[–] fullsquare@awful.systems 8 points 2 days ago* (last edited 2 days ago) (1 children)

there was a post shared out there recentlyish (blogpost? substack? can't find it. enjoy vagueposting) that was about how ai companies have no clue what they're doing, and compared them to alchemy in that they also had no idea what they were doing, and over time moved on to more realistic goals, but while they had funding for these unrealistic goals they invented distillation and crystallization and black powder and shit. and the same for ai would be buildout of infra that can be presumably used for something later (citation needed).

so comparison of this entire ea/lw/openai milieu to alchemists is unjust for alchemists. alchemy has that benefit on its side that it was developed before scientific method was a proper thing, modern chatbot peddlers can't really claim that what they're doing is protoscience. what is similar is that alchemy and failed ea scifi writers claim that magic tech will get you similar things. cure for all disease (nanobots), immortality (cryonics or mind uploading or nanobots), infinite wisdom (chatbots), transformation of any matter at will (nanobots again), mind control derived from supreme rationality (ok this one comes from magic), synthetic life (implied by ai bioweapons, but also agi itself). when chinese alchemists figured out that mercury pills kill people and don't make them immortal, there was a shift to "inner alchemy" that is spiritual practices (mental tech). maybe eliezer &co are last alchemists (so far) and not first ai-safety-researchers

[–] Soyweiser@awful.systems 2 points 2 days ago (1 children)

That was in relation to nanotech right? Or am I confusing articles here.

[–] fullsquare@awful.systems 2 points 2 days ago

the piece was about ai dc buildout

[–] BlueMonday1984@awful.systems 5 points 3 days ago (1 children)

Saw an AI-extruded "art" "timelapse" in the wild recently - the "timelapse" in question isn't gonna fool anyone who actually cares about art, but it's Good Enough^tm^ to pass muster on someone mindlessly scrolling, and its creation serves only to attack artists' ability to prove their work was human made.

This isn't the first time AI bros have pulled this shit (Exhibit A, Exhibit B), by the way.

[–] o7___o7@awful.systems 5 points 3 days ago

The dingus who got ejected from dragoncon pulled the same shit

[–] swlabr@awful.systems 5 points 3 days ago

I was wondering about the origins of sneerclub and discovered something kinda fun: “r/SneerClub” pre-dates “r/BlogSnark”, the first example of a “snark subreddit” listed on the wiki page! The vibe of snark subreddits seem to be very different to that of sneerclub etc. (read: toxic!!!) but I wouldn’t know the specifics as I’m not a snark participant.

[–] blakestacey@awful.systems 14 points 4 days ago (1 children)
[–] BlueMonday1984@awful.systems 6 points 4 days ago

The report claims its about ethical AI use, but all I see is evidence that AI is inherently unethical, and an argument for banning AI from education forever.

[–] JFranek@awful.systems 6 points 4 days ago (1 children)

Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.

It talked about how it's almost impossible to detect whether a model was deliberately trained to output some "bad" output (like vulnerable code) for some specific set of inputs.

Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such LLM as a "sleeper agent". But maybe some of y'all will find it interesting.

link

[–] BlueMonday1984@awful.systems 5 points 4 days ago

This isn't the first time I've heard about this - Baldur Bjarnason's talked about how text extruders can be poisoned to alter their outputs before, noting its potential for manipulating search results and/or serving propaganda.

Funnily enough, calling a poisoned LLM as a "sleeper agent" wouldn't be entirely inaccurate - spicy autocomplete, by definition, cannot be aware that their word-prediction attempts are being manipulated to produce specific output. Its still treating these spicy autocompletes with more sentience than they actually have, though

[–] TinyTimmyTokyo@awful.systems 9 points 4 days ago (2 children)

Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.

Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (didn't listen to it yet). There's also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.

[–] Architeuthis@awful.systems 10 points 4 days ago* (last edited 4 days ago) (5 children)

Siskind did a review too, basically gives it the 'their hearts in the right place but... [read AI2027 instead]' treatment. Then they go at it a bit with Yud in the comments where Yud comes off as a bitter dick, but their actual disagreements are just filioque shit. Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.

https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154920454

https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154927504

Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it rather than being scared shitless of MAD, so AI non-proliferation by presumably appointing a rationalist Grand Inquisitor in charge of all human scientific progress is an obvious solution.

[–] istewart@awful.systems 8 points 3 days ago

Also they both seem to agree that a worldwide moratorium on AI research that will give us time to breed/genetically engineer superior brained humans to fix our shit is the way to go.

This century deserves a better class of thought-criminal

[–] Soyweiser@awful.systems 12 points 4 days ago

Also notable that apparently Siskind thinks nuclear non-proliferation sorta worked because people talked it out and decided to be mature about it

This is his claim about everything, including how we got gay rights. Real if all you have is a hammer stuff.

[–] fullsquare@awful.systems 7 points 4 days ago

assuming that nuclear nonproliferation is gonna hold up indefinitely for any reason is some real fukuyama's end of history shit

let alone "because it's Rational™ thing to do", it's only in rational interest of already-nuclear states to keep things this way. couple of states that could make a good point for having nuclear arsenal and having capability to manufacture it are effectively dissuaded from this by american diplomacy (mostly nuclear umbrella for allies and sanctions or fucking with their facilities for enemies). with demented pedo in chief and his idiot underlings trying their hardest to undo this all, i really wouldn't be surprised if, say, south korea decides to get nuclear

[–] TinyTimmyTokyo@awful.systems 7 points 4 days ago (2 children)

Yud: "That's not going to asymptote to a great final answer if you just run them for longer."

Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it it with Yud's weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It's like a cult signifier.

[–] zogwarg@awful.systems 6 points 4 days ago

It's also inherently-begging-the-question-silly, like it assumes that the Ideal of Alignment™, can never be reached but only approached. (I verb nouns quite often so I have to be more picky at what I get annoyed at)

[–] saucerwizard@awful.systems 3 points 3 days ago (3 children)

They think Yud is a world-historical intellect (I’ve seen claims on twitter he has a iq of 190 - yeah really) and by emulation a little of the old smartness can rub off on them.

[–] Soyweiser@awful.systems 3 points 3 days ago (1 children)

The normal max if an iq test is ~160 and from what I can tell nobody tests above it basically because it is not relevant. (And I assume testing problems and variance become to big statistical problems at this level). Not even sure how rare a 190 iq would be statistically, prob laughably rare.

[–] saucerwizard@awful.systems 3 points 2 days ago (1 children)

I don’t think these people have a good handle on how stuff actually works.

[–] Soyweiser@awful.systems 1 points 2 days ago

For a snicker I looked it up: https://iqcomparisonsite.com/iqtable.aspx

One in 100 million. So he would be in the top 80 smartest people alive right now. Which includes third world, children, elderly etc.

[–] Architeuthis@awful.systems 2 points 3 days ago* (last edited 3 days ago)

190IQ is when you verb asymptote to avoid saying 'almost'.

[–] fullsquare@awful.systems 1 points 3 days ago* (last edited 3 days ago) (1 children)

that's in practical terms meaningless, but just looking at statistics of it iq on the order of 190 would mean 1 in billion (1E9) per ever reliable rationalwiki https://rationalwiki.org/wiki/High_IQ_society

[–] Architeuthis@awful.systems 2 points 3 days ago

It's possible someone specifically picked the highest IQ that wouldn't need a second planet earth to make the statistics work.

[–] swlabr@awful.systems 5 points 4 days ago (1 children)

Trying to figure out if that Siskind take comes from a) a lack of criticality and/or the ability to read subtext or b) some ideological agenda to erase the role of violence (threats of violence are also violence!) in change happening, or both

I mean, he has admitted to believing a bunch of eugenics-y things that would be inarguably terrible if implemented by force, and maintaining this weird fig leaf that we could do it all voluntarily feels less blatantly dystopian. He has also admitted to being dishonest about his actual beliefs in his writing in order to advance those ideas, presumably in the same way that hardcore neonazis viewed Alex Jones and the "globalists" crowd as useful stepping stones for getting people into the right conspiratorial mindset so that they can just shift the target from "globalists" to "the Jews".

[–] CinnasVerses@awful.systems 5 points 4 days ago

When you are running a con like crypto or chatbot companies, it helps to know someone who is utterly naive and can't stop talking about whatever line you feed him. If this were the middle ages Kevin Roose would have an excellent collection of pigges bones and scraps of linen that the nice friar promised were relics of St Margaret of Antioch.

[–] Soyweiser@awful.systems 4 points 3 days ago (1 children)
[–] sailor_sega_saturn@awful.systems 6 points 3 days ago* (last edited 3 days ago)

The article claims that Google didn't "fall for the same trap" but that's not correct, all this garbage is indeterministic so the author just got "lucky".

It's like saying "four out of five coin-flips claimed that an eagle was the first US president" -- just because the fifth landed on heads and showed George Washington doesn't mean it's any different than the rest.

But here I'm preaching to the choir.

[–] antifuchs@awful.systems 20 points 5 days ago (2 children)

Whichever one of you did https://alignmentalignment.ai/caaac/jobs, well done, and many lols.

CAAAC is an open, dynamic, inclusive environment, where all perspectives are welcomed as long as you believe AGI will annihilate all humans in the next six months.

Alright, I can pretend to believe that, go on…

We offer competitive salaries and generous benefits, including no performance management because we have no way to assess whether the work you do is at all useful.

Incredible. I hope I get the job!

[–] V0ldek@awful.systems 2 points 2 days ago (1 children)

Not really any more insane than any other white-collar corporate job I've seen in my life

[–] antifuchs@awful.systems 2 points 2 days ago

Less insane and more honest, truly

[–] TinyTimmyTokyo@awful.systems 8 points 4 days ago

Make sure to click the "Apply Now" button at the bottom for a special treat.

[–] macroplastic@sh.itjust.works 4 points 4 days ago (1 children)
[–] Soyweiser@awful.systems 6 points 4 days ago

Reality has an anti-robot bias.

[–] blakestacey@awful.systems 14 points 5 days ago (12 children)

The Wall Street Journal came out with a story on "conspiracy physics", noting Eric Weinstein and Sabine Hossenfelder as examples. Sadly, one of their quoted voices of sanity is Scott Aaronson, baking-soda volcano of genocide apologism.

load more comments (12 replies)
load more comments
view more: next ›