this post was submitted on 26 Jan 2026

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. What a year, huh?)

[–] corbin@awful.systems 1 points 2 minutes ago

Kyle Hill has gone full doomer after reading too much Big Yud and the Yud & Soares book. His latest video is titled "Artificial Superintelligence Must Be Illegal." Previously, on Awful, he was cozying up to effective altruists and longtermists. He used to have a robotic companion character who would banter with him, but it seems like he's no longer in that sort of jocular mood; he doesn't trust his waifu anymore.

[–] BlueMonday1984@awful.systems 2 points 27 minutes ago
[–] blakestacey@awful.systems 4 points 5 hours ago (2 children)

Is Pee Stored in the Balls? Vibe Coding Science with OpenAI's Prism

https://bsky.app/profile/carlbergstrom.com/post/3mdgtf2e6vc2c

[–] blakestacey@awful.systems 3 points 4 hours ago* (last edited 4 hours ago) (1 children)

Chris Lintott (@chrislintott.bsky.social):

We’re getting so many journal submissions from people who think ‘it kinda works’ is the standard to aim for.

Research Notes of the AAS in particular, which was set up to handle short, moderated contributions especially from students, is getting swamped. Often the authors clearly haven't read what they're submitting (descriptions of figures that don't exist or don't show what they purport to).

I’m also getting wild swings in topic. A rejection of one paper will instantly generate a submission of another, usually on something quite different.

Many of these submissions are dense with equations and pseudo-technological language which makes it hard to give rapid, useful feedback. And when I do give feedback, often I get back whatever their LLM says.

Including verbatim LLM responses, like 'Oh yes, I see that is wrong, I've removed it. Here's something else.'

Research Notes is free to publish in and I think provides a very valuable service to the community. But I think we’re a month or two from being completely swamped.

[–] BlueMonday1984@awful.systems 3 points 29 minutes ago

It gets worse:

One of the great tragedies of AI and science is that the proliferation of garbage papers and journals is creating pressure to return to more closed systems based on interpersonal connections and established prestige hierarchies that had only recently been opened up somewhat to greater diversity.

[–] gerikson@awful.systems 4 points 5 hours ago

Ow! My Balls

[–] gerikson@awful.systems 8 points 11 hours ago (6 children)

enjoy this glorious piece of LW lingo

Aumann's agreement is pragmatically wrong. For bounded levels of compute you can't necessarily converge on the meta level of evidence convergence procedures.

src

No, I don't know what it means, and I don't want it explained to me. Just let me bask in its inscrutability.

[–] lagrangeinterpolator@awful.systems 7 points 6 hours ago* (last edited 6 hours ago) (2 children)

The sad thing is I have some idea of what it's trying to say. One of the many weird habits of the Rationalists is that they fixate on a few obscure mathematical theorems and then come up with their own ideas of what these theorems really mean. Their interpretations may be only loosely inspired by the actual statements of the theorems, but it does feel real good when your ideas feel as solid as math.

One of these theorems is Aumann's agreement theorem. I don't know what the actual theorem says, but the LW interpretation is that any two "rational" people must eventually agree on every issue after enough discussion, whatever rational means. So if you disagree with any LW principles, you just haven't read enough 20k word blog posts. Unfortunately, most people with "bounded levels of compute" ain't got the time, so they can't necessarily converge on the meta level of, never mind, screw this, I'm not explaining this shit. I don't want to figure this out anymore.
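
(For anyone who does want the actual statement, a rough from-memory sketch follows; hedge accordingly, since this is a paraphrase for illustration and not the paper's formalism.)

```latex
% A from-memory sketch of Aumann (1976), "Agreeing to Disagree";
% the notation here is assumed for illustration, not taken from the
% thread or quoted from the paper. Two agents i = 1, 2 share a common
% prior P and have information partitions \mathcal{P}_1, \mathcal{P}_2
% over the states of the world. Write each agent's posterior for an
% event E as
\[
  q_i = P\left( E \mid \mathcal{P}_i \right), \qquad i = 1, 2.
\]
% The theorem: if the values q_1 and q_2 are common knowledge
% between the two agents, then
\[
  q_1 = q_2.
\]
% Note the hypotheses: a common prior, and common knowledge of the
% posteriors. There is nothing in the statement about compute
% budgets, "meta levels", or convergence after enough discussion.
```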

[–] blakestacey@awful.systems 5 points 6 hours ago (1 children)
[–] zogwarg@awful.systems 3 points 28 minutes ago

Honestly, even the original paper is a bit silly. Are all game theory mathematics papers this needlessly far-fetched?

[–] pikesley@mastodon.me.uk 4 points 6 hours ago

@gerikson @lagrangeinterpolator

> but it does feel real good when your ideas feel as solid as math

Misread this as "meth", perfect, no further questions

[–] mirrorwitch@awful.systems 10 points 7 hours ago

This sounds exactly like the sentence right before "they have played us for absolute fools!" in that meme.

[–] istewart@awful.systems 8 points 8 hours ago

oh man, it's Aumann's

[–] nightsky@awful.systems 3 points 7 hours ago

Are you trying to say that you are not regularly thinking about the meta level of evidence convergence procedures?

[–] mawhrin@awful.systems 10 points 10 hours ago

retains the same informational content after running through rot13

[–] Soyweiser@awful.systems 2 points 7 hours ago* (last edited 6 hours ago)

Tbh, this is pretty convincing, I agree a lot more with parts of the LW space now. (Just look at the title, the content isn't that interesting).

[–] mirrorwitch@awful.systems 9 points 13 hours ago* (last edited 13 hours ago) (1 children)

I gave the new ChatGPT Health access to 29 million steps and 6 million heartbeat measurements ["a decade of my Apple Watch data"]. It drew questionable conclusions that changed each time I asked.

WaPo. Paywalled but I like how everything I need to know is already in the blurb above.

[–] TrashGoblin@awful.systems 3 points 6 hours ago

Archive link, but you can extrapolate the whole article from the blurb. Mostly. It's actually slightly worse than the blurb suggests.

[–] JFranek@awful.systems 9 points 17 hours ago (1 children)

I think I installed the cursed Windows 11 update on my work machine, because after taking several tries to boot, my second monitor stopped working (detected, but showing a black screen).

Tried some different configurations, and could only ever get one screen working at best, sometimes none.

Uninstalled the update and everything worked correctly again.

Thanks for nothing Microslop.

[–] flaviat@awful.systems 5 points 15 hours ago

I also had a computer not boot. Tried installing Windows 11, but the ISO doesn't include network card drivers and requires a second drive that has them. I just happened to have one, but it malfunctioned. Was assured IT would fix it, but it still doesn't boot. :(

[–] CinnasVerses@awful.systems 12 points 1 day ago (2 children)

A few people in LessWrong and Effective Altruism seem to want Yud to stay in the background while they get on with organizing his teachings into doctrine, dumping the awkward ones down the memory hole, and building a movement that can last when he goes to the Great Anime Convention in the Sky. In 2022 someone on the EA forum posted On Deference and Yudkowsky's AI Risk Estimates (i.e. "Yud has been bad at predictions in the past, so we should be skeptical of his predictions today").

[–] lurker@awful.systems 2 points 4 hours ago

that post got way funnier with Eliezer's recent twitter post about "EAs developing more complex opinions on AI other than it'll kill everyone is a net negative and cancelled out all the good they ever did"

[–] Architeuthis@awful.systems 8 points 18 hours ago

A religion is just a cult that survived its founder -- someone, at some point.

[–] nightsky@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

The AI craze might end up killing graphics card makers:

Zotac SK's message: "(this) current situation threatens the very existence of (add-in-board partners) AIBs and distributors."

The situation is serious enough to threaten the continued existence of graphics card manufacturers and distributors: they announced that memory supply will not be sufficient and that GPU supply will also be reduced.

Curiously, Zotac Korea has included lowly GeForce RTX 5060 SKUs in its short list of upcoming "staggering" price increases.

(Source)

I wonder if the AI companies realize how many people will be really pissed off at them when so many tech-related things become expensive or even unavailable, and everyone knows it's only because of useless AI data centers.

[–] istewart@awful.systems 13 points 1 day ago (1 children)

I am confident that Altman in particular has a poor-to-nonexistent grasp of second-order effects.

[–] mirrorwitch@awful.systems 9 points 13 hours ago

I mean, you don't have to grasp, know of, or care about the consequences when none of them will touch you, and after the bubble pops and the company goes catastrophically bankrupt, you will remain comfortably a billionaire, with several more billions in your aire than you had when you started the bubble in the first place. Consequences are for the working class; capitalists fall upwards.

[–] mirrorwitch@awful.systems 15 points 1 day ago (1 children)

Cloudflare just announced in a blog post that they built:

a serverless, post-quantum Matrix homeserver.

It's a vibe-coded pile of slop where most of the functions are placeholders like // TODO: check authorization.

Full thread: https://tech.lgbt/@JadedBlueEyes/115967791152135761

[–] antifuchs@awful.systems 4 points 21 hours ago (1 children)

And of all possible things to implement, they chose Matrix. lol and lmao.

[–] mirrorwitch@awful.systems 8 points 13 hours ago* (last edited 11 hours ago)

The interesting thing in this case for me is how anyone thought it was a good idea to draw attention to their placeholder code with a blog post. Like, how did they go all the way to vibing a full post without even cursorily glancing at the slop commits?

I'm convinced by now that at least mild forms of "AI psychosis" affect all chatbot users; after a period of time interacting with what Angela Collier called "Dr. Flattery the Always Wrong Robot", people will hallucinate fully working projects without even trying to test whether the code compiles.

[–] mawhrin@awful.systems 17 points 1 day ago (3 children)

just to note that reportedly the palantir employees are for whatever reason going through a massive “hans, are we the baddies” moment, almost a whole year into the second trump administration.

as i wrote elsewhere, those people need to be subjected to actual social consequences of choosing to work with and for the u.s. concentration camp administration office.

[–] sc_griffith@awful.systems 8 points 23 hours ago (1 children)

this happens like clockwork

13 ex-Schutzstaffel employees condemn work as violating the SS code of conduct. "Don't let this be what the Totenkopf stands for."

[–] Architeuthis@awful.systems 6 points 18 hours ago* (last edited 18 hours ago)

It's so blindingly obvious that it's become obscure again, so it bears pointing out: someone really went ahead and named a tech company after a fantasy torment nexus, and people thought it wouldn't be sketch.

[–] aninjury2all@awful.systems 10 points 1 day ago

On a semi-adjacent note I came across an attorney who helped to establish and run the Department of Homeland Security (under Bush AND Trump 1)

Who wants you to know he's ENRAGED. And EMBARRASSED. At how the American Schutzstaffel is doing Schutzstaffel things.

He also wants you to know he’s Jewish (so am I, and I know our history enough that Homeland Security always had ‘Blood and Soil’ connotations you fucking shande)

[–] BigMuffN69@awful.systems 12 points 1 day ago* (last edited 1 day ago) (1 children)

I have family working there, who told me during the holidays, “Current leadership makes me uncomfortable, but money is good”

Every impression I had of them was completely shattered; I cannot fathom that level of sell-out existing in people I thought I knew.

As a bonus, their former partner was a former employee who became a whistleblower and has now gone full Howard Hughes.

[–] sansruse@awful.systems 10 points 1 day ago (1 children)

anyone who can get a job at Palantir can get an equivalent-paying job at a company that's at least measurably less evil. What a lazy copout.

[–] BigMuffN69@awful.systems 6 points 1 day ago

On one hand, as a poor grad student in the past, I could imagine working for a truly repugnant corp. But like, if you've already made millions from your stock options, wtf are you doing? Idk, I really thought they'd have some shame over it, but they said shit like "our customers really like our deliverables" and I just fucking left with my wife.

[–] rook@awful.systems 13 points 1 day ago (2 children)

I have mixed feelings about this one: The Enclosure feedback loop (or how LLMs sabotage existing programming practices by privatizing a public good).

The author is right that Stack Overflow has basically shrivelled up and died, and that LLM vendors are trying to replace it with private sources of data they'll never freely share with the rest of us, but I don't think that chatbot dev sessions are in any way "high quality data". The number of occasions when a chatbot-user actually introduces genuinely useful and novel information will be low, and the ability of chatbot companies to even detect that circumstance will be lower still. It isn't enclosing valuable commons, it is squirting sealant around all the doors so the automated fart-huffing system and its audience can't get any fresh air.

[–] istewart@awful.systems 6 points 1 day ago

I don’t think that chatbot dev sessions are in any way “high quality data”.

Yeah, Gas Town is being belabored to death, but it must be reiterated that I doubt the long-term value proposition of "Kubernetes fan fiction"

[–] gerikson@awful.systems 5 points 1 day ago (1 children)

I also didn't find the argument very persuasive.

The LLM companies aren't paying anything for content. Why should they stop scraping now?

[–] rook@awful.systems 9 points 1 day ago

Oh, they won’t. It’s just that they’ve already killed the golden goose, and no-one is breeding new ones, and they need an awful lot of gold still.

[–] BlueMonday1984@awful.systems 10 points 2 days ago

Daniel Stenberg has written the cURL bug bounty's obituary, and discussed his plans for dealing with the slop-nami going forward.

[–] o7___o7@awful.systems 3 points 2 days ago (1 children)
[–] istewart@awful.systems 4 points 1 day ago

I spent the last few years working in a prototype testing role on an active cattle ranch (don't ask) and this phenomenon reminds me of what's left on the ground after the herd moves through on their way up the canyon
