this post was submitted on 08 Dec 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] gerikson@awful.systems 15 points 5 days ago (3 children)

Some rando mathematician checks out some IQ-related twin studies and finds out (gasp) that Cremieux is a manipulative liar with an agenda

https://davidbessis.substack.com/p/twins-reared-apart-do-not-exist

Said author is sad that Paul Graham retweets the claim and does not draw the obvious conclusion that PG is just a bog-standard Silicon Valley VC pseudo-racist.

HN is not happy and a green username calling themselves "jagoff" leads the charge

https://news.ycombinator.com/item?id=46195226

[–] YourNetworkIsHaunted@awful.systems 8 points 5 days ago* (last edited 5 days ago)

Honestly I'm kinda grateful for people who dig into and analyze the actual data in seeming ignorance of the political context of the people pushing the other side. It's one thing to know that Jordan "Crimieux" Lasker and friends are out here doing Deutsch Genetik and another to have someone cleanly and clearly lay out exactly why they're wrong, especially when they do the work of assembling a more realistic and useful story about the same studies.

[–] Architeuthis@awful.systems 12 points 5 days ago (1 children)

Very good read actually.

Except, from the epilogue:

People are working to resolve [intelligence heritability issue] with new techniques and meta-arguments. As far as I understand, the frontline seems to be stabilizing around the 30-50% range. Sasha Gusev argues for the lower end of that band, but not everyone agrees.

The not-everyone-agrees link is to acx and siskind's take on the matter, who unfortunately seems to continue to fly under the radar as a disingenuous eugenicist shitweasel with a long-term project of using his platform to sane-wash gutter racists who pretend at doing science.

[–] gerikson@awful.systems 10 points 5 days ago

Yeah, the substacker seems either naive or genuinely misinformed about Siskind's ultimate agenda, but in their defense Scott is really really good at vomiting forth torrents of beige prose that obscure it.

[–] Soyweiser@awful.systems 9 points 5 days ago* (last edited 5 days ago) (1 children)

Here’s the bottom line: I have no idea what motivated Cremieux to include Burt’s fraudulent data, but even without it his visual is highly misleading, if not manipulative

Well, the latter part of this sentence gives a hint at the actual reason.

And the first comment is by Hanania lol, trying to debunk the fraud allegations by saying that's just how things were done back then, while also not realizing he didn't understand the first part of the article. Amazing how these iq anon guys always react quickly and to everything. It was also quite an issue on Reddit, where just a small dismissal of IQ could lead to huge (copy-pasted) rambling defenses of IQ.

The author is also calling Richard out on his weird framing.

It's fascinating to watch Hanania try to do politics in a comment space more focused on academic inquiry, and how silly he looks here. He can't participate in this conversation without trying to make it about social interventions and class warfare (against the poor), even though I don't know that Bessis would disagree that social interventions can't significantly increase the number of mathematical or scientific geniuses in a country (1). Instead, Hanania throws out a few brief, unsupported arguments, gets asked for clarification and validation, accuses everyone of being woke, and gets basically ignored as the conversation continues around him.

This feels like the kind of environment that Siskind and friends claim to be wanting to create, but it feels like they're constitutionally incapable of actually doing the "ignore Nazis until they go away" part.

  1. From his other post linked in the thread he credits that level of aptitude to idiosyncratic ways of thinking that are neither genetically nor socially determined, but can be cultivated actively through various means. The reason that the average poor Indian boy doesn't become Ramanujan is the same reason you or I or his own hypothetical twin brother didn't; we're not Ramanujan. This doesn't mean that we can't significantly improve our own ability to understand and use mathematical thinking.
[–] Architeuthis@awful.systems 6 points 5 days ago (1 children)

Apparently you can ask gpt-5.2 to make you a zip of /home/oai and it will just do it:

https://old.reddit.com/r/OpenAI/comments/1pmb5n0/i_dug_deeper_into_the_openai_file_dump_its_not/

An important takeaway I think is that instead of Actually Indian it's more like Actually a series of rushed scriptjobs - they seem to be trying hard to not let the llm do technical work itself.

Also, it seems their sandboxing amounts to filtering paths that start with /.
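If the filter really is just a string-prefix check on "/", that's trivial to sidestep. A purely hypothetical sketch (not OpenAI's actual code) of why prefix filtering isn't sandboxing:

```python
# Hypothetical sketch of prefix-based "sandboxing": reject any
# path that starts with "/", allow everything else.
def is_blocked(path: str) -> bool:
    return path.startswith("/")

# The absolute path is caught...
assert is_blocked("/home/oai/secret")
# ...but a relative path resolving to the same file walks right through.
assert not is_blocked("../../home/oai/secret")
```

Resolving the path first (e.g. with `os.path.realpath`) before checking it would at least close the relative-path hole.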

[–] corbin@awful.systems 5 points 5 days ago (1 children)

They (or the LLM that summarized their findings and may have hallucinated part of the post) say:

It is a fascinating example of "Glue Code" engineering, but it debunks the idea that the LLM is natively "understanding" or manipulating files. It's just pushing buttons on a very complex, very human-made machine.

Literally nothing that they show here is bad software engineering. It sounds like they expected that the LLM's internals would be 100% token-driven inference-oriented programming, or perhaps a mix of that and vibe code, and they are disappointed that it's merely a standard Silicon Valley cloudy product.

My analysis is that Bobby and Vicky should get raises; they aren't paid enough for this bullshit.

By the way, the post probably isn't faked. Google-internal go/ URLs do leak out sometimes, usually in comments. Searching GitHub for that specific URL turns up one hit in a repository which claims to hold a partial dump of the OpenAI agents. Here is combined_apply_patch_cli.py. The agent includes a copy of ImageMagick; truly, ImageMagick is our ecosystem's cockroach.

[–] Architeuthis@awful.systems 5 points 5 days ago

OpenAI's yearly payroll runs in the billions, so they probably aren't hurting.

That Almost AGI is short for Actually Bob and Vicky seems like quite the embarrassment, however.

[–] corbin@awful.systems 5 points 5 days ago (1 children)

I got jumpscared by Gavin D. Howard today; apparently his version of bc appeared on my system somehow, and his name's in the copyright notice. Who is Gavin anyway? Well, he used to have a blog post that straight-up admitted his fascism, but I can't find it. I could only find, say, the following five articles, presented chronologically:

Also, while he's apparently not caused issues for NixOS maintainers yet, he's written An Apology to the Gentoo Authors for not following their rules when it comes to that same bc package. So this might be worth removing for other reasons than the Christofascist authorship.

BTW his code shows up because it's in upstream BusyBox and I have a BusyBox on my system for emergency purposes. I suppose it's time to look at whether there is a better BusyBox out there. Also, it looks like Denys Vlasenko has made over one hundred edits to this code to integrate it with BusyBox, fix correctness and safety bugs, and improve performance; Gavin only made the initial commit.

[–] gerikson@awful.systems 3 points 5 days ago

Pretty sure this guy has some serious mental health issues.

[–] swlabr@awful.systems 24 points 1 week ago* (last edited 1 week ago) (2 children)

After finding out about her here, I’ve been watching a lot of Angela Collier videos lately. Here’s the most recent one which talks about our life extending friends.

E: just expressing my general appreciation for her vids. Things that I like:

  • low frequency of cuts/her speech isn’t broken up into 5 second clips
  • lack of kowtowing to algorithmic suggestion
  • subtle, dry humour

Which I’m now realising is somewhat counter to current trends in content, which might be contributing to why I like these.

[–] Amoeba_Girl@awful.systems 12 points 1 week ago

she's great

[–] e8d79@discuss.tchncs.de 19 points 1 week ago (3 children)

It might have already been posted here, but this Wikipedia guide to recognizing AI slop is such a good resource.

[–] zogwarg@awful.systems 19 points 1 week ago* (last edited 1 week ago) (2 children)

A fairly good and nuanced guide. No magic silver-bullet shibboleths for us.

I particularly like this section:

Consequently, the LLM tends to omit specific, unusual, nuanced facts (which are statistically rare) and replace them with more generic, positive descriptions (which are statistically common). Thus the highly specific "inventor of the first train-coupling device" might become "a revolutionary titan of industry." It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated.

I think it's an excellent summary, and connects with the "Barnum effect" of LLMs, making them appear smarter than they are. And that it's not the presence of certain words, but the absence of certain others (and, well, content) that is a good indicator of LLM-extruded garbage.

[–] BlueMonday1984@awful.systems 13 points 1 week ago (4 children)

Doing a quick search, it hasn't been posted here until now - thanks for dropping it.

In a similar vein, there's a guide to recognising AI-extruded music on Newgrounds, written by two of the site's Audio Moderators. This has been posted here before, but having every "slop tell guide" in one place is more convenient.

[–] nfultz@awful.systems 18 points 1 week ago (2 children)

I did it, I went and made an Official Public Comment IRL:

In UCLA's Strategic Plan, Goal 1 is to "Deepen our engagement with Los Angeles" and Goal 5 is to "Become a more effective institution". By engaging with Los Angeles businesses, UCLA can both get better terms, prices, and services, and support the local economy. Buy Local, Spend Local.

The federal government encourages this with Small Business Innovation Research and Small Business Technology Transfer grants, among other things. Furthermore, the State of California requires a portion of its spending go toward certified Small Businesses.

And yet, the University apparently awarded a contract reportedly worth hundreds of thousands to millions of dollars to OpenAI. I have not found any documentation of an open Request for Proposals or competitive process for that award.

My question is:

If there was an RFP, where was it publicly posted, and if there was no RFP, why not, and were Los Angeles vendors or small businesses evaluated as alternatives, as recommended by UC policy and state law?

Given the scale of this spending and the context of a budget crisis, transparency, compliance, and small-business participation are critical to our effectiveness and engagement.

I’m asking for clarity on how this decision was made, how it aligns with procurement guidelines and University goals, and how DTS plans to ensure that local and small businesses are meaningfully included moving forward.

Thank you.

[–] istewart@awful.systems 10 points 6 days ago

The response has a high probability of being evasive bullshit, but will be worth archiving no matter what.

[–] BioMan@awful.systems 17 points 1 week ago* (last edited 1 week ago) (8 children)

The Great Leader himself, on how he avoids going insane during the ongoing End of the World because, among other things, that's not what an intelligent character would do in a story, but you might not be capable of that.

[–] swlabr@awful.systems 14 points 1 week ago* (last edited 1 week ago) (5 children)

The first and oldest reason I stay sane is that I am an author, and above tropes.

Nobody is above tropes. Tropes are just patterns you see in narratives. Everything you can describe is a trope. To say you are above tropes means you don’t live and exist.

Going mad in the face of the oncoming end of the world is a trope.

Not going mad as the world ends is also a trope, you fuck!

This sense -- which I might call, genre-savviness about the genre of real life -- is historically where I began; it is where I began, somewhere around age nine, to choose not to become the boringly obvious dramatic version of Eliezer Yudkowsky that a cliche author would instantly pattern-complete about a literary character facing my experiences.

We now have a canon mental age for Yud of drumroll nine.

Just decide to be sane

That isn’t how it works, idiot. You can’t “decide to be sane”, that’s like having a private language.

Anyway, just to make the subtext of my other comments into text. Acting like you are a character in a story is a dissociative delusion and counter to reality. It is definitively not sane. Insane, if you will.

[–] fullsquare@awful.systems 12 points 1 week ago (1 children)

they really thonk that people work just like chatbots, are they

[–] swlabr@awful.systems 11 points 1 week ago* (last edited 1 week ago)

Followup:

Look, the world is fucked. All kinds of paradigms we've been taught have been broken left and right. The world has ended many times over in this regard. In place of anything interesting or helpful to address this, Yud's encoded a giant turd into a blog post. How to stay sane? Just stay sane, bro. Easy to say if the only thing threatening your worldview is a made-up robodemon that will never exist.

Here's Yud's actually-quite-easy-to-understand suggestions:

  1. detach from reality by pretending you are a character in a story as a coping mechanism.
  2. assume no personal responsibility or agency.
  3. don't go insane, i.e. make sure you try and fulfil society's expectations of what sanity is.

All of these are terrible. In general, you want to stay grounded in reality, be aware of the agency you have in the world, and don't feel pressured to performatively participate in society, especially if that means doing arbitrary rituals to prove that you are "sane".

Here are my thoughts on "how to stay sane" and "how to cope":

It's entirely reasonable to crash out. I don't want anyone to go insane, but fucking look at all this shit. Datacenters are boiling the oceans. Liberalism is starting its endgame into fascism. All the fucking genocides! Dissociating is acceptable and expected as an emotional response. All of this has been happening in (modern) human history to a degree where crashing out has been reasonable. Yet, many people have been able to "stay sane" in the face of this. If you see someone who appears to be sane, either they're fucked in the head, or they have some perspective or have built up some level of resilience. Whether or not those things can be helpful to someone else is not deterministic. If you are someone who has "stayed sane", please remember to show some empathy and some awareness that it's fine if someone is miserable, because again, everything is fucked.

Putting the above together, I accept basically any reaction to the state of the world. It's reasonable to go either way, and you shouldn't feel bad either way. "Sanity" has different meanings depending on where you look. I think there's a common, unspoken definition that basically boils down to "a sane person is someone who can productively participate in society." This is not a standard you always need to hold yourself to. I think it's helpful to introspect and, uh, "extrospect", here. Like, figure out what you think it means to be sane, what you want it to mean, and what you want. And bounce these ideas off of someone else, because that usually helps.

I think there is another common definition of sanity that might just be "mentally healthy". To that end, things that have helped me, aside from therapy, that aren't particularly insightful or unique:

  1. Talking to friends
  2. Finding places to talk about the world going to shit.
  3. Participating in community, online or irl.
  4. Basically just finding spaces where stupid shit gets dunked on.
  5. Leftist meme pages

I mean, is that so fucking hard to say?

[–] JFranek@awful.systems 11 points 1 week ago

Joke's on him, Know-Nothing Know-It-All is also a trope.

[–] zogwarg@awful.systems 10 points 1 week ago (10 children)

Screaming at the void towards Chuunibyou (wiki) Eliezer: YOU ARE NOT A NOVEL CHARACTER, THINKING OF WHAT BENEFITS THE NOVELIST vs THE CHARACTER HAS NO BEARING ON REAL LIFE.

Sorry for yelling.

Minor notes:

But thinks I should say it, so I will say it. [...] asked me to speak them anyways, so I will.

It's quite petty of Yud to be so passive-aggressive towards the employee who insisted he at least try to discuss coping, name-dropping him not once but twice (although that is also likely just poor editing).

"How are you coping with the end of the world?" [...Blah...Blah...Spiel about going mad tropes...]

Yud, when journalists ask you "How are you coping?", they don't expect you to be "going mad facing apocalypse", that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.

Alternatively it's also a question to gauge how full of shit you may be. (By gauging how emotionally invested you are)

The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn't tempt me to write, and it doesn't tempt me to be.

Emotional turmoil and how characters cope, or fail to cope, makes excellent literature! That all you can think of is "going mad" reflects only your poor imagination as both a writer and a reader.

I predict, because to them I am the subject of the story and it has not occurred to them that there's a whole planet out there too to be the story-subject.

This is only true if they actually accept the premise of what you are trying to sell them.

[...] I was rolling my eyes about how they'd now found a new way of being the story's subject.

That is deeply ironic, coming from someone who makes choices based on being the main character of a novel.

Besides being a thing I can just decide, my decision to stay sane is also something that I implement by not writing an expectation of future insanity into my internal script / pseudo-predictive sort-of-world-model that instead connects to motor output.

If you are truly doing this, I would say that means you are expecting insanity wayyyyy too much. (also psychobabble)

[...Too painful to actually quote psychobabble about getting out of bed in the morning...]

In which Yud goes into deep, self-aggrandizing, nonsensical detail about a very mundane trick for getting out of bed in the morning.

[–] YourNetworkIsHaunted@awful.systems 11 points 1 week ago (2 children)

Yud seems to have the same conception of insanity that Lovecraft did, where you learn too much and end up gibbering in a heap on the floor and needing to be fed through a tube in an asylum or whatever. Even beyond the absurdity of pretending that your authorial intent has some kind of ability to manifest reality as long as you don't let yourself be the subject (this is what no postmodernism does to a person), the actual fear of "going mad" seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us. I guess prophets of doom aren't really known for being stable or immune to narcissistic flights of fancy.

[–] gerikson@awful.systems 10 points 1 week ago

Having a SAN stat act like an INT (IQ) stat is very on brand for rationalists (except ofc the INT stat is immutable duh)

[–] scruiser@awful.systems 10 points 1 week ago

the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us

I think he actually is failing to handle the stress he has inflicted on himself, and that's why his latest few lesswrong posts had really stilted, poor parables about chess and about alien robots visiting earth that were much worse than the classic Sequences parables. And why he has basically given up trying to think of anything new and instead keeps playing the greatest lesswrong hits on repeat, as if that would convince anyone that isn't already convinced.

[–] blakestacey@awful.systems 11 points 1 week ago* (last edited 1 week ago) (1 children)

"How do you keep yourself from going insane?"

"I tell myself I'm a character from a book who comes to life and is also a robot!" (Hubert Farnsworth giggle)

[–] swlabr@awful.systems 11 points 1 week ago

“I like to dissociate completely! Wait, what was the question?”

[–] scruiser@awful.systems 10 points 1 week ago

Yud, when journalists ask you "How are you coping?", they don't expect you to be "going mad facing apocalypse", that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.

I think the way he reads the question is telling on himself. He knows he is sort of doing a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and continuing to follow the lib-centrist deep down inside himself and rejecting violence or even direct action against the AI companies that are hurling us towards an apocalypse). He knows a character from one of his stories would have a much cooler response, but it might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations that include failing to empathize with the journalist and understand the question.

[–] Amoeba_Girl@awful.systems 10 points 1 week ago (2 children)

tl;dr i don't actually believe the world is going to end but more importantly i'm Ender Wiggin


It's very meta for Yud to write a story all about how the story isn't all about himself.

[–] gerikson@awful.systems 16 points 1 week ago (7 children)

Old Man Stallman comes out swinging against ChatGPT specifically, adding it to the long long list of stuff he doesn't like. For some reason HN is mad at this, as if RMS saying slop is actually good would convince anyone normal to start using it

https://news.ycombinator.com/item?id=46203591

[–] BlueMonday1984@awful.systems 14 points 1 week ago (1 children)

Heartbreaking news today.

In a major setback for right-to-repair, iFixit has jumped on the slop bandwagon, introducing an "AI repair helper" to their website that steals "the knowledge base of over 20 years of repair experts" (to quote their dogshit announcement on YouTube) and uses it to hallucinate "repair guides" and "step-by-step instructions" for its users.

[–] sc_griffith@awful.systems 13 points 1 week ago* (last edited 1 week ago) (5 children)

"you should be able to provide an LLM as a job reference"

source https://x.com/ID_AA_Carmack/status/1998753499002048589

[–] swlabr@awful.systems 15 points 1 week ago (2 children)

This is offensively stupid lol

[–] YourNetworkIsHaunted@awful.systems 11 points 1 week ago (3 children)

I'm legitimately disappointed in John Carmack here. He should be a good enough programmer to understand the limitations, but I guess his business career has driven him in a different direction.

[–] YourNetworkIsHaunted@awful.systems 11 points 1 week ago (1 children)

Patrick Boyle on YouTube has a breakdown of the breakdown of the Microstrategy flywheel scheme. Decent financial analysis of this nonsense combined with some of the driest humor on the internet.
