this post was submitted on 01 Dec 2025
28 points (100.0% liked)

TechTakes

2334 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December's finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

[–] bitofhope@awful.systems 22 points 2 weeks ago
[–] TinyTimmyTokyo@awful.systems 18 points 2 weeks ago (4 children)

Bay Area rationalist Sam Kirchner, cofounder of the Berkeley "Stop AI" group, claims "nonviolence isn't working anymore" and goes off the grid. Hasn't been heard from in weeks.

Article has some quotes from Emile Torres.

https://archive.is/20251205074622/https://www.theatlantic.com/technology/2025/12/sam-kirchner-missing-stop-ai/685144/

[–] YourNetworkIsHaunted@awful.systems 15 points 1 week ago (1 children)

Jesus, it could be like the Zizians all over again. These guys are all such fucking clowns right up until they very much are not.

[–] swlabr@awful.systems 18 points 1 week ago (3 children)

Yud’s whole project is a pipeline intended to create zizians, if you believe that Yud is serious about his alignment beliefs. If he isn’t serious then it’s just an unfortunate consequence that he is not trying to address in any meaningful way.

[–] sc_griffith@awful.systems 18 points 1 week ago (1 children)

fortunately, yud clarified everything in his recent post concerning the zizians, which indicated that... uh, hmm, that we should use a prediction market to determine whether it's moral to sell LSD to children. maybe he got off track a little

[–] blakestacey@awful.systems 15 points 1 week ago (1 children)

A belief system that inculcates the believer into thinking that the work is the most important duty a human can perform, while also isolating them behind impenetrable pseudo-intellectual esoterica, while also funneling them into economic precarity... sounds like a recipe for ~~delicious brownies~~ trouble.

[–] BurgersMcSlopshot@awful.systems 11 points 1 week ago* (last edited 1 week ago) (1 children)

the real xrisk was the terror clowns we made along the way.

[–] swlabr@awful.systems 18 points 1 week ago (3 children)
[–] fullsquare@awful.systems 9 points 2 weeks ago (1 children)

just one rationalist got lost in the wilderness? that's nothing, tell me when all of them are gone

[–] jonhendry@iosdev.space 13 points 2 weeks ago (1 children)

@fullsquare

The concern is that they're not "lost in the wilderness" but rather are going to turn up in the vicinity of some newly dead people.

[–] scruiser@awful.systems 18 points 2 weeks ago (3 children)

Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

A very long, detailed post, elaborating very extensively the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has lied to rationalists/LessWrongers/EAs and broken "AI safety commitments" shamelessly and repeatedly:

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.

https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.

[–] lagrangeinterpolator@awful.systems 11 points 2 weeks ago (1 children)

If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don't think they understand that, given their penchant for 10k word blog posts.

One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don't care to count the exact number by this point), and not a single one of them has delivered anything into orbit. Fans generally tend to cling on to a few parlor tricks like the "chopstick" stuff. They seem to have forgotten that their goal was to land people on the moon. This goal had already been accomplished over 50 years ago by Apollo 11.

I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.

I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.

[–] froztbyte@awful.systems 16 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

(e, cw: genocide and culturally-targeted hate by the felon bot)

world's most divorced man continues outperforming black holes at sucking

404 also recently did a piece on his ego-maintenance society-destroying vainglory projects

imagine what it's like in his head. era-defining levels of vacuous.

[–] froztbyte@awful.systems 16 points 2 weeks ago (6 children)
[–] bitofhope@awful.systems 11 points 2 weeks ago (8 children)

If the TCP/IP stack had feelings, it would have a great reason to feel insulted.

[–] swlabr@awful.systems 11 points 2 weeks ago

hi hi I am budweiser jabrony please join my new famous and good website 'tapering incorrectness dot com' where we speculate about which OSI layers have the most consciousness (zero is not a valid amount of consciousness) also give money and prima nocta. thanks

[–] e8d79@discuss.tchncs.de 16 points 2 weeks ago (4 children)

Hey Google, did I give you permission to delete my entire D drive?

It's almost as if letting an automated plagiarism machine execute arbitrary commands on your computer is a bad idea.

[–] sailor_sega_saturn@awful.systems 16 points 2 weeks ago

The documentation for Google Antigravity's "Turbo mode":

Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)

No warning. No paragraph telling the user why it might be a good idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It's not even named similarly to dangerous modes in other software (like "force" or "yolo" or "danger").

Just a cool marketing name that makes users want to turn it on. Heck if I'm using some software and I see any button called "turbo" I'm pressing that.

It's hard not to give the user a hard time when they write:

Bro, I didn’t know I needed a seatbelt for AI.

But really they're up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells the user "well, in our small print somewhere we used the phrase 'Gemini can make mistakes', so why did you enable turbo mode??"
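
And the configurable Deny list is thinner armor than it sounds. Here's a minimal sketch (my own toy, not Antigravity's actual matching logic) of why deny-listing terminal commands barely slows anything down:

```python
# A minimal sketch, NOT Antigravity's real matcher: why deny lists
# for shell commands are weak. The denied verb can always hide
# inside a command that isn't on the list.

DENY = {"rm", "mkfs", "dd"}  # hypothetical deny-list entries

def allowed(command: str) -> bool:
    # Naive check: block only if the command's first word is denied.
    words = command.split()
    return not words or words[0] not in DENY

print(allowed("rm -rf ~/projects"))            # False: blocked, as designed
print(allowed('bash -c "rm -rf ~/projects"'))  # True: sails right through
print(allowed("find . -delete"))               # True: same effect, new verb
```

Any wrapper (`bash -c`, `xargs`, `find -delete`, ...) hands the model, or whoever injected the prompt, a verb that isn't on the list.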

[–] froztbyte@awful.systems 12 points 2 weeks ago

yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good

but it is very fucking funny to watch them FAFO

[–] lagrangeinterpolator@awful.systems 10 points 2 weeks ago (1 children)

After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
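
To make "heuristic function" concrete: a minimal sketch, where every name is hypothetical and `llm_score` stands in for whatever model call you like. The model only orders the candidates; a deterministic check decides what gets accepted, so the unreliability can't leak into the result:

```python
# A sketch of "LLM as heuristic, algorithm as authority" (toy code,
# not any real system). The scorer may rank candidates however badly
# it likes, because a deterministic verifier has the final say.

def llm_score(candidate: str) -> float:
    # Placeholder scoring; imagine an API call returning a float.
    return float(len(candidate))

def is_valid(candidate: str) -> bool:
    # The part that "actually works": a hard, checkable criterion.
    return candidate.isidentifier()

def pick(candidates: list[str]) -> str | None:
    # Try candidates in the order the heuristic suggests; accept
    # only what the verifier approves.
    for c in sorted(candidates, key=llm_score, reverse=True):
        if is_valid(c):
            return c
    return None

print(pick(["x-y", "123abc", "my_var", "ok"]))  # my_var
```

A bad heuristic only costs you time here; it can never cost you correctness.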

Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it's not worth spending lots of money on a task where you don't need reliability.

The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT; they do not see the millions of dollars' worth of hardware needed to run even a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history, aimed at an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?

The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.

[–] BlueMonday1984@awful.systems 15 points 1 week ago (2 children)
[–] Seminar2250@awful.systems 13 points 1 week ago* (last edited 1 week ago) (2 children)

https://awful.systems/post/5776862/8966942 😭

also this guy is a bit of a doofus, e.g. https://bugs.launchpad.net/calibre/+bug/853934, where he is a dick to someone reporting a bug, and https://bugs.launchpad.net/calibre/+bug/885027, where someone points out that you can execute anything as root because of a security issue, and he argues like a total shithead

You mean that a program designed to let an unprivileged user mount/unmount/eject anything he wants has a security flaw because it allows him to mount/unmount/eject anything he wants? I'm shocked.

Implement a system that allows an application to mount/unmount/eject USB devices connected to the system securely, then make sure that system is universally adopted on every linux install in the universe. Once you've done that, feel free to re-open this ticket.

i would not invite this person to my birthday

[–] rook@awful.systems 15 points 2 weeks ago (5 children)

Reposted from sunday, for those of you who might find it interesting but didn’t see it: here’s an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention, and made me read the rest, even though it isn’t about ai at all.

Few IT projects are displays of rational decision-making from which AI can or should learn.

Which, haha, is a great quote but highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your llm could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.
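
The garbage-in point can be made embarrassingly concrete. A toy sketch (mine, and obviously not anyone's actual training setup): a pure imitator samples from its training distribution, so if "good" never appears in the data, it never appears in the output either.

```python
# Toy illustration: an imitator can only replay the quality
# distribution it was trained on; its ceiling is the data's ceiling.
import random

training_quality = [0.3, 0.4, 0.5, 0.4, 0.35]  # only mediocre examples

def imitator() -> float:
    # "Generate" by sampling what the training data looked like.
    return random.choice(training_quality)

best_ever_generated = max(imitator() for _ in range(100_000))
print(best_ever_generated)  # 0.5: never better than the best of the data
```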

The article continues to talk about how we can’t do IT, and wraps up with

It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined

It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.

https://spectrum.ieee.org/it-management-software-failures

[–] gerikson@awful.systems 14 points 2 weeks ago (3 children)

This looks like it's relevant to our interests

Hayek's Bastards: Race, Gold, IQ, and the Capitalism of the Far Right by Quinn Slobodian

https://press.princeton.edu/books/hardcover/9781890951917/hayeks-bastards

[–] fullsquare@awful.systems 14 points 1 week ago* (last edited 1 week ago) (2 children)

anyone else spent their saturday looking for gas turbine datasheets? no?

anyway, the bad, no good, haphazard power engineering of crusoe

neoclouds, on top of silicon, need a lot of power that they can't get, because they can't get a big enough substation, or maybe the provider denied it, so they decided that homemade is just as fine. in order to turn some kind of fuel (could be methane, or maybe not, who knows) into electricity they need gas turbines, and a couple of weeks back there was a story that crusoe got their first aeroderivative gas turbines from GE https://www.tomshardware.com/tech-industry/data-centers-turn-to-ex-airliner-engines-as-ai-power-crunch-bites; these are old, refurbished, modified jet engines put in a chassis with a generator, with the turbofan removed. in total they booked 29 LM2500-series turbines from GE, plus PE6000s from another company called proenergy,* and probably others (?), for an alleged 4.5GW total. for neoclouds, generators of this type have major advantages:

1. they exist and the backlog isn't horrific: the first ones delivered were contracted in december 2024, so about 10 months, and onsite construction is limited (sometimes less than a month)
2. these things are compact and reasonably powerful, and can be loaded on a trailer in parts and just delivered wherever
3. at the same time they are small enough that piecewise installation is reasonable (34.4MW each, so just from GE 1GW total spread across 29)

and that's about it for advantages. these choices are fucking weird, really. the state of the art in turning gas into electricity is, first, to take as big a gas turbine as practical, which might be 100MW, 350MW, there are even bigger ones. this is because the efficiency of gas turbines increases with size: a big part of the losses comes from gas slipping through the gap between blades and stator/rotor, and the bigger the turbine, the bigger the cross-sectional area occupied by blades (~ r^2), so the gap (~ r) matters less. this effect alone is responsible for efficiency differences of a couple of percent: for GE's 35MW-ish aeroderivative turbine (LM2500) we're looking at 39.8% efficiency, while another GE aeroderivative turbine (LMS100) at 115MW gets 43.9%. our neocloud disruptors stop there, with their just-under-40%-efficient turbines (and probably lower*), even though the exhaust is well over 500C and can be used to boil water, which is what any serious powerplant does in combined cycle. that additional steam turbine gives about a third of the total generated energy, bringing total efficiency to some 60-63%.

so right off the bat, crusoe throws away about a third of the usable energy, or alternatively, for the same amount of power they burn 50-70% more gas, if they even use gas and not, for example, diesel. they specifically didn't order turbines with this extra heat-recovery mechanism: based on the datasheet https://www.gevernova.com/content/dam/gepower-new/global/en_US/downloads/gas-new-site/products/gas-turbines/gev-aero-fact-sheets/GEA35746-GEV-LM2500XPRESS-Product-Factsheet.pdf they would otherwise get over 1.37GW, while the GE press announcement talked about "just under 1GW", which matches only the oldest type of turbine there (guess: cheapest), or maybe some mix with even older ones than what is shown. this is not what a serious power-generating business would do, because for them every fraction of a percent matters. while it might be possible to get heat-recovery steam boilers and steam turbine units there later, that means extra installation time (capex per MW turns out to be similar) and more backlog, and requires more planning and real estate and foresight, and if they had that they wouldn't be there in the first place, would they. even then, efficiencies only get to maybe 55%, because it turns out that the heat exchangers required for the professional stuff are huge and can't be loaded on a trailer, so they have to go with less
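
(quick sanity check on that 50-70% figure: gas burned per unit of electricity scales as 1/efficiency, so the penalty is just the ratio of the efficiencies quoted above. python one-off, with the PE6000 number being my guess from the footnote:)

```python
# extra fuel of simple cycle vs combined cycle = eta_cc / eta_sc - 1
simple_cycle = {"LM2500": 0.398, "PE6000 (guess)": 0.375}
combined_cycle = [0.60, 0.63]

for name, eta_sc in simple_cycle.items():
    for eta_cc in combined_cycle:
        print(f"{name} vs {eta_cc:.0%} CC: {eta_cc/eta_sc - 1:+.0%} more gas")
# ranges from +51% to +68% -- hence "50-70% more gas"
```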

so it sorta gets them power short term, and financially it doesn't look good long term, but maybe they know that and don't care, because they know they won't be there to pay the gas bills; and if these glorified gensets are only used during outages, or otherwise not to their full capacity, then it doesn't matter that much. also, gas turbines need to run hot to run efficiently, but the hottest possible temperature with normal fuels would melt any material we can make blades of, so the solution is to take in double or triple the air actually needed and dilute the hot gases that way. these are also perfect conditions for nitric oxide synthesis, which means smog downwind. now there are SCRs which are supposed to deal with that, but it didn't stop musk from poisoning the people of memphis when he did a very similar thing
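
(for the chemistry-curious, the nitric oxide here is the textbook thermal, aka Zeldovich, mechanism, which only runs fast at flame temperatures; nothing crusoe-specific about it:)

```latex
% thermal (Zeldovich) NO formation, significant above roughly 1800 K
\begin{align*}
\mathrm{O} + \mathrm{N_2} &\rightarrow \mathrm{NO} + \mathrm{N} \quad \text{(rate-limiting)}\\
\mathrm{N} + \mathrm{O_2} &\rightarrow \mathrm{NO} + \mathrm{O}\\
\mathrm{N} + \mathrm{OH} &\rightarrow \mathrm{NO} + \mathrm{H}
\end{align*}
```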

* proenergy takes the same jet engine that GE does and turns it into the PE6000, which is probably mostly the same stuff as the LM6000, except that the GE version is 51MW and the proenergy one 48MW. i don't know whether it's derated or just less efficient, but at the same gas consumption it would be 37.5%

e: proenergy was contracted for 1GW, 21x48MW turbines https://spectrum.ieee.org/ai-data-centers and GE another 1GW, 29x34.4MW https://www.gevernova.com/news/articles/going-big-support-data-center-growth-rising-renewables-crusoe-ordering-flexible-gas which leaves 2.5GW unaccounted for. another big one is siemens but they haven't said anything. then 1.5GW nuclear??? from blue energy, and from 2031 on (lol)

[–] Seminar2250@awful.systems 14 points 2 weeks ago* (last edited 2 weeks ago) (5 children)

something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:

  1. wrong tool for the job
  2. bad tool
  3. are you fucking serious?
  4. environmental impact
  5. ethics of how the data was gathered/curated to generate^[they call this "training" but i try to avoid anthropomorphising chatbots] the model
  6. privacy policy of these companies is a nightmare
  7. seriously what is wrong with you

they continue to do it. the ease of use, together with the valid syntax output by the llm, seems to short-circuit something in the end-user's brain.

anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences


"oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series."

additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?

[–] o7___o7@awful.systems 16 points 2 weeks ago* (last edited 2 weeks ago)

Yes i know the kid in the omelas hole gets tortured each time i use the woe engine to generate an email. Is that bad?

[–] yellowcake@awful.systems 13 points 2 weeks ago (1 children)

Is there any search engine that isn't pushing an "AI mode" of some sort? Some are sneakier about it, or give the option to "opt out", like duckduckgo, but this all feels temporary until it's the only option.

I have found it strange how many people will say "I asked chatgpt" with the same normalcy that "googling" once had.

[–] antifuchs@awful.systems 14 points 2 weeks ago (1 children)
[–] froztbyte@awful.systems 10 points 2 weeks ago

that being a hung banner (rather than wall-mount or so) borders on being a tacit acknowledgement that they know their shit is unpopular and would get vandalised in a fucking second if it were easy (or easier!) to get to

even then, I suspect that banner will not stay unscathed for long

[–] blakestacey@awful.systems 13 points 2 weeks ago* (last edited 2 weeks ago) (6 children)
[–] BlueMonday1984@awful.systems 12 points 2 weeks ago

Dexerto has reported on an unnamed Japanese game studio weeding out promptfondlers (by having applicants draw something in-person during interviews).

Unsurprisingly, the replies have become a promptfondler shooting gallery. Personal "favourite" goes to the guy who casually admits he can't tell art from slop:

[–] Architeuthis@awful.systems 12 points 2 weeks ago (1 children)

/r/SneerClub discusses MIRI financials and how Yud ended up getting paid $600K per year from their cache.

Malo Bourgon, MIRI CEO, makes a cameo in the comments to discuss Ziz's claims about SA payoffs and how he thinks Yud's salary (the equivalent of like 150,000 malaria vaccines) is defensible for reasons that definitely exist, but they live in Canada, you can't see them.

[–] swlabr@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago) (11 children)

Guy does a terrible job explaining literally anything. Why, when trying to explain all the SA-based drama, does he choose to create an analogy where the former employee is heavily implied to have murdered his wife?

S/o to cinnaverses for mixing it up in there.

[–] antifuchs@awful.systems 11 points 2 weeks ago (1 children)

Workers organizing against genai policies in the workplace: http://workersdecide.tech/

Sounds like exactly the thing unions and labor organizing are good for. Glad to see it.

[–] gerikson@awful.systems 11 points 2 weeks ago (8 children)

A lobster wonders why the news that a centi-millionaire amateur jet pilot has decided to offload the cost of developing his pet terminal software onto peons begging for contributions has almost 100 upvotes, and is absolutely savaged for being rude to their betters

https://lobste.rs/s/dxqyh4/ghostty_is_now_non_profit#c_b0yttk

[–] swlabr@awful.systems 12 points 2 weeks ago

bring back rich people rolling their own submarines and getting crushed to death in the bathyal zone

[–] gerikson@awful.systems 11 points 2 weeks ago (4 children)

Apparently we are part of the rising trend of AI denialism

The rise of AI denialism

Author Louis Rosenberg is "an engineer, researcher, inventor, and entrepreneur" according to his PR-stinking Wikipage: https://en.wikipedia.org/wiki/Louis_B._Rosenberg. I am sure he is utterly impartial and fair with regards to AI.

[–] mawhrin@awful.systems 25 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

i hereby propose a new metric for a popular publication, the epstein number (Ē), denoting the number of authors who took flights to epstein's rape island. generally, credible publications should have Ē=0. this one, after a very quick look, has Ē=2, and also hosts sabine hossenfelder.

[–] Soyweiser@awful.systems 11 points 2 weeks ago (2 children)

More grok shit: https://futurism.com/artificial-intelligence/grok-doxxing. In contrast to most other models, it is very good at doxxing people.

Amazing how everything Musk makes is the worst in class (and somehow the Rationalists think he will be their saviour (that is because he is a eugenicist)).

[–] blakestacey@awful.systems 11 points 2 weeks ago

(thinks) groxxing

[–] gerikson@awful.systems 11 points 2 weeks ago

the base use for LLMs is gonna be hypertargeted advertising, malware, political propaganda etc

well the base case for LLMs is that, right now

the privacy nerds won't know what hit them

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago (5 children)

Major RAM/SSD manufacturer Micron just shut down its Crucial brand to sell shovels in the AI gold rush, worsening an already-serious RAM shortage for consumer parts.

Just another way people are paying more for less, thanks to AI.

[–] gerikson@awful.systems 10 points 1 week ago (5 children)

2 links from my feeds with crossover here

Lawyers, Guns and Money: The Data Center Backlash

Techdirt: Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric

Unfortunately, Techdirt's Mike Masnick is a signatory to some bullshit GenAI-collaborationist manifesto called The Resonant Computing Manifesto, along with other usual suspects like Anil Dash. Like so many other technolibertarian manifestos, it naturally declines to say how their wonderful vision would be economically feasible in a world without meaningful brakes on the very tech giants they profess to oppose.

[–] blakestacey@awful.systems 10 points 1 week ago (1 children)

From Lila Byock:

A 4th grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education.

The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.
