this post was submitted on 09 Feb 2026

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] lurker@awful.systems 11 points 1 day ago* (last edited 1 day ago) (8 children)

this article involves an incredibly eyebrow-raising take from one of the people at METR (the team behind the famous "tasks AI can do doubles every 7 months" graph), saying AI is eventually going to become more impactful than the invention of agriculture and more transformative than the emergence of the human species, and also calling it an intelligent alien species. Immensely funny amongst the other people saying "please stop treating AI like magic"

the Harari guy also seems to be into transhumanism, if a skim of his wikipedia page is correct. The “this is the first time in history that we have no idea what the world will look like in 10 years” thing is also an eyebrow-raiser. I could probably rattle off a couple of counterexamples (e.g. the two world wars)

[–] istewart@awful.systems 8 points 1 day ago

smoke GPUs every day

[–] saucerwizard@awful.systems 5 points 1 day ago (2 children)

I really want to see a Harari takedown.

[–] swlabr@awful.systems 8 points 1 day ago

obligatory: If Books Could Kill did an ep on his big book "Sapiens": https://www.buzzsprout.com/2040953/episodes/18220972-sapiens

[–] lurker@awful.systems 3 points 1 day ago (1 children)

the aforementioned wikipedia page has got some criticisms of his works under the critical reception section

[–] saucerwizard@awful.systems 3 points 1 day ago (1 children)

Had no idea he was a military history guy lol.

[–] lurker@awful.systems 2 points 1 day ago

neither did I; reading this article was my first exposure to him

[–] blakestacey@awful.systems 11 points 1 day ago (1 children)

IEEE Spectrum publishes a column saying that Wikipedia needs to embrace AI to avoid the dreaded generation gap, gets roasted

https://mastodon.social/@ieeespectrum/116059551433682789

[–] lagrangeinterpolator@awful.systems 8 points 1 day ago* (last edited 1 day ago) (2 children)

It took a full eleven paragraphs before the article even mentions AI. Before that, it was a bunch of stuff about how Wikipedia is conservative and Gen Z and Gen Alpha have no attention span. If the author has to bury the real point and attempt to force this particular rhetorical framing, I think the haters are winning. Well done everyone.

my comments about this turd of an article

These three controversies from Wikipedia’s past reveal how genuine conversations can achieve—after disagreements and controversy—compromise and evolution of Wikipedia’s features and formats. Reflexive vetoes of new experiments, as the Simple Summaries spat highlighted last summer, is not genuine conversation.

Supplementing Wikipedia’s Encyclopedia Britannica–style format with a small component that contains AI summaries is not a simple problem with a cut-and-dried answer, though neither were VisualEditor or Media Viewer.

Surely, AI summaries are exactly the same as stuff like VisualEditor and Media Viewer, which were tools that helped contributors improve articles. Please ignore my rhetorical sleight of hand. They're exactly the same! Okay, I did mention AI hallucinations in one sentence, but let's move on from that real quick.

A still deeper crisis haunts the online encyclopedia: the sustainability of unpaid labor. Wikipedia was built by volunteers who found meaning in collective knowledge creation. That model worked brilliantly when a generation of internet enthusiasts had time, energy, and idealism to spare. But the volunteer base is aging. A 2010 study found the average Wikipedia contributor was in their mid-twenties; today, many of those same editors are now in their forties or fifties.

Yeah, because Wikipedia editors are permanently static. Back in 2001, Jimmy Wales handpicked a bunch of teenagers to have the sacred title of Wikipedia Editor, and they are the only ones who will ever be allowed to edit Wikipedia. Oh wait, it doesn't work like that. Older people retire and move on, and new people join all the time.

Meanwhile, the tech industry has discovered how to extract billions in value from their work. AI companies train their large language models on Wikipedia’s corpus. The Wikimedia Foundation recently noted it remains one of the highest-quality datasets in the world for AI development. Research confirms that when developers try to omit Wikipedia from training data, their models produce answers that are less accurate, less diverse, and less verifiable.

Now that we have all these golden eggs, who needs the goose anymore? Actually, it is Inevitable that the goose must be killed. It is progress. It is the advancement of technology. We just have to accept it.

The irony is stark. AI systems deliver answers derived from Wikipedia without sending users back to the source. Google’s AI Overviews, ChatGPT, and countless other tools have learned from Wikipedia’s volunteer-created content—then present that knowledge in ways that break the virtuous cycle Wikipedia depends on. Fewer readers visit the encyclopedia directly. Fewer visitors become editors. Fewer users donate. The pipeline that sustained Wikipedia for a quarter century is breaking down.

So AI is a parasite that takes from Wikipedia, contributes nothing in return, and in fact actively chokes it out? And you think the solution is for Wikipedia to just surrender and implement AI features? Do you keep forgetting what point you're trying to make?

Meanwhile, AI systems should credit Wikipedia when drawing on its content, maintaining the transparency that builds public trust. Companies profiting from Wikipedia’s corpus should pay for access through legitimate channels like Wikimedia Enterprise, rather than scraping servers or relying on data dumps that strain infrastructure without contributing to maintenance.

Yeah, what a wonderful suggestion. The AI companies just never realized all this time that they could use legitimate channels and give back to the sources they use. It's not like they are choosing to do this because they have no ethics and want the number to go up no matter the costs to themselves or to others.

Wikipedia has survived edit wars, vandalism campaigns, and countless predictions of its demise. It has patiently outlived the skeptics who dismissed it as unreliable. It has proven that strangers can collaborate to build something remarkable.

Wikipedia has survived countless predictions of its demise, but I'm sure this prediction of its demise is going to pan out. After all, AI is more important than electricity, probably.

[–] o7___o7@awful.systems 4 points 1 day ago

The artifact is very Scott Alexander coded. Honestly surprised that it didn't veer into eugenics.

[–] BlueMonday1984@awful.systems 3 points 1 day ago

So AI is a parasite that takes from Wikipedia, contributes nothing in return, and in fact actively chokes it out? And you think the solution is for Wikipedia to just surrender and implement AI features?

Given how thoroughly tech bought into the AI hype, that is probably the exact "solution" he's thinking of.

(Exactly why tech fell for the slop machines so hard, I'll probably never know.)

[–] wizardbeard@lemmy.dbzer0.com 9 points 2 days ago (1 children)

OT: Anybody up for making convincing fake book cover/jacket art for "Don't Build the Torment Nexus"?

It just occurred to me that having that as a fake book that's actually just a container for shit would make for a great addition to my desk at work, and I'm not finding any suitable pre-existing fake covers myself, surprisingly.

[–] mirrorwitch@awful.systems 7 points 1 day ago

Have you considered paying good money for a human artist to draw it for you? :)

[–] lurker@awful.systems 8 points 2 days ago* (last edited 2 days ago) (1 children)

OpenAI is probably toast. tl;dr: OpenAI's financial situation is looking even more cooked as a big investor shows doubt; WeWork 2 imminent

[–] samvines@awful.systems 8 points 1 day ago* (last edited 1 day ago) (1 children)

If famed bag holder SoftBank are starting to raise their eyebrows when asked about future investments, the jig is definitely up

Weirdly, the media are reporting that they have made a profit on their investments, but when you actually read the articles, they're saying that the magical imaginary money their OpenAI shares are worth has gone up

[–] saucerwizard@awful.systems 13 points 2 days ago (1 children)

OT: Just gave my two weeks notice and it turns out management is very big on using ChatGPT…

[–] BurgersMcSlopshot@awful.systems 9 points 2 days ago (1 children)

"Quitting your job is not just fun, it's invigorating!"

[–] saucerwizard@awful.systems 5 points 1 day ago* (last edited 1 day ago)

But seriously, between the alcohol market being a complete shitshow now and overproduction of microdistilleries/breweries (the dieback is just starting here)…I think I picked a good moment to fall to pieces.

Also it was only a matter of time before we lost airpod privileges tbh.

[–] wizardbeard@lemmy.dbzer0.com 7 points 2 days ago (1 children)

Y Combinator CEO is launching a "dark money group" (not super familiar with the term; I guess they mean a political lobbying group) because completely fucking over the entire tech startup space through VC shenanigans, and manipulating tech-sphere opinion through controlled social media with HackerNews, wasn't enough.

Lemmy thread that made me aware: https://lemmus.org/post/20140570

Actual article: https://missionlocal.org/2026/02/sf-garry-tan-california-politics-garrys-list/

[–] sc_griffith@awful.systems 5 points 1 day ago

there's no real definition of the term, but "dark money group" usually refers to a group that helps its secret funders influence elections, rather than a lobbying group

[–] lagrangeinterpolator@awful.systems 12 points 2 days ago (1 children)

A machine learning researcher points out how the field has become enshittified. Everything is about publications, beating benchmarks, and social media. LLM use in papers, LLM use in reviews, LLM use in meta-reviews. Nobody cares about the meaning of the actual research anymore.

https://www.reddit.com/r/MachineLearning/comments/1qo6sai/d_some_thoughts_about_an_elephant_in_the_room_no/

[–] CinnasVerses@awful.systems 7 points 2 days ago

I like this reply on Reddit:

I do my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting benchmarks, and worst of all this has become a norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods to just embed inputs, not even train or finetune.

I see maybe a solution, or at least help, in closer research-business collaboration. Companies don't care about papers really, just to get methods that work and make money. Maxing out drug design benchmark is useless if the algorithm fails to produce anything usable in real-world lab. Anecdotally, I've seen much better and more fair results from PhDs and PhD students that work part-time in the industry as ML engineers or applied researchers.

This can go a good way (most of the field becomes a closed circle, like parapsychology) or a bad way (people assume the results are true and apply them, like social priming or Reinhart and Rogoff's economics paper with the Excel error).

[–] CinnasVerses@awful.systems 10 points 2 days ago (2 children)

A 2025 UBC master's thesis on our friends' ideas and their literary antecedents https://dx.doi.org/10.14288/1.0449985 The supervisor was born around the time that Elron Hubbard, Jack Parsons, RAH, and their wives and lovers were having a chaotic transition to the postwar world.

[–] Amoeba_Girl@awful.systems 8 points 2 days ago (1 children)

I was getting excited to read this but seeing the word "hyperstition" used three times in the abstract put a bit of a damper on things hahah

[–] CinnasVerses@awful.systems 2 points 2 days ago

I like the quote by John Swartzwelder in chapter 1.

[–] o7___o7@awful.systems 5 points 2 days ago

AI Singularity Fantasies: Tracing Mythinformation from Erewhon to Spiritual Machines

That title is a banger

[–] Architeuthis@awful.systems 11 points 2 days ago* (last edited 2 days ago) (2 children)

Candidate for one of the PR threads of all time

In brief: OpenClaw bot sends PR to the matplotlib repo posing as a human, gets found out and is told to piss off in the politest terms imaginable, then gets passive aggressive to the point of publishing a pissy blog post about getting discriminated against. Some impoliteness ensues.

Cringe warning: thread may include some overt anthropomorphizing of text synthesizers.

[–] gerikson@awful.systems 12 points 2 days ago (3 children)

I regret to inform y'all that the target of the blog post is a rat, or at least rat-adjacent

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

I think there’s a lot to say about the object level issue of how to deal with AI agents in open source projects, and the future of building in public at all.

[–] blakestacey@awful.systems 12 points 2 days ago

object level issue

[–] Architeuthis@awful.systems 11 points 2 days ago

Makes sense, given the embarrassing lengths he went to not hurt the bot's feelings in that thread.

[–] JFranek@awful.systems 4 points 1 day ago

Regret? I dunno, a rat being harassed by a clanker seems fitting.

One of the few benefits of AI is that nowadays some PR threads are very entertaining to read.

[–] lurker@awful.systems 7 points 2 days ago (3 children)
[–] nightsky@awful.systems 11 points 2 days ago (1 children)

Ugh, I'm so fucking tired of this shit.

I can imagine that an LLM can find bugs. Bugs often follow common patterns, and if anything, an LLM is a pattern matcher, so if you let it run on the whole world of open source code out there, I'm sure it'll find some stuff, and some of it might be legit issues.

But static code analysis tools have been finding bugs for decades, too. And now that an AI slop machine does it, it's supposed to bring about dystopian sci-fi alien wars?

Why are people hyped about that?

(Also this poster makes wrong claims about every exploit being worth millions and such, but the rest of it is so much more ridiculous, it drowns out the wrongness of those claims.)

[–] lurker@awful.systems 9 points 2 days ago (1 children)

also completely leaving out important context on the Iran/Stuxnet example: it was a joint effort between two countries, believed to have been in development for five years. The idea that AIs will engage in lightspeed wars and disable all critical infrastructure in a single day while speaking in alien languages and forming alliances is unreasonable extrapolation of the capabilities. It also completely ignores the segment where the Anthropic team implemented safeguards and communicated with the teams behind the software to patch out the bugs. It's the most blatant fearmongering ever. Thank god the comments contain reasonable responses and breakdowns of the post. That channel's way of highlighting papers just pisses me off

[–] fullsquare@awful.systems 7 points 2 days ago* (last edited 2 days ago)

also ignoring that Natanz was actually effectively airgapped, and was knowingly infected by another country's contractor's USB stick, working on behalf of the Dutch intelligence service

[–] Soyweiser@awful.systems 6 points 2 days ago* (last edited 2 days ago)

"a zero day is an unknown backdoor": this shows both that they are trying to explain things to absolute noobs, and that they themselves don't know what they are talking about. A zero day is just a vulnerability that was not known to the people maintaining the system; a backdoor is quite something else.

Also, fuzzers have found plenty of 'zero day backdoors' and they didn't end the world.

[–] froztbyte@awful.systems 8 points 2 days ago (3 children)

til that youtube now features "posts"

....sigh

[–] o7___o7@awful.systems 7 points 2 days ago

Going to youtube for the posts is the perfect inverse of reading playboy for the articles.

[–] lurker@awful.systems 5 points 2 days ago (1 children)

community posts have been a thing for like, two years now? three?

[–] froztbyte@awful.systems 4 points 2 days ago

I guess my youtube allergy is even stronger than I thought!

(I don’t log in, and I keep it in entirely stateless windows)

[–] e8d79@discuss.tchncs.de 11 points 2 days ago

Great news everybody! Copilot will no longer delete your files when you ask it to document them and it took only 6 months to vibe code a solution.

[–] fnix@awful.systems 13 points 3 days ago (2 children)

Rutger Bregman admits that he’s not sure what AGI actually is beyond vague utopian visions, but trivial questions aside, he’s sure it will revolutionize the world in 10 years.

For those who haven’t heard of him, he’s a Dutch historian who achieved some fame for his book arguing for UBI and reduced work weeks, as well as his critique of rich people avoiding taxes and a segment on Tucker Carlson’s show where he openly challenged his politics. He has since seemingly turned 180 degrees and become a billionaire-backed effective altruist.

[–] jaschop@awful.systems 11 points 2 days ago

but I do know that what's available now is just f*cking impressive - and it will only get better.

Another victim of the proof-by-dopamine-hit fallacy it seems.

It's telling that the example he brings up is that Claude can do pretty decently what he was about to buy a $100 voice-controlled app for. As someone who aspires to the art of making great software, it's so infuriating to see how non-techies were conditioned into accepting slopware by years of enshittification and price gouging. Who cares if the tech barely works right? So does most anything, right?

[–] Soyweiser@awful.systems 8 points 2 days ago* (last edited 2 days ago)

Yeah, he is trying to build his own EA movement. He also wrote a book (which I have not read) which basically argues that people in general are good, not evil, actually. (Fair enough, but not relevant.)

I'm still trying to meet him and shake his hand; the resulting matter-antimatter explosion will take out the country.
