this post was submitted on 31 Aug 2025
18 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] swlabr@awful.systems 21 points 1 week ago (2 children)

Enemy of awful systems Malcolm Gladwell is a full-throated transphobe

[–] nightsky@awful.systems 15 points 1 week ago (1 children)

Why is it always transphobia with these kinds of people... is it because they feel racism is too risky? So they need a different outlet for hating on people based on what they believe is "science"?

[–] swlabr@awful.systems 13 points 1 week ago

The answer to all of those is yes. To synthesise the bigger idea, as the saying goes: scratch a liberal, a fascist bleeds. Gladwell caters to a huge audience that is centrist, leaning somewhat liberal (though he flirts constantly with race science and straight-up racism, see his writings on Korean Air). As liberals move towards fascism, so must he. Of course, it’s the most marginalised who are going to be targeted first.

[–] Soyweiser@awful.systems 11 points 1 week ago (4 children)

that “90%” of the audience was “on [Tucker’s] side” but had been unwilling to admit it.

Why are transphobes always this full of shit? Polls always disagree with this, and it can't be 'people are afraid to speak out', as there seem to be very few consequences to just being a lowkey transphobe. Hell, even raging transphobes who stalk people, assault kids, and call for violence in what can only be called a life-destroying obsession with trans people get defended by the people in power. There is very little reason for people to lie on polls over this.

Also dislike the focus on 'trans women', as these whole shitty arguments forget trans men exist. (This seems to be a general problem, sadly: trans men, same as intersex people or detransitioners, tend to be brought up only as gotcha arguments, and not so much to discuss the different problems they also face. Obv, I'm also doing that here, so I'm not immune.)

[–] dgerard@awful.systems 19 points 1 week ago

I just found out the name of Scott Alexander's psychiatry practice (Lorien Psychiatry) is a Lord of the Rings reference, so my guess as to direct Thiel money just went way up

[–] Architeuthis@awful.systems 17 points 1 week ago* (last edited 1 week ago) (2 children)

The common clay of the new west:

transcript:

ChatGPT has become worthless

[Business & Professional]

I’m a paid member and asked it to help me research a topic and write a guide and it said it needed days to complete it. That’s a first. Usually it could do this task on the spot.

It missed the first deadline and missed 5 more. 3 weeks went by and it couldn’t get the task done. Went to Claude and it did it in 10 minutes. No idea what is going on with ChatGpt but I cancelled the pay plan.

Anyone else having this kind of issue?

[–] froztbyte@awful.systems 19 points 1 week ago (1 children)

I am extremely, extremely sickos.png about all these weird little fucks finding out that all this nonsense is heavily subsidised by VCs

[–] BlueMonday1984@awful.systems 12 points 1 week ago

Same here. The world's been forced to deal with these promptfucks ruining everything they touch for literal years at this point; some degree of schadenfreude at their expense was sorely fucking needed.

[–] gerikson@awful.systems 15 points 1 week ago (1 children)

Promptfondlers on lobste.rs are unhappy about the tag "vibecoding", used to denote development using GenAI. I wouldn't recommend reading the thread; I just want to observe that if slop coding had actually taken the coding world by storm, I doubt there would be much pearl clutching about how slop-slingers are treated on the site. We're talking about people willing to pay Scam Altman money monthly, after all.

https://lobste.rs/s/gkzmfy/let_s_rename_vibecoding_tag_llms

[–] scruiser@awful.systems 15 points 2 weeks ago (2 children)

Lesswronger notices that all of the rationalists' attempts at making an "aligned" AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate

Notably, the author doesn't realize Capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.

[–] swlabr@awful.systems 24 points 2 weeks ago (1 children)
>50 min read  
>“why company has perverse incentives”
>no mention of capitalism

rationalism.mpeg

[–] scruiser@awful.systems 11 points 1 week ago (1 children)

Every time I see a rationalist bring up the term "Moloch" I get a little angrier at Scott Alexander.

[–] swlabr@awful.systems 10 points 1 week ago* (last edited 1 week ago)

“Moloch”, huh? What are we living in, some kind of demon-haunted world?

Others were alarmed and advocated internally against scaling large language models. But these were not AGI safety researchers, but critical AI researchers, like Dr. Timnit Gebru.

Here we see rationalists coming dangerously close to self-awareness and to recognizing their whole concept of "AI safety" as marketing copy.

[–] yetanotherduc@awful.systems 15 points 2 weeks ago (1 children)

As a CS student, I wonder why we and artists are always the ones who get attacked the most whenever some new "insert tech stuff" comes out. And everyone's like: HOLY SHIT, PROGRAMMERS AND ARTISTS ARE DEAD, without realizing that most of these things are way too crappy to actually be... good enough to replace us?

[–] shapeofquanta@lemmy.vg 10 points 2 weeks ago (2 children)

My guess would be that most people don’t understand what you all actually do, so gen AI output looks to them like their impression of the work you do. Just look at the game studios replacing concept artists with Midjourney, not grasping what concept art is even for and screwing up everyone’s workflow as a result.

I’m neither a programmer nor an artist, so I can sorta understand how people get fooled. Show me a snippet of nonsense code or image and I’ll nod along if you say it’s good. But as a writer (even if only a hobbyist) I am able to see how godawful gen AI writing is, whereas some non-writers won’t, and so I extrapolate from that: since it’s not good at the thing I have domain expertise in, it probably isn’t good at the things I don’t understand.

[–] swlabr@awful.systems 10 points 2 weeks ago (1 children)

Show me a snippet of nonsense code or image and I’ll nod along if you say it’s good.

Smirk I’m in.

[–] mirrorwitch@awful.systems 13 points 2 weeks ago

So I learned about the rise of pro-Clippy sentiment in the wake of ChatGPT, and that led me on a little ramble about the ELIZA effect vs. the exercise of empathy: https://awful.systems/post/5495333

[–] Soyweiser@awful.systems 13 points 1 week ago (1 children)

Not totally in our wheelhouse, but it seems like the Abundance movement has a bit of a right-wing speaker problem: https://bsky.app/profile/therealbrent.bsky.social/post/3lxzn3lxqo22b

[–] swlabr@awful.systems 12 points 1 week ago (10 children)

Oh, Abundance, the repackaging of Reaganomics by liberals to court conservatives, is having a conference and has booked right-wing speakers? Nobody could have predicted this.

[–] o7___o7@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago)

DragonCon drops the ban hammer on a slop slinger. There was much rejoicing.

https://old.reddit.com/r/dragoncon/comments/1n5r2eu/a_warm_heartfelt_goodbye_to_a10_the_ai_artist_who/

Btw, the vibes were absolutely marvelous this year.

Edit: a shrine was built to shame the perpetrator

https://old.reddit.com/r/dragoncon/comments/1n60s10/to_shame_that_ai_stand_in_artist_alley_people/

[–] gerikson@awful.systems 12 points 1 week ago
[–] froztbyte@awful.systems 12 points 1 week ago (3 children)

after that Flock shit that got announced recently, looks like garry tan is currently doing his damnedest to ensure people know he's absolutely full of it

thread starts here, features bangers like this

You're thinking Chinese surveillance

US-based surveillance helps victims and prevents more victims

"nooooo, we're the good kind of boot to have on your face! we're the boot from the same country as you!" says the boot in the rising fascist upswing

[–] BigMuffN69@awful.systems 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Great piece on previous hype waves by P. Ball

https://aeon.co/essays/no-suffering-no-death-no-limits-the-nanobots-pipe-dream

It’s sad, my “thoroughly researched” “paper” greygoo-2027 just doesn’t seem to have that viral x-factor that lands me exclusive interviews w/ the Times 🫠

[–] scruiser@awful.systems 15 points 2 weeks ago (4 children)

Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, multiple decades after Drexler was both thoroughly debunked and a slight inspiration for real science, I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

[–] bitofhope@awful.systems 12 points 2 weeks ago (5 children)

Creator of NaCl publishes something even saltier.

"Am I being detained?" I scream as IETF politely asks me to stop throwing a tantrum over the concept of having moderation policy.

[–] BlueMonday1984@awful.systems 11 points 2 weeks ago (1 children)
[–] BurgersMcSlopshot@awful.systems 11 points 2 weeks ago (1 children)

Where the fuck has that guy been for 20 years? I've seen that happen many times with junior programmers during my 20 years of experience.

[–] froztbyte@awful.systems 10 points 2 weeks ago (1 children)

also from a number of devs who went borderline malicious-compliance in "adopting TDD/testing" but didn't really grok the assignment

[–] JFranek@awful.systems 11 points 2 weeks ago (3 children)

Shamelessly posting a link to my skeet thread (skeet trail?) on my experience with a (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain...

https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q

[–] DonPiano@feddit.org 10 points 2 weeks ago

Kind of generic: I am a researcher and recently started a third-party-funded project where I won't teach for a while. I kinda dread what garbage fire I'll return to in a couple of years when I teach again, and how much AI slop will be entrenched on both the teachers' and the students' sides.

[–] lagrangeinterpolator@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

From the ChatGPT subreddit: Gemini offers to pay me for a developer to fix its mess

Who exactly pays for it? Google? Or does Google send one of their interns to fix the code? Maybe Gemini does have its own bank account. Wow, I really haven't been keeping up with these advances in agentic AI.

[–] fullsquare@awful.systems 15 points 2 weeks ago

it's almost as funny as that one time a chatbot told a vibecoder to learn to code
