this post was submitted on 07 Sep 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] fnix@awful.systems 8 points 6 days ago (2 children)

Why progressives should care about falling birth rates

2 weeks old, but regardless. The Financial Times’ John Burn-Murdoch wrote a very EA-adjacent article basically making the case for “progressive” eugenics. The studies he cites all derive from a behavior-genetic model of intergenerational value transmission, i.e. conservatism is “in the genes” & progressivism is literally getting bred out of the gene pool.

A masterclass in baiting liberals. Take note, NYT & Atlantic!

[–] BlueMonday1984@awful.systems 6 points 6 days ago (3 children)

OpenAI's trying to make an AI-generated animated film, and claiming their Magical Slop Extruders^tm^ can do in nine months what allegedly would take three years, with only a $30 mil budget and the writers of Paddington in Peru for assistance.

Allegedly, they're also planning to show it off at the Cannes Film Festival, of all places. My guess is this was Sam Altman's decision - he's already fawned over AI-extruded garbage before, and it's clear he has zero taste in art whatsoever.

[–] Soyweiser@awful.systems 5 points 6 days ago (2 children)

$30 mil budget

From what I heard, that's twice the budget of a Studio Ghibli movie. Nausicaä of the Valley of the Wind cost $1 million to make in 1984. (No idea what that would be adjusted for inflation.)

[–] BlueMonday1984@awful.systems 6 points 6 days ago (1 children)

Nausicaä of the Valley of the Wind cost $1 million to make in 1984. (No idea what that would be adjusted for inflation.)

I checked a few random inflation calculators, and it comes out to roughly $3.1 million.
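That figure holds up against a straight CPI adjustment. A minimal sketch, assuming approximate CPI-U index values (roughly 103.9 for 1984 and 320 for mid-2025; both are assumptions, not figures from the thread):

```python
# Rough inflation check: scale the 1984 budget by the ratio of CPI indices.
CPI_1984 = 103.9   # assumed average CPI-U for 1984 (1982-84 = 100 base)
CPI_2025 = 320.0   # assumed CPI-U for mid-2025

budget_1984 = 1_000_000  # Nausicaä's reported production budget
adjusted = budget_1984 * (CPI_2025 / CPI_1984)
print(f"${adjusted / 1e6:.1f} million")  # roughly $3.1 million
```

Different calculators pick slightly different index months, which is why estimates cluster around, rather than exactly at, $3.1 million.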

[–] Soyweiser@awful.systems 3 points 6 days ago
[–] ShakingMyHead@awful.systems 4 points 6 days ago (1 children)

OpenAI’s tools also lower the cost of entry, allowing more people to make creative content, he said.

So, even working under the assumption that this somehow works, they still needed two animation studios, professional writers, and $30 million to get this film off the ground.

[–] o7___o7@awful.systems 2 points 6 days ago* (last edited 6 days ago)

I mean, he doesn't seem to taste food either, so it tracks.

[–] wizardbeard@lemmy.dbzer0.com 17 points 1 week ago (4 children)

Some poor souls who arguably have their hearts in the right place definitely don't have their heads screwed on right, and are staging hunger strikes outside Google's AI offices and Anthropic's offices.

https://programming.dev/post/37056928 contains links to a few posts on X by the folks doing it.

Imagine being so worried about AGI that you thought it was worth starving yourself over.

Now imagine feeling that strongly about it and not stopping to ask why none of the ideologues who originally sounded the alarm bells about it have tried anything even remotely as drastic.

On top of all that, imagine being this worried about what Anthropic and Google are doing in the research of AI, hopefully being aware of Google's military contracts, and somehow thinking they give a singular shit if you kill yourself over this.

And... where are the people outside fucking OpenAI? Bets on this being some corporate shadowplay shit?

[–] YourNetworkIsHaunted@awful.systems 10 points 1 week ago (4 children)

I mean, I try not to go full conspiratorial everything-is-a-false-flag, but the fact that the biggest AI company that has been explicitly trying to create AGI isn't getting the business here is incredibly suspect. On the other hand, though, it feels like anything that publicly leans into the fears of evil computer God would be a self-own when they're in the middle of trying to completely ditch the "for the good of humanity, not just immediate profits" part of their organization.

[–] EponymousBosh@awful.systems 15 points 1 week ago (4 children)
[–] BlueMonday1984@awful.systems 16 points 1 week ago (1 children)

I genuinely thought therapists were gonna avoid the psychosis-inducing suicide machine after seeing it cause psychosis and suicide. Clearly, I was being too optimistic.

[–] zogwarg@awful.systems 10 points 1 week ago
The future is now, and it is awful. 
Would any still wonder why, I grow so ever mournful.
[–] Architeuthis@awful.systems 15 points 1 week ago* (last edited 1 week ago) (2 children)

Apparently the hacker who publicized a copy of the no fly list was leaked an article containing Yarvin's home address, which she promptly posted on Bluesky. Won't link because I don't think we've had the doxxing discussion, but it's easily findable now.

I'm mostly posting this because the article featured this photo:

[–] froztbyte@awful.systems 10 points 1 week ago (1 children)

I was curious so I dug up the post and then checked property prices for the neighbourhood

$2.6–4.8M

being thiel's idea guy seems to pay pretty well

[–] Seminar2250@awful.systems 14 points 1 week ago (3 children)

university where the professor physically threatened me and plagiarized my work called to ask if i was willing to teach a notoriously hard computer science class (that i have taught before to stellar evals as a phd student^[evals are bullshit for measuring how well students actually learn anything, but are great for measuring the stupid shit business idiots love, like whether students will keep paying tuition. also they can be used to explain the pitfalls of using likert scales carelessly, as business idiots do.]). but they had to tell me that i was their last choice because they couldn't find a full professor to teach it (since i didn't finish my phd there because of said abusive professor). on top of that, they offered me a measly $6,000 usd for the entire semester with no benefits, and i would have to pay $500 for parking.

should i just be done with academia? enrollment deadlines for the spring are approaching and i'm wondering if i should just find a "regular job", rather than finishing a PhD elsewhere, especially given the direction higher ed is going in the us.

[–] V0ldek@awful.systems 13 points 1 week ago (1 children)

Every time I learn one single thing about how academia works in the USA I want to commit unspeakable acts of violence

[–] mawhrin@awful.systems 13 points 1 week ago (6 children)

simon willison, the self-styled reasonable ai researcher, finds it hilarious and a good use of money to throw $14,000 at claude to create a useless programming language that doesn't work.

good man simon willison!

[–] gerikson@awful.systems 15 points 1 week ago (1 children)

I mean, it's still just funny money, seeing as the creator works for some company that resells tokens from Claude, but very few people are stepping back to note the drastically reduced expectations of LLMs. A year ago, it would have been plausible to claim that a future LLM could design a language from scratch. Now we have a rancid mess of slop, it's an "art project", and the fact that it's even superficially internally coherent is treated as a great success.

Willison should just have let this go, because it's a ludicrous example of GenAI, but he just can't help himself defending this crap.

[–] blakestacey@awful.systems 15 points 1 week ago

Good sneer from user andrewrk:

People are always saying things like, “surprisingly good” to describe LLM output, but that’s like when a 5 year old stops scribbling on the walls and draws a “surprisingly good” picture of the house, family, and dog standing outside on a sunny day on some construction paper. That’s great, kiddo, let’s put your programming language right here on the fridge.

[–] istewart@awful.systems 14 points 1 week ago (1 children)

Top-tier from Willison himself:

The learning isn’t in studying the finished product, it’s in watching how it gets there.

Mate, if that's true, my years of Gentoo experience watching compiler commands fly past in the terminal means I'm a senior operating system architect.

[–] froztbyte@awful.systems 9 points 1 week ago (3 children)

which naturally leads us to: having to fix a portage overlay ~= “compiler engineer”

wonder what simonw’s total spend (direct and indirect) in this shit has been to date. maybe sunk cost fallacy is an unstated/un(der?)accounted part in his True Believer thing?

[–] nightsky@awful.systems 10 points 1 week ago (6 children)

Sigh. Love how he claims it's worth it for "learning"...

We already have a thing for learning; it's called "books". If you want to learn compiler basics, $14,000 could buy you hundreds of copies of the dragon book.
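The "hundreds of copies" claim is trivial arithmetic. A quick sketch, assuming a hypothetical list price of around $70 per copy (the price is my assumption, not from the thread):

```python
# Back-of-the-envelope: compiler textbooks per LLM art project.
spend = 14_000        # the reported $14,000 Claude bill
price_per_copy = 70   # assumed per-copy price for the dragon book (hypothetical)
copies = spend // price_per_copy
print(copies)         # 200 copies
```

Even doubling the assumed price still lands you a three-digit stack of textbooks.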

[–] CinnasVerses@awful.systems 12 points 1 week ago* (last edited 1 week ago) (18 children)

When it started in ’06, this blog was near the center of the origin of a “rationalist” movement, wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders. - Robin Hanson, 2025

I hear that even though Yud started blogging on his site, and even though George Mason University-type economics is trendy with EA and LessWrong, Hanson never identified himself with EA or LessWrong as movements. So this is like Gabriele D'Annunzio insisting he is a nationalist, not a fascist, rather than Nicholas Taleb denouncing phrenology.

[–] BlueMonday1984@awful.systems 10 points 1 week ago (2 children)

New Loser Lanyard (ironically called the Friend) just dropped: a "chatbot-enabled" necklace which invades everyone's privacy and offers Internet-reply "commentary" in response. As if to underline its sheer shittiness, WIRED has reported that even other promptfondlers are repulsed by it, in a scathing review that accidentally sneers at its techbro shithead inventor:

If you're looking for some quick schadenfreude, here's the quotes on Bluesky.

[–] mlen@awful.systems 10 points 1 week ago

Signal is finally close to releasing a cross platform backup system: https://signal.org/blog/introducing-secure-backups/

[–] BlueMonday1984@awful.systems 10 points 1 week ago* (last edited 1 week ago) (2 children)

GoToSocial recently put up a code of conduct that openly barred AI-"assisted" changes and fascist/capitalist involvement, prompting some concern trolling on the red site.

Got a promptfondler trying to paint basic human decency as ridiculous, and a Concerned Individual^tm^ who's pissed at GoToSocial refusing to become a Nazi bar.

[–] BlueMonday1984@awful.systems 9 points 1 week ago

Found two separate AI-related links for today.

First, AI slop corpo Apiiro put out a study stating the obvious (that AI is a cybersecurity nightmare), and tried selling its slop agents as the solution. Apiiro was using their own slop-bots to do the study, too, so I'm taking all this with a major grain of salt.

Second, I came across an AI-themed Darwin Awards spinoff cataloguing various comical fuck-ups caused through the slop-bots.

[–] Architeuthis@awful.systems 9 points 1 week ago (2 children)

Some quality wordsmithing found in the wild:

transcript: @MosesSternstein (quote-tweeted): AI-Capex is the everything cycle, now.

Just under 50% of GDP growth is attributable to AI Capex

@bigblackjacobin: Almost certainly the greatest misallocation of capital you or I will ever see. There's no justification for this however you cut it but the beatings will continue until a stillborn god is born.
