this post was submitted on 14 Sep 2025

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] Soyweiser@awful.systems 2 points 40 minutes ago (1 children)

Was reading some science fiction from the '90s and the AI/AGI said 'I'm an analog computer, just like you, I'm actually really bad at math.' And I wonder how much damage these ideas did (the other one being that there are types of computers that can do more/different things. Not sure if analog Turing machines provide any new capabilities that digital TMs don't, but I leave that question for the smarter people in theoretical computer science).

The idea that a smart computer will be worse at math makes sense from a storytelling perspective (a smart AI that can also do math super well is gonna be hard to write), but it now leads people who've read enough science fiction to see a machine that can't count or run Doom and go 'this is what they predicted!'.

Not a sneer, just a random thought.

[–] pikesley@mastodon.me.uk 1 points 21 minutes ago
[–] froztbyte@awful.systems 4 points 5 hours ago (1 children)

hot on the heels of months of “agentic! it can do things for you!” LLM hype, they have to make special APIs for the chatbots, I guess because otherwise they make too many whoopsies?

[–] Architeuthis@awful.systems 2 points 2 hours ago* (last edited 2 hours ago) (1 children)

In collaboration with cryptocurrency outfits Coinbase, MetaMask, and the Ethereum foundation, Google also produced an extension that would integrate the cryptocurrency-oriented x402 protocol, allowing for AI-driven purchasing from crypto wallets.

what could possibly go wrong

In either case, the goal is to maintain an auditable trail that can be reexamined in cases of fraud.

Which is a thing that you only need to worry about if you use these types of agents.

Which in any case you can't, because

The protocol is built for a future in which AI agents routinely shop for products on customers’ behalf and engage in complex real-time interactions with retailers’ AI agents.

[–] Soyweiser@awful.systems 2 points 38 minutes ago

what could possibly go wrong

Unrelated to this specific topic, but more cryptocurrency fails: this reminds me of hardware wallets, which show information about the transaction on the wallet itself. Which seems smart, so you can make sure the data from your perhaps-compromised machine is correct. Only, the problem with these wallets was that they didn't understand smart contracts. So if you interacted with a smart contract you could still get hacked this way, because the information on the hardware wallet didn't make sense (there were fixes for this, but I think most people only really went in to fix it after the North Koreans made off with billions in fake coins).
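To make the blind-signing problem concrete, here's a rough sketch (Python, made-up values, not any vendor's actual firmware) of what a contract-unaware hardware wallet actually has to show you when it's asked to sign an ERC-20 transfer:

```python
# Sketch only: why "verify it on the device screen" breaks down for contract calls.
# All values below are dummies.

# ABI-encoded calldata for ERC-20 transfer(address,uint256):
#   4-byte selector + 32-byte recipient + 32-byte amount
selector = bytes.fromhex("a9059cbb")                          # keccak("transfer(address,uint256)")[:4]
recipient = bytes.fromhex("deadbeef" * 5).rjust(32, b"\x00")  # attacker-chosen address, left-padded
amount = (1_000_000 * 10**18).to_bytes(32, "big")             # 1,000,000 tokens

calldata = selector + recipient + amount

# A wallet that can't decode the ABI can only display something like:
print("To:    0x<token contract>")          # the token contract, not who receives the tokens
print("Value: 0 ETH")                       # technically true: no ETH moves
print("Data:  " + calldata.hex()[:32] + "…")  # opaque hex the user can't sanity-check
```

The screen can truthfully say "0 ETH" while the opaque data field moves a million tokens to someone else, which is why the on-device check didn't save anyone until firmware started decoding common contract calls.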

[–] PMMeYourJerkyRecipes@awful.systems 10 points 7 hours ago (1 children)

Getting pretty far afield here, but goddamn Matt Yglesias's new magazine sucks:

The case for affirmative action for conservatives

"If we cave in and give the right exactly what they want on this issue, they'll finally be nice to us! Sure, you might think based on the last 50,000 times we've tried this strategy that they'll just move the goalposts and demand further concessions, but then they'll totally look like hypocrites and we'll win the moral victory, which is what actually matters!"

[–] jonhendry@iosdev.space 6 points 6 hours ago

@PMMeYourJerkyRecipes @BlueMonday1984

The guy from the Federalist *doesn’t* want more ideological diversity in academia, he wants *less*. But he’ll settle for more as an interim goal until he can purge the wrong-thinkers.

[–] o7___o7@awful.systems 5 points 10 hours ago (2 children)

We need a word for when they make up a guy who doesn't exist and then get mad at him.

[–] FredFig@awful.systems 3 points 1 hour ago

I guess keeping in theme, "vibe replying"

[–] ShakingMyHead@awful.systems 5 points 8 hours ago (2 children)

Pretty sure that's a strawman.

I mean, I think the relevant difference is that rather than trying to argue against a weak opponent they're trying to validate their feelings of victimization, superiority, and/or outrage by imagining an appropriate foil.

It's a straw man that exists to be effectively venerated rather than torn down.

[–] Evinceo@awful.systems 6 points 6 hours ago

Since this is the solo version, strawmasturbating

[–] BlueMonday1984@awful.systems 7 points 14 hours ago* (last edited 13 hours ago)

New edition of AI Killed My Job, giving a deep dive into how genAI has hurt artists. I'd like to bring particular attention to Melissa's story, which is roughly halfway through, specifically the ending:

There's a part of me that will never forgive the tech industry for what they've taken from me and what they've chosen to do with it. In the early days as the dawning horror set in, I cried about this almost every day. I wondered if I should quit making art. I contemplated suicide. I did nothing to these people, but every day I have to see them gleefully cheer online about the anticipated death of my chosen profession. I had no idea we artists were so hated—I still don't know why. What did my silly little cat drawings do to earn so much contempt? That part is probably one of the hardest consequences of AI to come to terms with. It didn't just try to take my job (or succeed in making my job worse) it exposed a whole lot of people who hate me and everything I am for reasons I can't fathom. They want to exploit me and see me eradicated at the same time.

[–] corbin@awful.systems 9 points 15 hours ago (1 children)

Wolfram has a blog post about lambda calculus. As usual, there are no citations and the bibliography is for the wrong blog post and missing many important foundational papers. There are no new results in this blog post (and IMO barely anything interesting) and it's mostly accurate, so it's okay to share the pretty pictures with friends as long as the reader keeps in mind that the author is writing to glorify themselves and make drawings rather than to communicate the essential facts or conduct peer review. I will award partial credit for citing John Tromp's effort in defining these diagrams, although Wolfram ignores that Tromp and an entire community of online enthusiasts have been studying them for decades. But yeah, it's a Mathematica ad.

In which I am pedantic about computer science (but also where I'm putting most of my sneers too, including a punchline)

For example, Wolfram's wrong that every closed lambda term corresponds to a combinator; it's a reasonable assumption that turns out to not make sense upon closer inspection. It's okay, because I know that he was just quoting the same 1992 paper by Fokker that I cited when writing the esolangs page for closed lambda terms, which has the same incorrect claim verbatim as its first sentence. Also, credit to Wolfram for listing Fokker in the bibliography; this is one of the foundational papers that we'd expect to see. With that in mind, here are some differences between my article and his.
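Quick aside before the list of differences, for anyone rusty on the jargon: "closed" just means "no free variables". A throwaway sketch in Python (my own toy de Bruijn representation, nothing from either article) to pin that down:

```python
from dataclasses import dataclass

@dataclass
class Var:        # de Bruijn index: 0 = innermost enclosing lambda
    index: int

@dataclass
class Lam:        # abstraction
    body: object

@dataclass
class App:        # application
    fn: object
    arg: object

def is_closed(term, depth=0):
    """True if every variable in `term` is bound by an enclosing lambda."""
    if isinstance(term, Var):
        return term.index < depth
    if isinstance(term, Lam):
        return is_closed(term.body, depth + 1)
    return is_closed(term.fn, depth) and is_closed(term.arg, depth)

# λx.x and λx.λy.x are closed; a term with a dangling variable is not.
assert is_closed(Lam(Var(0)))
assert is_closed(Lam(Lam(Var(1))))
assert not is_closed(Lam(Var(1)))
```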

The name "Fokker" appears over a dozen times in my article and nowhere in Wolfram's article. Also, I love being citogenic and my article is the origin of the phrase "Fokker size". I think that this is a big miss on his part because he can't envision a future where somebody says something like "The Fokker metric space" or "enriched over Fokker size". I've already written "some closed lambda terms with small Fokker size" in the public domain and it's only a matter of time until Zipf's law wears it down to "some small Fokkers".

Also, while "Tromp" only appears once in my article, it appears next to somebody known only as "mtve" when they collaborated to produce what Wolfram calls a "size-7 lambda" known as Alpha. I love little results like these which aren't formally published and only exist on community wikis. Would have been pretty fascinating if Alpha were complete, wouldn't it Steve!? Would have merited a mention of progress in the community amongst small lambda terms, huh Steve!?

I also checked the BB Gauge for Binary Lambda Calculus (BLC), since it's one of the topics I already wrote up, and found that Wolfram's completely omitted Felgenhauer from the picture too, with that name in neither the text nor bibliography. Felgenhauer's made about as many constructions in BLC as Tromp; Felgenhauer 2014 constructs that Goodstein sequence, for example. Also, Wolfram didn't write that sequence, they sourced it from a living paper not in the bibliography, written by…Felgenhauer! So it's yet another case of Wolfram just handily choosing to omit a name from a decade-old result in the hopes that somebody will prefer his new presentation to the old one.
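Another aside for the curious: BLC's encoding is small enough to fit in a comment, assuming I'm remembering Tromp's convention right: abstraction is 00, application is 01, and a variable with de Bruijn index i is i+1 ones followed by a zero. A quick sketch reusing the toy Var/Lam/App terms from the earlier aside, with sizes in bits:

```python
def blc(term):
    """Binary Lambda Calculus encoding of a de Bruijn term, as a bit string."""
    if isinstance(term, Var):
        return "1" * (term.index + 1) + "0"       # index i -> i+1 ones, then a zero
    if isinstance(term, Lam):
        return "00" + blc(term.body)              # abstraction
    return "01" + blc(term.fn) + blc(term.arg)    # application

identity = Lam(Var(0))                        # λx.x
print(blc(identity), len(blc(identity)))      # -> 0010 4
```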

Finally, what's the point of all this? I think Wolfram writes these posts to advertise Mathematica (which is actually called Wolfram Mathematica and uses a programming language called Wolfram BuT DiD YoU KnOw). He also promotes his attempt at rewriting all of physics to have his logo upon it, and this blog post is a gateway to that project in the sense that Wolfram genuinely believes that staring at these chaotic geometries will reveal the equations of divine nature. Meanwhile I wrote my article in order to ~~win an IRC argument against~~ make a reasonable presentation of an interesting phenomenon in computer science directly to Felgenhauer & Tromp, and while they don't fully agree with me, we together can't disagree with what's presented in the article. That's peer review, right?

[–] antifuchs@awful.systems 7 points 8 hours ago

Having followed PLT stuff online for more than a quarter century now, I can state with confidence that basically everyone writing about lambda calculus online is doing it to glorify themselves.

[–] HotGarbage@awful.systems 6 points 18 hours ago (2 children)
[–] yellowcake@awful.systems 4 points 18 hours ago

I’m convinced the proliferation of AI art comes from a generation of digital inhalants.

[–] BlueMonday1984@awful.systems 3 points 17 hours ago (1 children)

Nvidia and California College of the Arts Enter Into a Partnership

Oh, I'm sure the artists enrolling at the CCA are gonna be so happy to hear they've been betrayed

The collaboration with CCA is described in today’s announcement as aiming to “prepare a new generation of creatives to thrive at the intersection of art, design and emerging technologies.”

Hot take: There is no "intersection" between these three, because the "emerging technologies" in question are a techno-fascist ideology designed to destroy art for profit.

[–] TinyTimmyTokyo@awful.systems 5 points 16 hours ago (1 children)

In fairness, not everything nVidia does is generative AI. I don't know if this particular initiative has anything to do with GenAI, but a lot of digital artists depend on their graphics cards' capabilities to create art that is very much human-derived.

[–] BlueMonday1984@awful.systems 3 points 15 hours ago

Given how gen-AI has utterly consumed the tech industry over these past two years, I see very little reason to give the benefit of the doubt here.

Focusing on NVidia, they've made billions selling shovels in the AI gold rush (inflating their stock prices in the process), and have put billions more into money-burning AI startups to keep the bubble going. They have a vested interest in forcing AI onto everyone and everything they can.

[–] YourNetworkIsHaunted@awful.systems 13 points 1 day ago (1 children)

Sneer inspired by a thread on the preferred Tumblr aggregator subreddit.

Rationalists found out that human behavior didn't match their ideological model; then, rather than abandon their model or change their ideology, they decided to replace humanity with AIs designed to behave the way they think humans should, just as soon as they can figure out a way to do that without the AIs destroying all life in the universe.

[–] scruiser@awful.systems 7 points 11 hours ago (1 children)

That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably mention "quirky Harry Potter fanfiction" with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.

[–] dgerard@awful.systems 2 points 4 hours ago

I have no hope. The guy who introduced me to LessWrong included what I later realised was a race science pitch. Yudkowsky was pushing this shit in 2007. This sucker just realised a coupla decades late.
