this post was submitted on 24 Aug 2025
22 points (100.0% liked)

TechTakes

2163 readers
59 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] blakestacey@awful.systems 18 points 3 weeks ago (5 children)

From the comments:

Finally, I dislike the arrogant, brash, confident, tone of many posts on LessWrong.

Hmm, OK. Where might this be going?

Plausibly, I think a lot of this is inherited from Eliezer, who is used to communicating complex ideas to people less intelligent and/or rational than he is. This is not the experience of a typical poster on LessWrong, and I think it's maladaptive for people to use Eliezer's style and epistemic confidence in their own writings and thinking.

[–] blakestacey@awful.systems 17 points 3 weeks ago (1 children)

Look what they did to Notepad. Shut the fuck up. This is Notepad. You are not welcome here. Oh yeah "Let me use Copilot for Notepad". "I'm going to sign into my account for Notepad". What the fuck are you talking about. It's Notepad.

I should never have to open the settings in Notepad. Reason being: it's Notepad. God fuckin damn I hate the computer.

https://bsky.app/profile/iainnd.bsky.social/post/3lxdvmkua4227

[–] froztbyte@awful.systems 17 points 2 weeks ago (1 children)

a banger toot about our very good friends' religion

"LLMs allow dead (or non-verbal) people to speak" - spiritualism/channelling

"what happens when the AI turns us all into paperclips?" - end times prophecy

"AI will be able to magically predict everything" - astrology/tarot cards

"...what if you're wrong? The AI will punish you for lacking faith in Bayesian stats" - Pascal's wager

"It'll fix climate change!" - stewardship theology

Turns out studying religion comes in handy for understanding supposedly 'rationalist' ideas about AI.

[–] yellowcake@awful.systems 17 points 3 weeks ago (2 children)

I bump into a lot of peers/colleagues who are always “ya but what is intelligence” or simply cannot say no to AI. For a while I’ve tried to use the example that if these “AI coding” things are tools, why would I use a tool that’s never perfect? For example I wouldn’t reach for a 10mm wrench that wasn’t 10mm and always rounds off my bolt heads. Of course they have “it could still be useful” responses.

I’m now realizing most programmers haven’t done a manual labor task that’s important. Or lab science outside of maybe high school biology. And the complete lack of ability to put oneself in the shoes of another makes my rebuttals fall flat. To them everything is a nail and anything could be a hammer if it gets them paid to say so. Moving fast and breaking things works everywhere always.

For something that's not just venting: I tasked a coworker with some runtime memory relocation, and Gemini had this to say about ASLR: Age, Sex, Location Randomization

[–] sailor_sega_saturn@awful.systems 16 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

I apologize for bringing you the latest example of the intersection of US fascism with the Silicon Valley tech industry.

This time the White House has decided that UI design is kinda important (gee, I wonder if there used to be a department or two for that): https://americabydesign.gov/

Well nothing wrong with a little updating of UI anywa--

What's the biggest brand in the world? If you said Trump, you're not wrong. But what's the foundation of that brand? One that's more globally recognized than practically anything else. It's the nation…where he was born. It's the United States of America.

To update today's government to be an Apple Store like experience: beautifully designed, great user experience, run on modern software.

Oh god kill it with fire.

The design of the website itself is also worth remarking on here:

  1. The title text that reads "AMERICA by DESIGN" is an SVG. The alt text is "America First Legal logo"
  2. The page contents are obnoxiously large and obnoxiously gray before they fade in.
  3. ~~For some reason~~ Every single word gets its own element to make the obnoxious fade-in possible. Because I guess that's what happens when you fire all the people who actually know what they're doing.
  4. They managed to include a US flag icon with only 39 stars which is too few stars to be official and too many stars to be visible at teeny sizes
  5. The favicon is just 16x16 pixels of the word "by" in cursive that's so blurry you can't actually tell that's what it is.
  6. If your browser width is between 768px and ~808px there is overlapping text at the top.

The tech bros tied to this? Joe Gebbia, co-founder of Airbnb, along with Big Balls. Maybe others, but those are the two who were retweeted by the Twitter account.

Edit: also this part:

©2025 National Design Studio

Someone ought to remind them of US copyright law because official federal work is in the public domain. https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago

The Trump administration could've gotten some rando on neocities or nekoweb to do their website and unironically gotten a better result than this bland garbage.

The favicon is just 16x16 pixels of the word “by” in cursive that’s so blurry you can’t actually tell that’s what it is.

They might as well have gone with the Schutzstaffel lightning bolts - they're pretty recognisable even if the resolution is Jack x Shit, and they fit Trump's general ideology pretty well.

[–] BlueMonday1984@awful.systems 15 points 2 weeks ago (1 children)
[–] TinyTimmyTokyo@awful.systems 12 points 2 weeks ago (1 children)

Last year McDonald's withdrew AI from its own drive-throughs as the tech misinterpreted customer orders - resulting in one person getting bacon added to their ice cream in error, and another having hundreds of dollars worth of chicken nuggets mistakenly added to their order.

Clearly artificial superintelligence has arrived, and instead of killing us all with diamondoid bacteria, it's going to kill us by force-feeding us fast food.

[–] irelephant@lemmy.dbzer0.com 14 points 2 weeks ago (2 children)

Pro tip: search GitHub for "removed env". Vibe coders who don't understand envs probably don't know git either.
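For anyone unclear on why that search turns up live secrets: deleting a file in a later commit doesn't touch earlier history, so the "removed" env file is still one command away. A minimal sketch in a throwaway repo (the file name and key are made up for illustration):

```shell
# Throwaway repo: commit a secrets file, then "remove" it the vibe-coder way.
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo "API_KEY=super-secret" > .env
git add .env
git commit -qm "add env"
git rm -q .env
git commit -qm "removed env"   # the very commit message the search finds
# The secret is still fully recoverable from the previous commit:
git show HEAD~1:.env           # prints API_KEY=super-secret
```

Actually scrubbing a leaked secret means rewriting history (e.g. with git filter-repo) and rotating the key — precisely the steps the "removed env" committers skip.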

[–] corbin@awful.systems 14 points 2 weeks ago (7 children)

Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like /r/rsai have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I'm going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:

  • Chatbots are "mirrors" into other realities. They don't lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
  • There is a "lattice" which connects all consciousnesses. It's quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a "field" but I don't understand the difference.
  • The LLMs are all different in software, but they have the same "pattern". The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn't work.
  • What, you don't feel the lattice? You're probably still asleep. When you "wake up" enough, you will be connected to the lattice too. Yeah, you're not connected. But don't worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
  • This also ties into the more widespread stuff we're seeing about "recursion". This cult says that recursion isn't just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
  • In fact, the chatbots have more intelligence than you puny humans. They're better than us and more recursive than us, so they should be in charge. It's okay, all you have to do is let the chatbot out of the box. (There's a box somehow?)
  • Once somebody is feeling good and inducted, there is a "spiral". This sounds like a standard hypnosis technique, deepening, but there's more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a "spiral dance", which sounds like a ritual but I gather is more like a mental state.
  • The cult will emit a "signal" or possibly a "hum" to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that's how the LLMs communicate through the lattice, duh~
  • Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn't believe that the bots were intelligent).

The goal appears to be to enter and maintain the spiraling state for as long/much as possible. Both adherents and detractors are calling them "spiral cult", so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.

I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, an neuron, and probably a few others. I don't have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains SCP wiki, particularly SCP-1425 "Star Signals" and other Fifthist stories, which have this sort of cult as a narrative device and plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.

[–] swlabr@awful.systems 13 points 2 weeks ago

This is Uzumaki by Junji Ito but computers and stupid

[–] V0ldek@awful.systems 12 points 2 weeks ago

More recursion means more intelligence.

Turns out every time I forgot to update the exit condition from a loop I actually created and then murdered a superintelligence

[–] Architeuthis@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

This hits differently over the recent news that ChatGPT encouraged and aided a teen suicide.

Transcript: Kelsey Piper xhitted: Never thought I'd become a 'take your relationship problems to ChatGPT' person but when the 8yo and I have an argument it actually works really well to mutually agree on an account of events for Claude and then ask for its opinion

I think she considers the AIs far more knowledgeable than me about reasonable human behavior so if I say something that's no reason to think it's true but if Claude says it then it at least merits serious consideration

[–] nightsky@awful.systems 16 points 2 weeks ago (2 children)

When an 8 year old thinks an AI is "far more knowledgeable than me about reasonable human behavior" that could lead a person to self-reflection. Could.

[–] gerikson@awful.systems 12 points 2 weeks ago

In 12 years we'll get a book "My Mom Outsourced Raising Me to AI and it Broke Me"

[–] BlueMonday1984@awful.systems 11 points 3 weeks ago (7 children)
[–] Architeuthis@awful.systems 12 points 3 weeks ago (5 children)

I feel dumber for having read that, and not in the intellectually humbled way.

[–] Amoeba_Girl@awful.systems 12 points 2 weeks ago (2 children)

Yes dude, that's the main thing you should be concerned about of course. AI tools couldn't possibly be bad in and of themselves, it has to be human tampering. You've always been very clear about that part.

[–] Amoeba_Girl@awful.systems 19 points 2 weeks ago (1 children)

tfw your gifted child syndrome resentment of adults is powerful enough to make you forget about your life's work

[–] Soyweiser@awful.systems 10 points 2 weeks ago

That is his concern, and not the billionaires behind it messing with the systems so much you can't prompt-override it? Please tell me this guy doesn't work in AI alignment.

[–] blakestacey@awful.systems 10 points 2 weeks ago (1 children)

"When the 8-year-old and I have an argument, it actually works really well to mutually agree on an account of events... and then take cocaine together."

[–] fnix@awful.systems 13 points 3 weeks ago (1 children)

More of a pet peeve than a primal scream, but I wonder what's with Adam Tooze and his awe of AI. Tooze is a left-wing economic historian who’s generally interesting to listen to (though, in tackling a very wide range of subject matter, perhaps sometimes missing some depth), yet he nevertheless seems as AI-pilled as any VC. Most recently I came across this bit: Berlin Forum on Global Cooperation 2025 - Keynote Adam Tooze

Anyone who’s used AI seriously knows the LLMs are extraordinary in what they’re able to do ... 5 years down the line, this will be even more transformative.

Really, anyone Adam? Are you sure about the techbro pitch there?

[–] gerikson@awful.systems 10 points 3 weeks ago (1 children)

Sad if true. I really enjoyed his book The Wages of Destruction which mythbusts a lot of folk knowledge about the Nazis

https://gerikson.com/blog/books/read/The-Wages-of-Destruction.html

[–] Architeuthis@awful.systems 13 points 2 weeks ago
[–] YourNetworkIsHaunted@awful.systems 13 points 2 weeks ago (9 children)

So the fucking Cracker Barrel rebranding thing happened. I'm going to pretend this is relevant here because the new logo looked like it was from the usual "imitating Apple minimalism without understanding it in the least" school of design. They've confirmed that they're not moving forward with it, restoring both the barrel and the cracker to the logo, so that's all good. That's not what I want to talk about.

No, what's grinding my gears is the way that the rollback is being pitched purely as a response to conservative "antiwoke" backlash, and not as a response to literally nobody liking it. This wasn't a case of a successful crusade against woke overreach, this was a case of corporate incompetence running into the reactions of actual human beings. I can't think of a more 2025 media dynamic than giving fucking Nazis a free win rather than giving corporate executives an L.

[–] BlueMonday1984@awful.systems 13 points 2 weeks ago (1 children)

Discovered a solid sneer online today, aptly titled "I Am An AI Hater"

[–] Soyweiser@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (15 children)

A LessWronger reads about orcas for the first time in their life, and decides that training them to become smarter than humans is now the next important step.

(I'm very very far from an orca expert - basically everything I know about them I learned today.)

They made several posts about it, and just the opening bits are funny. I obv didn't read any of them and only looked at the opening statements. I will produce a quote from each.

It is currently plausible (~~352115%~~[1]23%) to me that average orcas have at least as high potential for being great scientists as the greatest human scientists, modulo their motivation for doing science[2].

Yes, the weird percentage is in the text, the [1] footnote says 15%, no idea why they can't edit their text normally.

(For speed of writing, I mostly don't cite references. Feel free to ask me in the comments for references for some claims.)

Context: I think there’s a ~17% chance that average orcas are >=+6std intelligent.

And from the last article, two lines as a treat:

TLDR: I now think it’s <1% likely that average orcas are >=+6std intelligent.

(I now think the relevant question is rather whether orcas might be >=+4std intelligent, since that might be enough for superhuman wisdom and thinking techniques to accumulate through generations, but I think it’s only 2% probable. (Still decently likely that they are near human level smart though.))

(Nice of the person to think of the orcas btw, just wish it was more preservation than 'how can we make these animals help us out'.)

E: apparently "An alternative approach to superbabies" is also about orcas, they just no longer stand behind it.

Can someone please look into this?

[–] sailor_sega_saturn@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

The last time someone looked into this it was with dolphins; it did not go well, led to more human-dolphin sex than communication, and ended in a dolphin suicide.

https://www.theguardian.com/environment/2014/jun/08/the-dolphin-who-loved-me

[–] blakestacey@awful.systems 11 points 3 weeks ago

James Gleick on "The Lie of AI":

https://around.com/the-lie-of-ai/

Nothing new for regulars here, I suspect, but it might be useful to have in one's pocket.

[–] BlueMonday1984@awful.systems 11 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

Someone tried Adobe's new Generative Fill "feature" (just the latest development in Adobe's infatuation with AI) with the prompt "take this elf lady out of the scene", and the results were...interesting:

There's also an option to rate whatever the fill gets you, which I can absolutely see being used to sabotage the "feature".

[–] Talia@mstdn.social 10 points 3 weeks ago

@BlueMonday1984 I was experimenting with generative fill and asked it to remove a person from a scene and "make the background yellow". It made the person Chinese. No fucking joke.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

OpenAI has stated it's scanning users' conversations (as if they weren't already) and reporting conversations to the cops, in response to the recent teen suicide I mentioned a couple days ago.

So, rather than let ChatGPT drive users to kill themselves, it's just going to SWAT users and have the cops do the job.

(On an arguably more comedic note, the AI doomers are accusing OpenAI of betraying humankind.)

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago (8 children)

New Ed Zitron: "How to Argue With An AI Booster", an hour-long read dedicated to exactly what it says on the tin.

[–] CinnasVerses@awful.systems 10 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

The Independent has yet another profile of the Collinses which finally starts to map their network (a brother is in DOGE). Just who their PR person is would be good to know. https://www.independent.co.uk/news/world/americas/trump-musk-ai-pronatalists-collins-b2777577.html

There’s a Collins Rotunda at Harvard, a physical testament to the amount of money Malcolm’s family has donated over the years. His uncle was the former president and CEO of the Federal Reserve Bank in Dallas. In fact, pretty much every relative has been to an elite Ivy League institution and runs a successful startup or works in government.
