Enemy of awful systems Malcolm Gladwell is a full-throated transphobe
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Why is it always transphobia with these kinds of people... is it because they feel racism is too risky? So they need a different outlet for hating on people based on what they believe is "science"?
the answer to all those is yes. To synthesise the bigger idea, as the saying goes: scratch a liberal, a fascist bleeds. Gladwell caters to a huge audience that is centrist, leaning somewhat liberal (though he flirts constantly with race science and straight up racism, see his writings on Korean Air). As liberals move towards fascism, so must he. Of course, it’s the most marginalised that are going to be targeted first.
that “90%” of the audience was “on [Tucker’s] side” but had been unwilling to admit it.
Why are transphobes always this full of shit? Polls always disagree with this, and it can't be 'people are afraid to speak out', as there seem to be very few consequences to just being a lowkey transphobe. Hell, even raging transphobes who stalk people, assault kids, and call for violence in what can only be called a life-destroying obsession with trans people get defended by the people in power. There is very little reason for people to lie on polls over this.
Also dislike the focus on 'trans women', as these whole shitty arguments forget trans men exist. (This seems to be a general problem, sadly: trans men, same as intersex people or detransitioners, tend to be brought up only as gotcha arguments, and not much to discuss the different problems they also face. Obv, I'm also doing that here, so I'm not immune.)
I just found out the name of Scott Alexander's psychiatry practice (Lorien Psychiatry) is a Lord of the Rings reference, so my guess as to direct Thiel money just went way up
The common clay of the new west:
transcript
ChatGPT has become worthless
[Business & Professional]
I’m a paid member and asked it to help me research a topic and write a guide and it said it needed days to complete it. That’s a first. Usually it could do this task on the spot.
It missed the first deadline and then missed 5 more. 3 weeks went by and it couldn't get the task done. Went to Claude and it did it in 10 minutes. No idea what is going on with ChatGPT but I cancelled the pay plan.
Anyone else having this kind of issue?
I am extremely, extremely sickos.png for having all these weird little fucks find out that all this nonsense is heavily subsidised by VCs
Same here. The world's been forced to deal with these promptfucks ruining everything they touch for literal years at this point, some degree of schadenfreude at their expense was sorely fucking needed.
Promptfondlers on lobste.rs are unhappy about the tag "vibecoding", used to denote development using GenAI. I'd not recommend reading the thread; I just want to observe that if slop coding had actually taken the coding world by storm, I doubt there would be much pearl clutching about how slop-slingers are treated on the site. We're talking about people willing to pay Scam Altman money monthly, after all.
Lesswronger notices all of the rationalist's attempts at making an "aligned" AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate
Notably, the author doesn't realize capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.
>50 min read
>”why company has perverse incentives”
>no mention of capitalism
rationalism.mpeg
Every time I see a rationalist bring up the term "Moloch" I get a little angrier at Scott Alexander.
“Moloch”, huh? What are we living in, some kind of demon-haunted world?
Others were alarmed and advocated internally against scaling large language models. But these were not AGI safety researchers, but critical AI researchers, like Dr. Timnit Gebru.
Here we see rationalists approaching dangerously close to self-awareness and recognizing their whole concept of "AI safety" as marketing copy.
As a CS student, I wonder why we and artists are always the ones who are attacked the most whenever some new "insert tech stuff" comes out. And everyone's like: HOLY SHIT, PROGRAMMERS AND ARTISTS ARE DEAD, without realizing that most of these things are way too crappy to actually be... good enough to replace us?
My guess would be because most people don’t understand what you all actually do so gen AI output looks to them like their impression of the work you do. Just look at the game studios replacing concept artists with Midjourney, not grasping what concept art even is for and screwing up everyone’s workflow as a result.
I’m neither a programmer nor an artist, so I can sorta understand how people get fooled. Show me a snippet of nonsense code or an image and I’ll nod along if you say it’s good. But then, as a writer (even if only a hobbyist), I am able to see how godawful gen-AI writing is whereas some non-writers won’t, and so I extrapolate from that: since it’s not good at the thing I have domain expertise in, it probably isn’t good at the things I don’t understand.
Show me a snippet of nonsense code or image and I’ll nod along if you say it’s good.
Smirk I’m in.
So I learned about the rise of pro-Clippy sentiment in the wake of ChatGPT and that led me on a little ramble about the ELIZA effect vs. the exercise of empathy https://awful.systems/post/5495333
Not totally in our wheelhouse, but it seems like the Abundance movement has a bit of a right-wing speaker problem: https://bsky.app/profile/therealbrent.bsky.social/post/3lxzn3lxqo22b
Oh Abundance, the repackaging of Reaganomics by liberals to court conservatives, is having a conference and has booked right wing speakers? Nobody could have predicted this
DragonCon drops the ban hammer on a slop slinger. There was much rejoicing.
Btw, the vibes were absolutely marvelous this year.
Edit: a shrine was built to shame the perpetrator
https://old.reddit.com/r/dragoncon/comments/1n60s10/to_shame_that_ai_stand_in_artist_alley_people/
after that Flock shit that got announced recently, looks like garry tan is currently doing his damnedest to ensure people know he's absolutely full of it
thread starts here, features bangers like this
You're thinking Chinese surveillance
US-based surveillance helps victims and prevents more victims
"nooooo, we're the good kind of boot to have on your face! we're the boot from the same country as you!" says the boot in the rising fascist upswing
Great piece on previous hype waves by P. Ball
https://aeon.co/essays/no-suffering-no-death-no-limits-the-nanobots-pipe-dream
It’s sad, my “thoroughly researched” “paper” greygoo-2027 just doesn’t seem to have that viral x-factor that lands me exclusive interviews w/ the Times 🫠
Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, even decades after Drexler was thoroughly debunked (while slightly contributing to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.
Creator of NaCl publishes something even saltier.
"Am I being detained?" I scream as IETF politely asks me to stop throwing a tantrum over the concept of having moderation policy.
Where the fuck has that guy been for 20 years? I've seen that happen many times with junior programmers over my 20 years of experience.
also from a number of devs who went borderline malicious compliance in "adopting tdd/testing" but didn't really grok the assignment
Shamelessly posting link to my skeet thread (skeet trail?) on my experience with an (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain...
https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q
Kind of generic: I am a researcher and recently started a third party funded project where I won't teach for a while. I kinda dread what garbage fire I'll return to in a couple of years when I teach again, how much AI slop will be established on the sides of teachers and students.
From the ChatGPT subreddit: Gemini offers to pay me for a developer to fix its mess
Who exactly pays for it? Google? Or does Google send one of their interns to fix the code? Maybe Gemini does have its own bank account. Wow, I really haven't been keeping up with these advances in agentic AI.
it's almost as funny as when one time chatbot told vibecoder to learn to code