I dislike that the conversation feels like an echo chamber. I'm not saying that it is, just that it has some traits of one.
Commenters who offer nuance about how they see AI being used positively get heavily downvoted, which discourages further engagement.
Commenters who contribute name-calling or ad hominem attacks get wildly upvoted.
Since I follow other places outside the Fediverse, I agree that the disapproval of genAI on Lemmy is monolithic and repetitive. Mastodon also has a lot of AI criticism, but perhaps it is more sophisticated, backed up by articles and the like. In other places, there is active research and adoption. For instance, a cybersecurity firm showed that hypnotic suggestion is a very effective jailbreaking tactic against language models. https://www.securityweek.com/red-teams-breach-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise/ Try explaining this to an ML user. AI-enhanced code editors are big right now, and I have met lots of tech people who are virtually inseparable from their chatbots.
But I rarely use it myself, except in specific situations where search engines are a dead end, where I need to provide more context, and so on. This is a recent post I think provides a more informed view: https://berthub.eu/articles/posts/an-ai-premortem/ . More broadly, every single time I read something about AI on Lemmy, I feel I am witnessing the birth of the anti-android rhetoric depicted in Detroit: Become Human or even Blade Runner. It seems to me like a form of bigotry, and it was one of the things that convinced me that the userbase of Lemmy is not exactly healthy. Especially ML. There must be a few of them with multiple sock puppet accounts, or they are all just parroting the same points. Ironic how they are the biggest fans of a (poorly understood) stochastic parrot theory, when they are the same people who have been persuaded that Signal is not a "really private" messenger. There are a couple of topics where you can see how brain-dead these people are, and AI is one of them.
You can't be bigoted against technology. AI is not human, or alive.
You can for sure be prejudiced against technology. You can engage in bad-faith condemnation of any subject, from a dogmatic belief set, in ways that are not meaningfully distinguishable from the practice of bigotry.
Starting from a conclusion and working backwards might be at the core of every human vice. A lot of people are working backwards from the idea that 'AI bad.' This is hard to miss in grand philosophical declarations about art, as if there's only one definition or only one motivation. It's more subtle, and therefore more dangerous, when people start shuffling cards after the word 'because.'
I think you might be describing your own bigotry toward people who ask reasonable questions about new tools and technology, like:
What is this tool useful for? Why is it free? What is the end goal? What does it cost? Can it cause harm or death?
The majority of people in the States do not find it useful, are aware they are beta-testing/training an unfinished product, have not been given an end goal, have noticed electricity and water bills rising, and have experienced or read about cases of AI assisting humans in causing harm or committing suicide.
So no, I don't think people are bigoted against AI. The majority of people on here defending AI do so because they personally find it useful and don't care about the other stuff. You are free to be selfish if you want to, but the rest of society is also free to shun you and your opinions.
'People have legitimate criticism' doesn't change anything when they also have illegitimate criticism.
Compare GMO foods. Monsanto was a hideous corporation. The loudest condemnation of the underlying technology was still factually and morally wrong. People sneering 'is this ham processed?!' are not engaged in results-oriented consideration of complex and ambiguous research. They learned some no-no words and they're gonna posture about how smart they are.
Some people are overtly prejudiced against AI. Any pushback on the scope or relevance of their absolute condemnation sees them pivot to some unrelated thing they half-remember that barely stands up to scrutiny.
It's a Gish gallop. It's the same tired pattern of behavior used to demonize anything mundane. It read the library and it doesn't magically say only good things and it buys electricity, so if you don't perform the two minutes hate with us, you're a big meanie who must be cast out from society.
Fuck's sake.
You are the only one pivoting to another subject: Monsanto.
You haven't responded to a single point I made, so I have literally nothing to respond to.
Your reading comprehension is terrible.
Are you serious right now? What?
I know how it sounds. I am half serious, though. If androids do exist in the future, people denying them rights will use this exact set of arguments. The stubbornness over a small set of ill-understood premises also resembles transphobia quite a lot. So yes, from a certain perspective, the belief structure of Lemmy's anti-AI sentiment does resemble some forms of bigotry.
The difference here that makes this comparison tenuous and potentially hurtful is that victims of bigotry are victims of structurally enforced power imbalances, while AI itself IS a structurally enforced power imbalance.
Theoretically you are right in your point, but in practice you sound like an asshole.
I don't disagree. But since you decided to cut the discussion short by calling me an asshole, I don't feel particularly obliged to spell out how this also holds true. I never said anything to the effect you seem to be projecting here. In fact, I enjoy the notion that "AI is fascism". But at the same time, I think that those parroting the statistically most likely response for an ML user have the exact brain structure I see daily in bigots.
Congratulations on curtailing a possibly interesting discussion because my idea was too shocking for your synapses.
Then don't make casual comparisons to extremely serious topics like bigotry without thinking it through first.
Wow, you keep acting like you have the high ground from the pitiful position you are in. Are you chastising me for allegedly making light of bigotry? This is ridiculous. I know bigotry firsthand, and I wouldn't think the same of you based on your attitude on this topic. Should I have you in front of me right now, I would kick you all the way down a cliff, because you are a sad little bastard. Now, if you want me to clarify things, for anyone following this who is not at your level of bad faith and self-righteousness:
The pattern of thinking resembles that of bigots. Specifically transphobic bigots.
Did I call anti-AI sentiment bigotry? No. I said that in a HYPOTHETICAL FUTURE where sentient artificial beings are around, these arguments would be the exact belief set that transphobes hold now.
Does this make light of here-and-now bigotry? No.
Does the rest of my rhetoric amount to anything less than subverting oppression, be it class, race, or gender? Also no.
So I don't know what the fuck you are trying to accomplish here, you little troll, but if you knew all the ways I could dox you and fuck you up in real life, you just wouldn't, so shut the fuck up right now, asshole. Is that clear to you, mf?
Are you threatening to dox me simply because I said you sounded like an asshole?
sounds like your hunch was right
The echo chamber part is what gets me. I've gotten downvoted and had people argue that I must be pro-AI because I disagreed on details of how AI works, on the difference between AI and LLMs, or on exactly how we should address the issues it's causing.
I think most of our current generative AI isn't fit for purpose for most of what people are saying it can do.
I think it's unfortunate that generative AI has entirely co-opted the term AI, which is a much broader field.
I think labeling LLMs as plagiarism machines and trying to stop them under current copyright law is destined to fail, because there isn't enough difference between what's clearly acceptable and what people are unhappy about. We need a new, deliberately thought-out way of addressing "you can download my stuff and mush it about with a computer if that's what you need to perceive it as a human; if you're mushing it about to analyze it and make a copycat, then you can't have it." The function of copyright is to promote innovation, and while generative AI isn't violating our current rules for copyright, it's clearly working contrary to the intent of our current system.