this post was submitted on 21 Feb 2026
157 points (98.2% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
submitted 1 month ago* (last edited 1 month ago) by JustJack23@slrpnk.net to c/fuck_ai@lemmy.world
 

Not a ragebait post.

I started thinking about why I hate AI, and it's mostly:

  • It is pushed down my throat very hard relative to what it actually does;
  • The unauthorized use of content on the internet;
  • The worsening of the environmental crisis;
  • The content it generates is shit.

I am wondering: do you have other arguments against it?

[–] thedeadwalking4242@lemmy.world 4 points 1 month ago

I don't hate AI, and that's what bothers me most about all this, I think: LLMs aren't AI. The use of "AI" as a catch-all term for ML has bothered me for a long time now.

LLMs have some form of machine intelligence and pattern matching. However, the majority of their output is just a composition of their training data. They aren't intelligent, and any "intelligence" they possess is just ripped from other places. Real people generate the value while other people give the LLM the credit.

They are a tool, nothing more, and definitely can't replace any but the most mind-numbing jobs.

Not only that, but it's been hyped up by the most annoying and shitty people. It's destroying the economy, not because it's replacing jobs but because it's overinflating the value of a select group of companies. Everyone is scrambling to adopt a technology that doesn't deliver on its promises, which means worse quality in everything; and what's worse, these companies willingly manipulate the public to hide its shortfalls. Decent "thinking" LLM models are incredibly expensive to run and generate very little value. It's all subsidized. When the real price hits the fan, we will all be fucked.

I don't hate AI. Hell I don't even particularly hate LLMs. I hate the hype, I hate LLM bros, and I hate the market.

[–] Denjin@feddit.uk 4 points 1 month ago

Dramatically accelerating the erosion of knowledge and the ability to seek information and develop critical thinking skills.

People no longer look for different sources of information (a trend that started with social media but has now been expanded and accelerated by LLMs) but take the first thing they see as gospel truth.

Businesses and bad actors are realising this and are flooding "scraped spaces" with false, misleading, or flattering information, which is rote-copied by ChatGPT et al. and given a veneer of credibility because it's labelled as "intelligent".

[–] Crackhappy@lemmy.world 4 points 1 month ago

Simply because it is taking jobs, taking money from hardworking people, and handing even more to the ultra-wealthy.

[–] webghost0101@sopuli.xyz 4 points 1 month ago

Because the future potential it has in transforming the world for the better is absolutely astonishing.

But our execution of it is the problem: the overhyped, barely usable projects; the instant enshittification of everything by a capitalism empowered by it; the blind masses glorifying these experiments as an all-knowing, always-just entity because it feeds their ego.

[–] HaraldvonBlauzahn@feddit.org 3 points 1 month ago

It adds to a culture of bullshit which tries to take control over what you think, perceive, and believe. It is even more of the bullshit that flourishes in big corporations, and it is a gesture of dominance over your mind. It is not only antithetical to free thinking, the use of your own intelligence, and science as a search for understanding, but also antithetical to enlightenment, which is one foundation of our modern democratic societies and a prerequisite for science powerful enough to navigate our dangerous world.

[–] 4grams@awful.systems 2 points 1 month ago* (last edited 1 month ago)

I don’t hate AI, I kinda like it and I do think it will redefine computing in the future.

I hate the people behind it, and the people pushing it. I hate the hype, I hate the pressure, I hate how dangerous it is without guardrails, I hate the stupidity of people using it and getting addicted to it because it's an ego-stroking machine. I hate the entire industry around it.

But I really enjoy using and learning about it in my own sandbox. I use a number of local LLMs successfully for research and learning. But I don't trust them at all; I think of them as an egotistical know-it-all who has no problem lying to make themselves feel smart. There's tons of useful info to get from them, but you have to understand what you are dealing with.

I’m very curious what the future looks like. It really depends on us, though. Critical thinking and being observant are the new critical skills for success in the future. Unfortunately, neither is particularly common these days, so I have a feeling I will continue to hate for a while now…

[–] bhamlin@lemmy.world 1 points 1 month ago

I don't really hate AI; it's an interesting (and rarely useful) tool. What I hate is the drive to push it into every part of our lives. As it stands, it isn't suited for the uses they're pushing, and we are currently a long way off from training a model in a way that could be. Add to that the drive to push advertising via AI, and most of what's out there is now entirely suspect. All that to say, I think the issue is capitalism more so than AI.

[–] ndupont@lemmy.blahaj.zone 1 points 1 month ago

Mostly your points 1, 3, and 4. I'm less offended by the second one.

[–] ZDL@lazysoci.al 1 points 1 month ago

LLMbeciles are dangerously incompetent tools that unfortunately "hack" a weakness in human perception: we are hard-wired to equate eloquence and confidence with intellect (the so-called fluency heuristic). LLMbeciles are very fluent, eloquent, and confident, and we are very vulnerable to that combination. As a result, outside our areas of expertise we have a tendency to trust LLMbecile output despite the fact that it is literally 100% bullshit (in the Frankfurt sense) hallucination. It just happens that, thanks to the statistics of the human language stolen to build the model, these hallucinations match reality often enough to fool non-experts. And that's the danger: they're "right" (which is to say their bullshit semi-accidentally matches reality) often enough that we don't catch the cases where their bullshit is just plain wrong.

This is a pattern I see with a lot of people who have areas of high expertise:

  1. "LLMbeciles are not really useful in this field in which I have expertise..."
  2. "...but I think they're very useful in all these fields in which I have no expertise."

Gell-Mann must be rolling in his grave right now! (Yes, I know it's Crichton, but I'm sticking to his bit.)

[–] fallaciousBasis@lemmy.world -1 points 1 month ago (1 children)

I like AI. You just need to think of it like a search engine. Not a person.

[–] Monument@lemmy.sdf.org 2 points 1 month ago* (last edited 1 month ago)

I’ve had to help users see the light when AI claimed it had the solution for a problem, but after 3 troubleshooting steps, it invoked menus from programs other than the one they were using.

The users then kept telling me that they had their original issue, plus their software was missing features.

And while that’s great when, I guess, it leads to more feature-rich software, it’s a nightmare when the answer is a solid and resounding “No, it doesn’t work that way.” AI doesn’t want to tell someone no, so it lies.

[–] RedstoneValley@sh.itjust.works -4 points 1 month ago* (last edited 1 month ago) (1 children)

Might be an unpopular opinion, but I don't hate AI as in the technologies themselves. The possibilities of LLMs and ML are limited, but when used consciously, with the drawbacks and faults in mind, they can be useful. If you want to hate anything, hate the players, not the game...

  • People who sell LLMs to customers under false pretenses;
  • People who force the use of LLMs for tasks they are objectively bad at;
  • People who build massive datacenters, ruining the environment for their dubious claims;
  • People who feed the LLMs with massive amounts of stolen training data;
  • People who release those LLMs to customers who are not educated enough to deal with them (causing AI psychosis and general brainrot);
  • People who sell that stuff as if it were magic instead of what it really is: sophisticated autocomplete;
  • People who sell that stuff as if it were close to being a superintelligence and therefore dangerous, which is bullshit. The real danger lies in LLM chatbots being confidently wrong, persuading unsuspecting users to believe the hype;
  • People... I think there is a pattern here.
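The "sophisticated autocomplete" line above can be made concrete with a toy next-word predictor. This is a deliberately crude bigram sketch for illustration only; production LLMs use neural networks over tokens, but the core next-token idea is the same. The corpus and function names here are made up for the example:

```python
from collections import defaultdict, Counter

# Toy bigram "autocomplete": count which word follows which in a tiny
# corpus, then always emit the most frequent successor of the last word.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def complete(word, steps=3):
    """Greedily extend `word` by the most likely next word, `steps` times."""
    out = [word]
    for _ in range(steps):
        if out[-1] not in successors:
            break  # no known continuation
        out.append(successors[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # → "the cat sat on"
```

The output looks fluent because it mirrors the statistics of the training text, not because anything understands the sentence; that is the complaint in miniature.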
[–] JustJack23@slrpnk.net 3 points 1 month ago* (last edited 1 month ago)

I generally agree, with some asterisks.

It is not people in general; my neighbors are not trying to sell me AI. It's the capitalist class looking to make a buck at the expense of workers.

And another thing: LLMs in their current form require massive amounts of data and data centers, so hating LLMs and hating the infrastructure are, I think, the same thing here.
