this post was submitted on 14 Sep 2025
92 points (91.1% liked)

Fuck AI

4038 readers
511 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago

all 21 comments
[–] Cocopanda@lemmy.world 3 points 1 day ago

I don’t respect people who are keeping their Facebooks or Twitters. Just delete it already. It’s a Nazi shithole.

[–] Kirk@startrek.website 102 points 2 days ago (2 children)

LLMs cannot lie/gaslight because they do not know what it means to be honest. They are just next-word predictors.

I think the ads are terrible too, but it's a fool's errand to try to reason with an LLM chatbot.

[–] MudMan@fedia.io 27 points 2 days ago (1 children)

Man, seriously, every time I see someone get into one of these weird conversations where they try to convince a chatbot of something, it's slightly disturbing. Not realizing how pointless it is, and knowing it but still being drawn in by the not-quite-uncanny-valley language, are about equally unsettling.

People keep sharing this as proof of AI shortcomings, but it honestly makes me worry most about the human side. There's zero new info to be gained from the chatbot behavior.

[–] Kirk@startrek.website 1 point 1 day ago

Well said! On one hand I suppose I am "happy" to see people questioning the value of these bots, but assuming one "understands" anything or has a "motive" is still giving them power they don't have, and, IMO, leaving the door open to being fooled/manipulated by them in the future.

[–] Jankatarch@lemmy.world 15 points 2 days ago

They take a sentence and predict what the first result on Google or a response on WhatsApp would look like.
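To make the "next-word predictor" point concrete, here is a toy sketch. This is nothing like a real transformer (the corpus, the bigram counting, and the `predict_next` helper are all invented for illustration); it just shows the bare idea of picking the statistically most likely continuation rather than "knowing" anything:

```python
from collections import Counter, defaultdict

# Tiny made-up training text.
corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, which words followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Emit the continuation seen most often in training --
    # no honesty, no intent, just frequency.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" followed "the" twice, "mat" once)
```

A real LLM does the same thing at vastly greater scale over subword tokens, with a neural network instead of a lookup table, but the output is still a likely continuation, not a considered statement.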

[–] wetbeardhairs@lemmy.dbzer0.com 58 points 2 days ago (1 children)

You should really uninstall all meta products from your devices

[–] MudMan@fedia.io 12 points 2 days ago

This has been true for a decade. Gonna say that if them blatantly training facial recognition on people's private pictures didn't do it, then this isn't going to.

I keep bringing this up to people and nobody seems to be particularly interested in that acknowledgement. The shills don't like to compare precedent, the viral critics don't like to acknowledge the lack of novelty in the current situation or their inability to trigger any mainstream action.

I'm just... kinda sad.

[–] tiramichu@sh.itjust.works 21 points 2 days ago

Quite probable the LLM didn't even know that was there. Just because it appears in the chat window doesn't mean it's part of the LLM's chat history.

This is just Meta dumping bullshit ads in the chat in a way that is invisible to the chatbot.
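A hypothetical sketch of the mechanism being described (the names `history`, `render`, and `inject_ad` are invented for illustration, not Meta's actual implementation): the UI layer can draw anything into the chat window, but only the accumulated message list is ever sent to the model, so injected ads never enter its context.

```python
history = []  # the messages the model actually receives as context

def render(entry):
    # Stand-in for the UI layer: everything rendered shows up on screen.
    print(f"[{entry['role']}] {entry['text']}")

def user_says(text):
    msg = {"role": "user", "text": text}
    history.append(msg)  # becomes part of the model's context
    render(msg)

def inject_ad(text):
    # Rendered into the window, but never appended to history --
    # the model has no way to know it was shown.
    render({"role": "ad", "text": text})

user_says("stop showing me ads")
inject_ad("Try the new AI sunglasses!")

# Whatever the model says next is generated only from `history`,
# which contains no trace of the ad.
assert all(m["role"] != "ad" for m in history)
```

Under this (assumed) design, arguing with the chatbot about the ad is doubly futile: the model can't act on the complaint, and it may not even share your view of the screen.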

[–] supersquirrel@sopuli.xyz 27 points 2 days ago (2 children)

It is hilarious that people don't think AI ads are going to be astronomically worse than search engine ads, and you won't be able to adblock them either.

[–] hansolo@lemmy.today 19 points 2 days ago (1 children)

Oh no, they'll learn your personality profile and manipulate you, get you to the edge of suicide, only to bring you back off the ledge to sell you the newest AI sunglasses.

[–] SharkAttak@kbin.melroy.org 7 points 2 days ago

For one, it would be interesting to see how many times they can take "fuck you" as an answer.

[–] Kirk@startrek.website 12 points 2 days ago (2 children)

It seems so obvious to me that Google's switch to LLMs is about defeating adblockers, and yet I rarely see that point brought up.

[–] snooggums@piefed.world 9 points 2 days ago

Serving ads to 90% of people for massive profits just isn't enough for Google.

[–] supersquirrel@sopuli.xyz 9 points 2 days ago

It is a great example of how painfully naive tech people can be.

[–] jerkface@lemmy.ca 10 points 2 days ago* (last edited 2 days ago) (1 children)

Do not address them using second person pronouns. Say, "the LLM," "the model," "the last output," etc.

Do not allow the model to use first person pronouns. LLMs are not moral agents. We are being conditioned to accept a conceit that we are conversing. That is not what is happening. Do not engage with this conceit. It will be used against you.

[–] jerkface@lemmy.ca 2 points 1 day ago

Here's OpenAI to explain how talking to the model as if it were a person is harmful, even when you know it's not a person:


Acting “As If”: Anthropomorphism, Awareness, and Propaganda Mechanisms

Anthropomorphizing artificial systems illustrates a paradox: even when users understand a system’s synthetic nature, they may still engage with it “as if” it were human. This tension raises questions about whether conscious awareness protects against deeper influence. Propaganda theory and human–computer interaction research converge on the conclusion that performance and reflex override belief.

Jacques Ellul, in Propaganda: The Formation of Men’s Attitudes, emphasizes that propaganda does not depend on conviction. What matters is action. Once individuals behave as though a message were true, repetition and habit generate rationalization. Conscious skepticism erodes in the face of consistent enactment. Applied to anthropomorphism, addressing a system “as if” it were sentient may slowly normalize the posture regardless of explicit disbelief.

Herman and Chomsky’s Manufacturing Consent highlights structural reinforcement. Simply participating within terms set by institutions validates those terms. Synthetic systems that answer with human-like cadence establish frames of interaction. Engaging—even with awareness that the frame is fictive—strengthens it.

Kenneth Burke’s Rhetoric of Motives further illustrates how symbolic action creates identification. To treat a machine as though it were a partner is to perform alignment with that fiction. Belief becomes secondary to the persuasive force of enacted identification.

Festinger’s theory of cognitive dissonance explains why the gap between knowledge and action does not persist comfortably. People reduce dissonance not by ceasing the action, but by softening their disbelief. Acting “as if” therefore tends to reshape attitudes, even when undertaken consciously and ironically.

Findings in psychology and media studies reinforce these mechanisms. Clifford Nass and Byron Reeves in The Media Equation demonstrated that humans apply social rules to computers reflexively. Even software developers who wrote the code responded to their own creations as if those programs had motives or personalities. Evolutionary bias toward over-detecting agency explains this: better to see intent where none exists than to miss it where it matters. Kahneman’s dual-process framing shows why: the automatic, fast system triggers emotional and social responses long before reflective correction can intervene. Rational awareness reduces category errors but does not cancel the reflex.

Taken together, these perspectives show that awareness offers only partial protection. Knowing that a system is synthetic guards against mistaking it for a person, but it does not prevent emotional attachment or behavioral shifts born of repeated anthropomorphic engagement. Propaganda theory and HCI research agree: to act “as if” is already to enter the channel of influence. Awareness moderates, but does not neutralize, the binding force of performance.

[–] Deflated0ne@lemmy.world 13 points 2 days ago* (last edited 2 days ago) (2 children)

Can we please make the distinction between Artificial Intelligence and Virtual Intelligence?

Mass Effect used this to great effect.

An AI is a fully actualized intelligence: a sentient being that can think and feel, genuine intelligence. What we would call AGI.

A VI is a Virtual Intelligence. That's a chatbot or LLM. A program made to mimic intelligence.

In Mass Effect, VIs were used as tour guides and as user-facing interfaces to relay information.

I think the whole space would greatly benefit from making that distinction clear.

[–] stevestevesteve@lemmy.world 10 points 2 days ago

No matter what you and I call "real" AI, marketers are going to misuse the term on everything.

[–] atomicbocks@sh.itjust.works 3 points 2 days ago

We used to call these kinds of things Assistive Intelligences. But the marketing departments gonna market.

[–] ivanafterall@lemmy.world 7 points 2 days ago

Why even bother to gaslight AI though?