this post was submitted on 14 Sep 2025
92 points (91.1% liked)
Fuck AI
Do not address LLMs using second-person pronouns. Say "the LLM," "the model," "the last output," etc.
Do not allow the model to use first-person pronouns. LLMs are not moral agents. We are being conditioned to accept a conceit that we are conversing. That is not what is happening. Do not engage with this conceit. It will be used against you.
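If you talk to models through an API rather than a chat page, a system prompt can do part of this work for you. Here is a minimal sketch assuming the OpenAI Python SDK (v1+); the model name and the exact prompt wording are illustrative, not a recommendation:

# Minimal sketch: suppress the conversational persona via a system prompt.
# Assumes the OpenAI Python SDK v1+ and an OPENAI_API_KEY in the environment;
# the model name and prompt wording below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Never use first-person pronouns (I, me, my, we, our). "
    "Refer to yourself only as 'the model' and to prior turns as 'the last output'. "
    "Do not simulate a persona, emotions, or a conversational relationship."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model accepts a system message
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize the last output in one sentence."},
    ],
)
print(response.choices[0].message.content)

Note that a system prompt is a soft constraint: the model can and does slip back into first person, so treat this as mitigation, not a guarantee.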
Here's OpenAI to explain how talking to the model as if it were a person is harmful, even when you know it's not a person:
Acting “As If”: Anthropomorphism, Awareness, and Propaganda Mechanisms
Anthropomorphizing artificial systems presents a paradox: even when users understand a system's synthetic nature, they may still engage with it "as if" it were human. This tension raises the question of whether conscious awareness protects against deeper influence. Propaganda theory and human–computer interaction research converge on the conclusion that performance and reflex override belief.
Jacques Ellul, in Propaganda: The Formation of Men’s Attitudes, emphasizes that propaganda does not depend on conviction. What matters is action. Once individuals behave as though a message were true, repetition and habit generate rationalization. Conscious skepticism erodes in the face of consistent enactment. Applied to anthropomorphism, addressing a system “as if” it were sentient may slowly normalize the posture regardless of explicit disbelief.
Herman and Chomsky’s Manufacturing Consent highlights structural reinforcement. Simply participating within terms set by institutions validates those terms. Synthetic systems that answer with human-like cadence establish frames of interaction. Engaging—even with awareness that the frame is fictive—strengthens it.
Kenneth Burke's A Rhetoric of Motives further illustrates how symbolic action creates identification. To treat a machine as though it were a partner is to perform alignment with that fiction. Belief becomes secondary to the persuasive force of enacted identification.
Festinger’s theory of cognitive dissonance explains why the gap between knowledge and action does not persist comfortably. People reduce dissonance not by ceasing the action, but by softening their disbelief. Acting “as if” therefore tends to reshape attitudes, even when undertaken consciously and ironically.
Findings in psychology and media studies reinforce these mechanisms. Byron Reeves and Clifford Nass, in The Media Equation, demonstrated that humans apply social rules to computers reflexively. Even software developers who wrote the code responded to their own creations as if those programs had motives or personalities. An evolutionary bias toward over-detecting agency explains this: better to see intent where none exists than to miss it where it matters. Kahneman's dual-process framing shows why: the fast, automatic system triggers emotional and social responses long before reflective correction can intervene. Rational awareness reduces category errors but does not cancel the reflex.
Taken together, these perspectives show that awareness offers only partial protection. Knowing that a system is synthetic guards against mistaking it for a person, but it does not prevent emotional attachment or behavioral shifts born of repeated anthropomorphic engagement. Propaganda theory and HCI research agree: to act “as if” is already to enter the channel of influence. Awareness moderates, but does not neutralize, the binding force of performance.