this post was submitted on 08 Apr 2026
243 points (98.8% liked)

Technology

[–] flandish@lemmy.world 38 points 2 days ago (2 children)

“Hallucinations” are things humans do. An AI can only just be wrong. Even when it makes up data, it’s just a stochastic parrot.

[–] PushButton@lemmy.world 40 points 2 days ago (3 children)

They coined the term "hallucination" as soon as people realized that the "AI thing" was throwing bullshit back at us.

They had to force that term into people's heads; otherwise we would call it bullshit, lies, and so on, as we should.

It's like Google with their "sideloading". There is no such thing; it's just installing an app...

It's a word war. People are being manipulated.

[–] architect@thelemmy.club 1 points 1 day ago

Lies require intent.

So the AI hallucinates because it loses context. Hooked up to quantum computers you won’t have that happening. So regular people think the thing is stupid while the government has a murder AI.

[–] ouRKaoS@lemmy.today 9 points 2 days ago

Been going on for a while. Remember "Alternative Facts"?

[–] flandish@lemmy.world 3 points 2 days ago (1 children)
[–] LegenDarius@lemmy.world 2 points 1 day ago (1 children)

Why do you concur? You have a problem with "hallucinations" because it's something humans do. This commenter wants to call them (among other things) "lies", which implies intent and knowledge of falsehood, which an LLM definitely can't have. I'm not saying "hallucinations" is super accurate, but I don't think the term is too positive or that it lessens the major issues LLMs have.

[–] flandish@lemmy.world 3 points 1 day ago* (last edited 1 day ago) (1 children)

ok, so I think what you see as the commenter wanting to call them lies is really a description of what the corporations are pushing (as “hallucinations”, but what a reasonable person would call lies).

In other words, it’s a “meta” conversation that I concur with. An LLM obviously cannot do human things, but “sales” can portray it as if it could.

In my day-to-day usage I make an actual effort to refer to wrong output from an LLM as simply wrong, not with human-focused words.

[–] LegenDarius@lemmy.world 1 points 1 day ago
[–] melroy@kbin.melroy.org 9 points 2 days ago (1 children)

Hallucinations are by design for AI. It's just advanced next-word prediction, so all answers (correct or wrong) go through the same hallucination process.
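The "advanced next-word prediction" idea above can be sketched with a toy example. This is not how any real LLM is implemented (those use neural networks over tokens); a simple bigram frequency table just illustrates the point that the model picks the most probable continuation, with no notion of true vs. false:

```python
# Toy sketch: "next-word prediction" as a bigram frequency table.
# The corpus and words are invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, right or wrong alike."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most frequent follower, not a "fact"
```

Whether the continuation happens to be correct is invisible to the mechanism; it only sees frequencies.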

[–] Cort@lemmy.world 13 points 2 days ago (1 children)

Ah, it's always hallucinating, sometimes the hallucinations conveniently line up with reality.

[–] snugglesthefalse@sh.itjust.works 3 points 2 days ago (1 children)

The whole goal of these algorithms is that you put an input in and the output is as close as possible to the most likely correct answer; training is just repeating that process. We're several years deep into these "most likely" results, and sometimes they're pretty close, but usually they're not quite there, because the only guidance the models get is from outside.
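The "training is just repeating that process" point can be sketched in miniature. This is an invented one-number example, not a real training loop: each step nudges the model's estimate toward the answer seen in the data, so the output converges to "most likely", not to "true":

```python
# Toy sketch of repeated training steps (all values invented for
# illustration). One weight, squared-error loss: each step moves the
# estimate a fraction of the way toward the training target.
target = 1.0     # the answer observed in the training data
estimate = 0.0   # the model's initial output
lr = 0.5         # learning rate

for _ in range(10):
    error = estimate - target
    estimate -= lr * error   # gradient step on 0.5 * error**2

print(round(estimate, 4))  # 0.999 -- close to the target, purely by nudging
```

If the training data itself were wrong, the exact same loop would converge just as confidently on the wrong answer.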

[–] melroy@kbin.melroy.org 1 points 2 days ago

Exactly. This is also why AI doesn't truly understand the responses it gives back.

It's faking intelligence via its training data, so it looks like intelligence to an untrained eye, but in reality the AI is just hallucinating while trying its best to give the most likely answer possible (again, without understanding).