this post was submitted on 17 Sep 2025
88 points (96.8% liked)

Fuck AI

4038 readers
526 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
top 18 comments
[–] DandomRude@lemmy.world 11 points 3 hours ago* (last edited 3 hours ago)

Apart from the fact that these hallucinations just cannot be fixed, they don't even seem to be the only major problem at the moment: ChatGPT 5, for example, often seems to live in the past and is regularly unable to assess reliability when a question has to be answered from current data.

For example, when I ask who the US president is, I regularly get the answer that it is Joe Biden. When I ask who the current German chancellor is, I get the answer that it is Olaf Scholz. This raises the question of what LLMs can be used for if they cannot even answer these very basic questions correctly.

The error rate simply seems far too high for use by the general public, and that's without even considering hallucinations; it's just from answers to questions that rely on outdated or unreliable data.

And that, in my opinion, is the fundamental problem that also causes LLMs to hallucinate: they understand neither the question nor their own output. Everything they produce is a probability calculation over recurring patterns. They recognize the patterns themselves, but not the logic behind the word order in a sentence. So they have no concept of right or wrong, only a statistical model of word sequences, and the meaning of a sentence cannot be fully captured that way. That is why LLMs can only somewhat deal with sarcasm, for example: if the majority of sarcastic sentences in their training data have /s written after them, that marker can serve as an indicator, so they can at least identify a sarcastic question if it contains /s.
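To make that concrete, here is a minimal toy sketch (not any real model, just an illustration of the principle) of what "predicting the most likely next word" means. Nothing in it checks whether the output is true; it only counts which word followed which in the training text:

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees sequences of words.
corpus = [
    "the us president is joe biden",
    "the us president is joe biden",
    "the us president is donald trump",
]

# Count which word follows which (a bigram table). That is all the "knowledge" there is.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation.

    There is no notion of truth here: whichever word followed
    most often in the training data wins, even if it is outdated.
    """
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("is"))  # -> "joe", because that pattern dominates the (outdated) data
```

A real LLM does this with billions of parameters and whole contexts instead of a single previous word, but the mechanism is still next-token prediction over patterns, not understanding.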

Of course, this does not mean that there are no use cases for LLMs, but it does show how excessively oversold AI is.

[–] BlameTheAntifa@lemmy.world 12 points 4 hours ago

Literally everything a generative AI outputs is a hallucination. It is a hallucination machine.

Still, I like this fix. Let us erase AI.

[–] Cevilia@lemmy.blahaj.zone 12 points 4 hours ago

You can't "fix" hallucinations. They're literally how it works.

[–] ignirtoq@fedia.io 7 points 4 hours ago (1 children)

"Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly," the researcher wrote.

While there are "established methods for quantifying uncertainty," AI models could end up requiring "significantly more computation than today’s approach," he argued, "as they must evaluate multiple possible responses and estimate confidence levels."

"For a system processing millions of queries daily, this translates to dramatically higher operational costs," Xing wrote.

  1. They already require substantially more computation than search engines.
  2. They already cost substantially more than search engines.
  3. Their hallucinations make them unusable for any application beyond novelty.

If removing hallucinations means Joe Shmoe loses interest in asking it questions a search engine could already answer, but it delivers even 1% of the capability promised by all the hype, they would finally have an actual product. The good long-term business move is absolutely to remove hallucinations and add uncertainty. Let's see if any of them actually do it.
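As a rough illustration of the "evaluate multiple possible responses and estimate confidence levels" approach the quote describes, it could look something like this sketch (the `generate` stub is hypothetical and just fakes disagreement; a real system would sample an actual model n times, which is exactly where the extra cost comes from):

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model response.

    A real system would call an LLM with sampling enabled; here we
    just fake some disagreement to show the mechanics.
    """
    return random.choice(["Joe Biden", "Joe Biden", "Donald Trump"])

def answer_with_confidence(prompt: str, n: int = 10, threshold: float = 0.7) -> str:
    """Sample n responses and only answer if they mostly agree."""
    samples = [generate(prompt) for _ in range(n)]
    best, count = Counter(samples).most_common(1)[0]
    confidence = count / n
    if confidence < threshold:
        return f"I don't know (only {confidence:.0%} agreement)"
    return f"{best} (confidence {confidence:.0%})"

print(answer_with_confidence("Who is the current US president?"))
```

One question now costs n model calls instead of one, which is the "dramatically higher operational costs" part.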

[–] DreamlandLividity@lemmy.world 3 points 3 hours ago

They probably would if they could. But removing hallucinations would remove the entire AI. The AI is not capable of anything other than hallucinations that are sometimes correct. They also can't give confidence, because that would be hallucinated too.

[–] theunknownmuncher@lemmy.world 5 points 5 hours ago* (last edited 4 hours ago)

I know everyone wants to be like "ha ha told you so!" and hate on AI in here, but this headline is just clickbait.

Current AI models have been trained to give a response to the prompt regardless of confidence, causing the vast majority of hallucinations. By incorporating confidence into the training and responding with "I don't know", similar to training for refusals, you can mitigate hallucinations without negatively impacting the model.
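Roughly, the incentive argument (as I understand it) is just expected-value arithmetic: if a wrong answer costs points and "I don't know" scores zero, guessing only pays off above some confidence threshold. A minimal sketch of that arithmetic (the reward and penalty values are illustrative, not from the article):

```python
def expected_score(confidence: float, correct_reward: float = 1.0, wrong_penalty: float = -1.0) -> float:
    """Expected score of answering when you are right with probability `confidence`."""
    return confidence * correct_reward + (1.0 - confidence) * wrong_penalty

def should_answer(confidence: float) -> bool:
    """Answer only when guessing beats abstaining (saying "I don't know" scores 0)."""
    return expected_score(confidence) > 0.0

for c in (0.3, 0.5, 0.8):
    action = "answer" if should_answer(c) else "say 'I don't know'"
    print(f"confidence {c:.0%}: expected score {expected_score(c):+.2f} -> {action}")

# With wrong_penalty = 0.0 (grading that only rewards being right), guessing is never
# worse than abstaining, so a model trained that way learns to always produce an answer.
```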

If you read the article, you'll find the "destruction of ChatGPT" claim is nothing more than the "expert" assuming that users will stop using AI if it starts occasionally telling them "I don't know". It is not about any technical limitation preventing hallucinations from being solved; in fact, the "expert" agrees that hallucinations can be solved.

[–] CubitOom 32 points 7 hours ago (2 children)

Maybe the real intelligence was the hallucinations we made along the way

[–] TheBat@lemmy.world 3 points 4 hours ago

Feed us LSD

[–] fartographer@lemmy.world 15 points 7 hours ago

You're absolutely right, Steven! Hallucinations can be valuable to simulate intelligence and sound more human like you do, Stephanie. If you wanted to reduce hallucinations, Tiffany—you should reduce your recreational drug use, Tim!

[–] Steve@startrek.website 25 points 7 hours ago

It's a win-win

[–] Corelli_III@midwest.social 16 points 6 hours ago

how many times are these guys going to release a paper just because one of them thought to look up "stochastic"

fucking bonkers, imagine thinking this is productive

[–] paraphrand@lemmy.world 10 points 6 hours ago* (last edited 6 hours ago)

This is what I hang my assumption that they won't reach AGI on. It's why hearing them raise money on wild hype is annoying.

Unless they come up with a totally new foundational approach. Also, I’m not saying current models are useless.

If anyone cared what experts think, we would not be here.

[–] floo@retrolemmy.com 13 points 7 hours ago

Sounds like a solid plan