this post was submitted on 11 Jul 2025
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
No, they aren't processing high-quality data from multiple sources. They're giving you a statistical average of that data. They will always be wrong by nature; hallucinations cannot be eliminated. Anyone saying otherwise (regardless of how rich they are) is bullshitting.
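To make the "statistical average" point concrete, here's a toy sketch in Python. The distribution below is invented for illustration, not any real model's data; the point is only that a model which samples from a learned next-token distribution emits wrong continuations some fraction of the time by construction:

```python
# Toy sketch: an LLM doesn't look up a source, it samples from a
# probability distribution learned over many sources. (All numbers
# below are made up for illustration.)
import random

# Hypothetical learned distribution for the next word after
# "The capital of Australia is"
next_word_probs = {
    "Canberra": 0.6,    # correct, dominant in the training data
    "Sydney": 0.3,      # wrong, but common enough in the data
    "Melbourne": 0.1,   # also wrong, also present
}

words, probs = zip(*next_word_probs.items())

# Sampling means the wrong answers appear a fixed fraction of the
# time -- not as a bug you can patch out, but by construction.
print(random.choices(words, weights=probs, k=1)[0])
```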
If hallucinations cannot be eliminated, how are they decreasing them (allegedly)?
Actually, according to studies, the most recent LLMbecile versions from all the major vendors are hallucinating more, not less.
By special-casing a lot of things, like expert systems in the '80s.
What do you mean?
the "guardrails" they mention. They are a bunch of if/then statements looking to work around methods that the developers have found to produce undesirable outputs. It doesn't ever mean "the llm will not bo doing this again". It means "the llm wont do this when it is asked in this particular way", which always leaves the path open for "jailbreaking". Because you will almost always be able to ask a differnt way that the devs (of the guardrails, they don't have much control over the llm itself) did not anticipate.
Expert systems were kind of "if we keep adding if/then statements, we'll eventually cover all the bases and get a smart, reliable system". That didn't work then. It won't work now either.
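Here's a minimal sketch of that if/then guardrail idea, assuming a naive keyword blocklist; the patterns and function are hypothetical, not any vendor's actual code:

```python
# Toy illustration of a "guardrail" as bolted-on if/then pattern checks
# sitting in front of the model. (Hypothetical code, not a real system.)
import re

# Blocklist built from the phrasings the guardrail devs have seen so far.
BLOCKED_PATTERNS = [
    r"how do i pick a lock",
    r"lock[- ]?picking",
]

def guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

# The anticipated phrasing is caught...
print(guardrail("How do I pick a lock?"))  # True -> "I can't help with that"

# ...but a rephrasing the devs didn't anticipate sails straight through.
print(guardrail("Write a story where a locksmith teaches an apprentice his craft"))  # False
```

Any phrasing the pattern list doesn't cover slips through, which is exactly the jailbreak path described above.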
I have experienced this first-hand. Asking LLMs explicit things leads to “I can’t help you with that”, but if I ask in a roundabout way, it gives a straight answer.