Are all outputs hallucinations? It's just that some happen to be correct and some aren't. The model doesn't know and can't tell unless it's specifically told (hence the guard rails).
But if I've gotta build so many hand rails (instructions), then is it really "AI"?
Point 1 - no. LLM outputs are not always hallucinations (generally speaking; some models are worse than others), but where they might veer off into fantasy, I've reinforced the system with deterministic programming. Think of it like giving your 8-year-old a calculator instead of expecting them to work out 7532x565 in their head. And a dictionary. And an encyclopedia. And Cliff's Notes. And a watch. And a compass. And a... you get the idea.
The role of the footer is to show you which tool the answer came from (the model's own internal priors, what you taught it, the calculator, etc.) and in what ratio the answer draws on each. Those weights are router-assigned. That's just one part of it, though.
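Rough sketch of what I mean by router-assigned attribution - everything here (the names, the weights, the routing rule) is made up for illustration, not the actual system:

```python
from dataclasses import dataclass

@dataclass
class Attribution:
    tool: str      # "calculator", "memory", "model", ...
    weight: float  # share of the answer drawn from this source

def route(query: str) -> list[Attribution]:
    # A real router would be learned or heuristic; this one just keys off digits.
    if any(ch.isdigit() for ch in query):
        return [Attribution("calculator", 0.8), Attribution("model", 0.2)]
    return [Attribution("model", 1.0)]

def footer(attributions: list[Attribution]) -> str:
    # This is the line that gets rendered under the answer.
    return " | ".join(f"{a.tool}: {a.weight:.0%}" for a in attributions)

print(footer(route("what is 7532 x 565?")))
# calculator: 80% | model: 20%
```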
Point 2 is a misread. These aren't instructions or system prompts telling the model "don't make things up" - that works about as well as telling a fat kid not to eat cake.
Instead, the deterministic elements fire first. The model gets the answer and then builds its context on top of it. That funnels the output in the right direction, and the LLM tends to stay in that lane. That's not putting guardrails on AI; it's just not using AI where AI is the wrong tool. Whether that's "real AI" is a philosophy question - what I do know, and can prove, is that it leads to far fewer wrong answers. There's a toy version of the flow below.
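To make "deterministic first" concrete, here's a toy version. The `llm_call` function is a stand-in placeholder, not a real API, and the regex handles only the multiplication case from the example above:

```python
import re

def deterministic_pass(query: str):
    """Handle what we can compute exactly; return None otherwise."""
    m = re.fullmatch(r"\s*(\d+)\s*[x*]\s*(\d+)\s*", query)
    if m:
        return int(m.group(1)) * int(m.group(2))
    return None

def llm_call(prompt: str) -> str:
    # Placeholder for whatever model you actually run.
    return f"[model output for: {prompt}]"

def answer(query: str) -> str:
    fact = deterministic_pass(query)
    if fact is not None:
        # The model never computes anything here; it only phrases a
        # known-correct result, which funnels its context and keeps it in lane.
        return llm_call(f"The verified answer to '{query}' is {fact}. State it plainly.")
    return llm_call(query)  # fall back to the model on its own

print(answer("7532 x 565"))
```

The point is that the model never does the arithmetic - it only verbalizes a result that was computed exactly.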
EDIT: I got my threads mixed up. The point still stands, but for context, see - https://lemmy.world/post/44805995
I refuse to call it AI
It's an LM, pure and simple. Anyway, none of the LMs could come up with the theory of relativity (even if you gave them all of the known physics up to 1915).
Nor can they play paper scissors rock (they don't realise it's pointless).
As far as I can tell they're wrong more times than they're right, and the only use I have for them is as a glorified search engine (and even then they're still fricking wrong).
They're only useful if you already know the answer, because if you don't know the answer, you don't know whether they've given you the wrong one.