Workers should learn AI skills and companies should use it because it's a "cognitive amplifier," claims Satya Nadella.

in other words please help us, use our AI

5too@lemmy.world 11 points 1 week ago

Right now, it’s just a fun toy, prone to hallucinations.

That's the thing though - with an LLM, it's all "hallucinations". They're just usually close to reality, and are presented with an authoritative, friendly voice.

(Or, in your case, they're usually close to the established game reality!)

merc@sh.itjust.works 3 points 6 days ago

This is the thing I hope people learn about LLMs: it's all hallucinations.

When an LLM has excellent data from multiple sources to answer your question, it is likely to give a correct answer. But that answer is still a hallucination. It's dreaming up a sequence of words that is likely to follow the previous words. It's more likely to give an "incorrect" hallucination when the data is contradictory or vague, but the process is identical. It's just trying to dream up a likely series of words.
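
To make that concrete, here's a toy, purely illustrative sketch in Python (not how any real model is implemented, and nothing from the article): a "model" that is just a table of next-word probabilities, plus a loop that samples one word at a time. Whether it lands on "paris" or "berlin", the generation step is identical; only the probabilities learned from the data differ.

```python
import random

# Toy "language model": for each context word, a probability distribution
# over the next word. A real LLM does the same kind of thing, just with a
# neural network scoring every token in a huge vocabulary given the whole
# preceding context.
NEXT_WORD_PROBS = {
    "<start>": {"the": 1.0},
    "the":     {"capital": 1.0},
    "capital": {"of": 1.0},
    "of":      {"france": 1.0},
    "france":  {"is": 1.0},
    # Good, consistent training data concentrates probability on "paris";
    # contradictory or vague data would flatten this distribution. Either
    # way, the sampling step below is exactly the same.
    "is":      {"paris": 0.9, "berlin": 0.1},
    "paris":   {"<end>": 1.0},
    "berlin":  {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Dream up a likely sequence of words, one sample at a time."""
    words = ["<start>"]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1], {"<end>": 1.0})
        choices, weights = zip(*probs.items())
        nxt = random.choices(choices, weights=weights, k=1)[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words[1:])

print(generate())  # usually "the capital of france is paris",
                   # occasionally "the capital of france is berlin"
```

The "correct" and "incorrect" outputs come out of the same line of code; correctness only shows up in how the probabilities were shaped by the training data.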

OctopusNemeses@lemmy.world 1 point 6 days ago (last edited 6 days ago)

Before the tech industry set its sights on AI, what's now called "hallucination" went by a different name: error rate.

It's the rate at which the model incorrectly labels outputs. But of course the tech industry, being what it is, needs to come up with alternative words that spin-doctor bad things into not-bad things. So what the field of AI had for decades been calling error rate, everyone now calls "hallucinations". Error has far worse optics than hallucination. Nobody would be buying this LLM garbage if every article posted about it included paragraphs about how it's full of errors.

That's the thing people need to learn.