this post was submitted on 05 Aug 2025

Fuck AI

A lot of people at my work, especially management, are very pro AI. I haven't openly shared my opinion of AI, or the fact that I don't use it, because the hype around AI seems almost cult-like at my work. It was months before anyone even brought up hallucinations.

Part of me wants to share my reasons against AI at work. Some possible reasons I'm thinking of sharing: the environmental cost, the fact that you can't tell when it's hallucinating (so how can you trust it?), and the rot of critical thinking skills.

Any advice on discussing the negatives of AI at work? Or should I just keep my head down and let sloppers slop?

[–] ech@lemmy.ca 16 points 1 day ago* (last edited 1 day ago) (1 children)

That's because "hallucinating" isn't a bug, it's the core feature of LLMs. That tech bros have figured out how to kludge on a way to get them to sometimes recite accessible data doesn't change the fact that the central purpose of these algorithms is to manufacture text from nothing (well, technically from random noise). The "hallucination" is the failure of the tech bros to hide that function.

[–] wewbull@feddit.uk 5 points 1 day ago

It's not an add-on feature. The LLM produces something with the best score it can. Things that increase the score:

  • Things appropriate to the tokens in the request
  • Things which look like what it's been trained on.

So that includes:

  • Relevant facts
  • Grammatically correct language
  • A friendly style of writing
  • etc.

If it has no relevant facts, it will maximise the others to get a good score. Hence you get confidently wrong statements: sounding like it knows what it's talking about scores higher than actually giving correct information.

This behaviour is inherent to machine learning at its current level, though. It's like a "fake it until you make it" person who will never admit they're wrong.
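To make the point concrete, here's a toy sketch of next-token selection. Everything here is invented for illustration (the made-up "Atlantis" prompt, the vocabulary, and the hand-picked logit values are not from any real model); it just shows that the softmax-and-pick step always produces *some* token, and a fluent-sounding continuation can outscore an honest "I don't know" even when the model has no relevant fact:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the next token after "The capital of Atlantis is".
# There is no correct answer, but the training objective still rewards
# continuations that *look* like confident, well-formed text.
logits = {
    "Poseidonia": 2.1,   # fluent and confident-sounding -> scores high
    "unknown": 0.3,      # accurate hedge, but rare in training text
    "[refuse]": -1.0,    # declining to answer is rarely what training data does
}

probs = softmax(logits)
best = max(probs, key=probs.get)  # greedy decoding: the model must emit something
```

Here `best` comes out as the confident-sounding fabrication, not the hedge. The mechanism has no "I have no facts, so abstain" branch; it only ranks continuations by score, which is exactly the "fake it" behaviour described above.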