this post was submitted on 11 Oct 2025
546 points (99.3% liked)

Fuck AI


A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] Twipped@l.twipped.social 11 points 15 hours ago (2 children)

Given that LLMs are always at least several months behind reality in their training, and that the data they're training on is content produced by real journalists, I really don't see how one could EVER act as a journalist. I'm not sure it could even interview someone reliably.

[–] AeonFelis@lemmy.world 11 points 14 hours ago
  • "Why did you take that bribe?"
  • "I did not take any bribes!"
  • "You are absolutely correct! You did not take any bribes"
[–] utopiah@lemmy.world 3 points 15 hours ago (1 children)

AFAIK that's what RAG (https://en.wikipedia.org/wiki/Retrieval-augmented_generation) is all about: the training dataset is what it is, but you extend it at query time with your own data, which can be brand new and can even stay private.
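For the curious, here's a minimal, self-contained sketch of that retrieval step. Every name in it is illustrative, and a toy bag-of-words `embed()` stands in for a real embedding model:

```python
# Sketch of the RAG idea: rank your own (possibly brand-new, private) documents
# against the question, then prepend the best matches to the prompt.
# The frozen model never re-trains; fresh facts ride along in the context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a neural embedding model: bag of lowercase words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Your private/new data, never seen during training.
documents = [
    "Council votes 5-2 on 10 Oct 2025 to approve the transit budget.",
    "Local bakery wins regional award for its sourdough.",
]

def build_prompt(question: str, k: int = 1) -> str:
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."

print(build_prompt("What did the council vote to approve?"))
```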

That being said... LLMs are still language models: they compute prediction statistics for the next word (or token). In practice, despite names like "hallucinations" or "reasoning" and labels like "thinking", they are not doing any reasoning and have no logic for, e.g., fact-checking. I would expect journalists to pay a LOT of attention to distinguishing facts from fiction, speculation from historical record, and propaganda or popular ideas from actual events.
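To make "prediction statistics on the next word" concrete, here's a toy sketch (the hand-made bigram table is purely illustrative, not how a real model is built):

```python
# A language model only scores likely continuations; nothing in this
# machinery checks whether a continuation is true.
import random

# Hand-made toy "bigram model": P(next word | current word).
bigram = {
    "the":     {"senator": 0.6, "report": 0.4},
    "senator": {"denied": 0.5, "confirmed": 0.5},  # both equally "plausible"
}

def next_token(word: str) -> str:
    words, probs = zip(*bigram[word].items())
    # Chosen by learned frequency, not by checking any facts.
    return random.choices(words, weights=probs)[0]

print(next_token("senator"))  # "denied" or "confirmed", fact-free either way
```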

So... even if we were to magically solve the outdated-dataset problem, that still doesn't solve the linchpin, namely that the models are NOT thinking.

[–] buttnugget@lemmy.world 1 point 1 hour ago (1 children)

I am choosing to pronounce your username utter pyre.

[–] utopiah@lemmy.world 1 point 15 minutes ago