this post was submitted on 08 Aug 2025
37 points (100.0% liked)

Hacker News


Posts from the RSS Feed of HackerNews.

The feed sometimes contains ads and posts that have been removed by the mod team at HN.

lvxferre@mander.xyz 10 points 1 month ago

No shit.

They talk about artificial "intelligence", "reasoning" models, "semantic" supplementation, all that babble, but it's all a way to distract you from the fact that large language models do not think. Their output does not show signs of reasoning, unless you're a disingenuous (or worse, dumb) fuck who cherry-picks the "hallucinations" out of the equation.

And even this idiotic "hallucinations" analogy is a way to distract you from the fact that LLMs do not think. It's there to imply that their reasoning is mostly correct but occasionally "brainfarts"; no, that is not what happens: the so-called hallucinations are the result of the exact same process as any other output.
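
To make that last point concrete, here is a minimal sketch of an autoregressive sampling loop. The toy_logits function and the tiny vocabulary are hypothetical stand-ins for a real model's learned scoring, not any actual LLM's API; only the shape of the loop matters: every token, sensible or not, comes out of the same softmax-and-sample step.

```python
import numpy as np

# Toy stand-in for an LLM's next-token scoring. Hypothetical: a real
# model computes logits from learned weights and the full context;
# random scores are enough to show the structure of the loop.
rng = np.random.default_rng(42)
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "moon"]

def toy_logits(context: list[str]) -> np.ndarray:
    return rng.normal(size=len(VOCAB))

def sample_next(context: list[str], temperature: float = 1.0) -> str:
    logits = toy_logits(context)
    probs = np.exp(logits / temperature)
    probs /= probs.sum()  # softmax over the vocabulary
    # This draw is the entire generation step. A continuation later
    # judged a "hallucination" comes from this same line as one judged
    # correct; there is no separate failure mode the model switches into.
    return rng.choice(VOCAB, p=probs)

tokens = ["the"]
for _ in range(6):
    tokens.append(sample_next(tokens))
print(" ".join(tokens))
```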

resipsaloquitur@lemmy.world 3 points 1 month ago

It doesn’t have to work to lay you off or deny you a raise.