One indication that drawing the boundary is hard is just how badly current LLMs hallucinate. An LLM almost never says "I don't know" or "I am unsure", at least not in a meaningful fashion. Ask it about anything that's known to be an unsolved problem and it'll tell you so, but ask it about anything obscure and it'll come up with some plausible-sounding bullshit.
And I think that's a failure to recognize the boundary of what it knows vs what it doesn't.
I think this is a fair argument. Current AIs are quite bad at "knowing what they know". I think it's likely that we can/will solve this problem, but I don't have any particularly compelling reason to believe that, and I agree that my argument fails if it never gets solved.