[–] Kolanaki@pawb.social 0 points 3 hours ago (1 children)

I literally based what I said on papers and video essays by students who used generative AI for specific tasks and worked around this issue. It's not just about the data it is "learning" from, but also about how you "reward" it for doing what you intend it to do. Let it figure out how to win at a game and it will cheat until you start limiting how it is allowed to win.
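Roughly what that "limiting how it is allowed to win" looks like in code (a toy reward-shaping sketch; the game, the exploit flag and the penalty value are all hypothetical, not from the papers mentioned):

```python
# Toy sketch of reward shaping: the base reward only counts winning, so an agent
# can learn to "win" via an exploit; a hand-written penalty term is what limits
# how it is allowed to win.

def shaped_reward(won: bool, used_exploit: bool, exploit_penalty: float = 10.0) -> float:
    reward = 1.0 if won else 0.0      # base signal: did the agent win?
    if used_exploit:
        reward -= exploit_penalty     # penalise the unintended strategy
    return reward

# A win achieved through the exploit now scores worse than an honest loss.
print(shaped_reward(won=True, used_exploit=True))    # -9.0
print(shaped_reward(won=False, used_exploit=False))  #  0.0
```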

[–] Thorry@feddit.org 1 points 18 minutes ago

Then either you or those kids don't understand the tech they're using.

Sure, you can use reinforcement learning to improve or shape a model in the way you want. However, as I said, the model doesn't know what is true and what is not true. That data simply isn't there and can't ever be there. So training the model 'not to lie' simply isn't a thing: it doesn't "know" it's lying, so it can't prevent or control lies or truths.
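To make that concrete, here is a toy sketch of what the training data for preference-based fine-tuning (RLHF-style) looks like. Everything here is hypothetical and simplified, but the point stands: the objective only knows which answer a rater preferred, there is no truth label anywhere in the loop.

```python
import math
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # answer the human rater preferred
    rejected: str  # answer the rater liked less
    # note: no "is_true" field -- that information is never collected

training_data = [
    PreferencePair(
        prompt="Why is the sky blue?",
        chosen="Because shorter (blue) wavelengths scatter more in the atmosphere.",
        rejected="idk, it just is",
    ),
]

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Toy pairwise objective: push the chosen answer's score above the rejected one.
    Truth never enters the objective, only the rater's preference -- a fluent,
    confident falsehood that raters like gets rewarded exactly like a correct answer."""
    return -math.log(1 / (1 + math.exp(-(score_chosen - score_rejected))))

print(preference_loss(2.0, 0.5))  # smaller loss when the preferred answer scores higher
```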

Let's say you create a large dataset and label, for every statement in it, whether it is true or false. That would be a hugely labour-intensive job, but perhaps possible (setting aside the issue that truth is often a grey area rather than a binary thing). If you instruct the model only to reiterate what is labelled true in the source data, it loses all freedom. Ask it a slightly different question that isn't in the source data and it simply won't have the information to know whether the answer is true or false.

So, just like it does now, it will output an answer that merely seems true: an answer that fits logically after the question. It loves slotting in those jigsaw pieces, and the ones that fit perfectly must be true, right? No, they have just as big a chance of being totally false. Just because the words fit doesn't mean it's true. You can instruct it not to output anything unless it is labelled true, but that limits the responses to the source data, and you've just created a really inefficient and cumbersome search tool.
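Here's a toy sketch of that last point (the facts, keys and wording are all made up for illustration): if the system may only repeat statements explicitly labelled true, it degenerates into a lookup table, and anything phrased slightly differently gets no answer at all.

```python
# Hypothetical "only say what is labelled true" system: effectively a lookup table.

verified_facts = {
    "capital of france": "Paris is the capital of France.",
    "boiling point of water at sea level": "Water boils at 100 °C at sea level.",
}

def answer(question: str) -> str:
    key = question.strip().lower().rstrip("?")
    # Only return statements explicitly marked true; refuse everything else.
    return verified_facts.get(key, "No verified answer available.")

print(answer("Capital of France?"))                  # exact match -> works
print(answer("What city is France governed from?"))  # slight rephrasing -> nothing
```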

This isn't an opinion thing or just a matter of improving the tech. The data simply isn't there; the mechanisms aren't there. There is no way an LLM can still do what it does and also tell the truth. No matter how hard the marketing machines work to convince people it is actual artificial intelligence, it is not. It's a text-prediction engine, and that's all it will ever be.