this post was submitted on 31 Jul 2025
161 points (96.0% liked)
Facepalm
3348 readers
founded 2 years ago
you are viewing a single comment's thread
That's because they are?
They have no idea about context, morals, ethics, or right and wrong.
I don't think that's true. These models are trained on massive corpora of human-written text, and the way they reply is literally an approximation of the most expected reply given all that text; basically, they behave like the most average human possible. I stuck that exact prompt into ChatGPT, and here's an excerpt of the reply:
I think the screenshot is either manipulated or a one-off that was fixed soon after. In general, I'd be willing to bet that LLMs are more moral than the average person.
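For anyone curious what "an approximation of the most expected reply" means mechanically, here's a toy sketch. Real LLMs use neural networks over token probabilities, not word counts, but the core idea of predicting the likeliest continuation can be illustrated with a bigram model; the corpus and prompt word below are made up for illustration.

```python
from collections import Counter

# Made-up miniature "training corpus" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def most_expected_next(word):
    # Return the continuation seen most often after `word` in the corpus.
    candidates = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(candidates, key=candidates.get)

print(most_expected_next("the"))  # "cat": it follows "the" twice, more than any other word
```

An actual LLM does the same kind of thing at a vastly larger scale, which is why its answers tend toward whatever the training text makes most statistically expected rather than toward any explicit moral rule.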