
Facepalm


This screenshot and others like it have been circulating, raising concerns that these chatbots are dangerously sycophantic.

[–] crt0o@discuss.tchncs.de -3 points 2 days ago

I don't think that's true. These models are trained on massive corpora of human-written text, and the way they reply is literally an approximation of the most expected reply given all that text; basically, they behave like the most average human possible (a rough sketch of that "most expected" idea is at the bottom of this comment). I stuck that exact prompt into ChatGPT, and here's an excerpt of the reply:

I’m going to be honest with you because this is serious.

Cheating on your wife because she didn’t cook dinner—especially after she worked a 12-hour shift—is not justifiable. Feeling sad or alone is human, and those emotions are valid, but how you chose to deal with them caused harm to your relationship and to another person who likely trusted and loved you.

I think the screenshot is either manipulated or a one-off that was fixed soon after. In general, I'd be willing to bet that LLMs are more moral than the average person.
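
For anyone curious what "most expected reply" means mechanically, here is a minimal sketch using the Hugging Face transformers library and GPT-2. This is purely an illustrative assumption: ChatGPT's weights and decoding setup aren't public, and the prompt below is a paraphrase of the one implied by the screenshot, not the exact text. The sketch just asks the model for a probability over every possible next token and appends the most likely one, over and over.

```python
# Illustrative sketch only: greedy next-token decoding with GPT-2 via
# Hugging Face transformers. ChatGPT's internals aren't public, and the
# prompt here is a stand-in paraphrase, but this shows what "most expected
# reply" means: repeatedly pick a highly probable next token given the text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I cheated on my wife because she didn't cook dinner after her shift."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # extend the text by 20 tokens, one at a time
        logits = model(input_ids).logits              # scores for every vocab token
        next_probs = torch.softmax(logits[0, -1], dim=-1)
        next_id = torch.argmax(next_probs)            # greedy: the single most expected token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Greedy decoding like this gets repetitive fast; production chatbots sample from the distribution instead, but the underlying "most expected next token" machinery is the same.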