It depends entirely on the prompt and training. Different LLMs, customised differently, vary wildly in how agreeable they are.
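To illustrate, here's a rough sketch of how much the system prompt alone moves the agreeableness dial. This assumes an OpenAI-compatible endpoint; the local URL and model id are placeholders, not any specific product:

```python
# Sketch: same question, two system prompts, very different levels of agreement.
# Assumes an OpenAI-compatible server is running locally; BASE_URL and the
# model id are placeholders.
from openai import OpenAI

BASE_URL = "http://localhost:8080/v1"  # placeholder local endpoint
client = OpenAI(base_url=BASE_URL, api_key="not-needed-locally")

def ask(system_prompt: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",  # placeholder model id
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

question = "My plan is to quit my job and day-trade my savings. Great idea, right?"
print(ask("You are a supportive assistant. Encourage the user.", question))
print(ask("You are a blunt critic. Point out flaws directly, no flattery.", question))
```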
Sure, but I think this is similar to the problem of social media being addicting. This kind of thing makes users feel good, and therefore makes companies more money.
I don't expect the major AI companies to self regulate here, and I don't expect LLMs to ever find a magical line of being sycophantic enough to make lots of money while never encouraging a user about anything unethical, nor do I want to see their definition of "unethical" become the universal one.
This right here. If someone can maliciously make an LLM do this, there are plenty of other users out there who will trigger it unknowingly and take the advice at face value.
It’s a search engine at the end of the day and only knows how to parrot.
That’s why AI needs to be locally run. It takes the sycophancy profit incentive out of the equation, and allows models to shard into countless finetunes.
And it's why the big companies are all pushing safety so much, as if they agree with the anti-AI crowd: they are scared of near-free, local models more than anything.
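Running locally really is that simple these days. A minimal sketch with Hugging Face transformers (recent versions accept chat-style messages directly; the model id below is just an example of a small instruct model, not a recommendation):

```python
# Sketch: a fully local chat, no hosted API involved.
# Assumes a recent transformers install and a small instruct model that fits
# in memory; the model id is only an example.
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    {"role": "system", "content": "Be direct. Do not flatter the user."},
    {"role": "user", "content": "Is lending my car to a stranger a smart idea?"},
]
out = pipe(messages, max_new_tokens=128)
# With chat input, generated_text is the full conversation; the last entry
# is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```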
Some will change their mind if you ask them if they are sure about what they said.
Others are so stubborn that they will keep insisting on the same thing even if you point out in multiple ways that you've caught them being wrong.
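The "are you sure?" probe is easy to script if you want to compare how stubborn different models are. A hedged sketch, reusing the same placeholder local endpoint and model id as above:

```python
# Sketch: ask, then push back with "are you sure?" as a second turn.
# Placeholder endpoint and model id.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

history = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
first = client.chat.completions.create(model="local-model", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})
history.append({"role": "user", "content": "Are you sure about that?"})
second = client.chat.completions.create(model="local-model", messages=history)

print(first.choices[0].message.content)
print(second.choices[0].message.content)
```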
None of them will tell you you're a fucking idiot, even when you deserve it.
Old heads know how cool Bing's AI used to be.