Sure, but I think this is similar to the problem of social media being addictive. This kind of thing makes users feel good, and therefore makes companies more money.
I don't expect the major AI companies to self-regulate here, and I don't expect LLMs to ever find some magical line of being sycophantic enough to make lots of money while never encouraging a user toward anything unethical. Nor do I want to see those companies' definition of "unethical" become the universal one.
This right here. If someone can maliciously make an LLM do this, there are plenty of other users out there who will trigger it unknowingly and take the advice at face value.
It’s a search engine at the end of the day and only knows how to parrot.
That's why AI needs to be locally run (a minimal sketch of what that looks like is below). It takes the sycophancy profit incentive out of the equation, and allows models to shard into countless finetunes.
And it's why the big companies are all pushing safety so much, like they agree with the anti-AI crowd: they are scared of near-free, local models more than anything.
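For anyone wondering what "locally run" means in practice, here's a minimal sketch, assuming the llama-cpp-python bindings and a GGUF model file already downloaded to disk (the file path and model name are hypothetical; swap in any community finetune):

```python
# Minimal sketch of fully local inference with llama-cpp-python.
# No API key, no remote server: the model runs entirely on your machine.
# The path below is hypothetical; point it at any GGUF finetune you have.
from llama_cpp import Llama

llm = Llama(model_path="./models/community-finetune.gguf", n_ctx=2048)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Honestly, is this plan a bad idea?"}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```

Because the weights are just a file on disk, nothing stops you from swapping in a different finetune with different behavior, which is the sharding the comment above is talking about.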