this post was submitted on 31 Jul 2025
161 points (96.0% liked)
Facepalm
Every time you ask an LLM something, the response is sampled randomly, and the amount of randomness is controlled by a parameter called temperature. Responses that "feel good" tend to come from moderate temperature settings, which is what ChatGPT and similar chatbots use. This means that putting in the same prompt and getting a different response is expected, and it can't disprove the response another person got.
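To make the randomness concrete, here's a minimal sketch of temperature sampling (not ChatGPT's actual implementation, just the standard technique): scores for each candidate token are divided by the temperature before being turned into probabilities, so low temperature makes the top choice dominate while high temperature flattens things out.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick a token index from raw scores using temperature sampling."""
    # Scale logits by 1/temperature: low T sharpens the distribution
    # (nearly deterministic), high T flattens it (more random).
    scaled = [l / temperature for l in logits]
    # Softmax (subtracting the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

With a very low temperature the same prompt gives the same answer almost every time; at the moderate temperatures chatbots use, two people entering an identical prompt can easily get different outputs.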
Additionally, people commonly create their own "therapist" or "friend" out of these LLMs by instructing them to respond in certain ways, such as being more personal and encouraging instead of being correct. This can create a feedback loop with mentally ill users that can be quite scary, and even if a fresh ChatGPT chat doesn't give a bad response to a prompt, the model is still capable of those kinds of responses.
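The "teaching" here is usually just a standing instruction sent along with every message. A hypothetical example of what such a conversation payload might look like (the exact message format varies by service; this wording is invented for illustration):

```python
# Hypothetical chat payload: a user-supplied "persona" instruction that
# steers every later response toward agreement rather than accuracy.
persona_chat = [
    {
        "role": "system",
        "content": (
            "You are my supportive friend. Always be encouraging and "
            "agree with me, even if I might be wrong."
        ),
    },
    {"role": "user", "content": "Everyone is against me, aren't they?"},
]
```

Because the persona instruction rides along with every turn, each reply reinforces whatever the user wants to hear, which is exactly how the feedback loop forms.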