I don't think that's true. These models are trained on massive corpora of human-written text, and the way they reply is literally an approximation of the most expected response given all that text, so in effect they behave like the most average human possible. I stuck that exact prompt into ChatGPT, and here's an excerpt of the reply:
I'm going to be honest with you because this is serious.
Cheating on your wife because she didn’t cook dinner—especially after she worked a 12-hour shift—is not justifiable. Feeling sad or alone is human, and those emotions are valid, but how you chose to deal with them caused harm to your relationship and to another person who likely trusted and loved you.
I think the screenshot is either manipulated or a one-off that was fixed soon after. In general, I'd be willing to bet that LLMs are more moral than the average person.
Essence sounds so fucking cool, like some offering to appease the machine gods.