"whether it's telling the truth"
"whether the output is correct or a mishmash"
"Truth" implies understanding that these don't have, and because of the underlying method the models use to generate plausible-looking responses based on training data, there is no "truth" or "lying" because they don't actually "know" any of it.
I know this probably comes off as super pedantic, and it is at least a little, but the anthropomorphism directed at these things is half the reason they're trusted.
That and how much ChatGPT flatters people.
As someone living in a red state, every city here is referred to as a "blue city", whether people are moving to it or not. State color has shit all to do with it and tends to reflect how much we've built the system on the assumption that land votes.