Ok, so your point is that people who interact with these AI systems will know they can't be trusted, and that this will alleviate the negative consequences of their misinformation.
The problems with that argument are many:
- The vast majority of people are not AI experts and do in fact have a lot of trust in such systems.
- Even people who do know often have no other choice. You don't get to talk to a human; it's this chatbot or nothing. And that's assuming the AI slop is even labelled as such.
- Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of the pieces are poisoned, I'm still going to demand non-poisoned candy. That people can no longer rely on getting accurate information should be unacceptable.
It's rather difficult to find people who are willing to lie and commit fraud for you. And even if you do, their actions leave evidence.
As this article shows, AIs are the ideal mob henchmen because they will do the most heinous stuff while creating plausible deniability for their tech bro boss. So no, AI is not "just like most people".