@drq AIs are not hard-wired to uncritically believe everything they read. A single correctly formulated follow-up question is usually enough for any decent model to become critical of itself and potentially completely change its mind. (Except the Bing one, smh — Bing will give you utter nonsense and be way too happy to die on the hill of convincing you it's 100% true).

AIs are hard-wired to respond, no matter what, since hallucinated answers have proven to be way more fun and marketable than reasonable excuses. AIs are not provided with any fact-checking capabilities, since that's expensive, difficult, and would inevitably shift the political compass of the model, while the goal of their creators is pretty much to please as many people as possible.

It's some people who are hard-wired to uncritically believe everything they read, even though no one ever promised it would be accurate, and despite all the disclaimers, in large text, in many places on the way to the chat window.