@cstross@wandering.shop On reading the article, I see it states, "AI platforms like ChatGPT often hallucinate totally incorrectly [sic] answers out of thin air."

While this is true as far as it goes, I believe it misstates — and understates — the problem. A more accurate statement of the problem is, "Large language models hallucinate ALL of their responses. Some of the hallucinations merely happen to coincide with reality." And you cannot easily tell those apart from the ones that don't.

They do not understand anything. They are not designed for understanding. What they are designed to do, very specifically, is generate grammatically correct output that looks convincing.
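
To put that in concrete terms, here is a deliberately toy Python sketch (the probabilities are invented, and no real model is a lookup table like this) of what "generating a response" amounts to: sampling the next token from a probability distribution over continuations. The sampling step is exactly the same whether the drawn token happens to be true or false.

import random

# Toy stand-in for a language model: a context maps to a probability
# distribution over possible next tokens. The numbers are made up.
next_token_probs = {
    ("the", "capital", "of", "france", "is"): {"paris": 0.92, "lyon": 0.05, "rome": 0.03},
    ("the", "capital", "of", "australia", "is"): {"sydney": 0.60, "canberra": 0.35, "perth": 0.05},
}

def sample_next(context):
    # Draw the next token purely by probability; truth never enters into it.
    dist = next_token_probs[tuple(context)]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("the capital of france is".split()))     # usually "paris" (true)
print(sample_next("the capital of australia is".split()))  # often "sydney" (false)

Real models compute that distribution with a neural network rather than a table, but the generation step is the same: pick a plausible continuation, with no check against reality.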