And that’s also why we started saying that “AI”/LLMs are “hallucinating” whenever, every now and then, the trick fails to fool us and we get a glimpse of how mundane it actually is. We don’t want the magic to be gone.
So instead of accepting the truth (the “AI”/LLM isn’t failing in some special way; it is making stuff up as it always does, and sometimes the output simply misses our expectations), we pretend it’s part of the show. Because we really WANT to be awed. This is dangerous stuff, IMHO.