I'm generally not that big on "hey look, I made the AI say something bad!" stories. It's programmed to please, so if you really want to make it say bad stuff, you probably can.
But with tons of folks now using AI chatbots as research tools, it seems not great if they're freely mixing facts with equally plausible-sounding lies backed up by bogus citations.
And if tech firms' solution to AI's Pinocchio problem is to plug their chatbots into the internet, have them look stuff up, and then confidently misinterpret what they find, this isn't going to be the last time we see a misinfo ouroboros in which one AI's false claim becomes another's documented fact.