It isn't just one law professor. It appears ChatGPT routinely fills in the gaps with falsehoods when prompted about specific individuals on whom it has limited credible data. An Australian mayor is threatening to sue OpenAI for defamation after ChatGPT told a constituent he'd been imprisoned for bribery, and the rumor spread. https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/
I'm generally not that big on "hey look, I made the AI say something bad!" stories. It's programmed to please, so if you really want to make it say bad stuff, you probably can.
But with tons of folks now using AI chatbots as research tools, it seems not great that they freely mix facts with equally plausible-sounding lies, backed up by bogus citations.