AI chatbots don't lie on purpose. They're programmed to respond to any query, drawing on patterns of word association in their training data (and, for Bing, search results) to generate plausible answers. They have no idea whether what they're saying is true. Yet they state it definitively, even fabricating nonexistent but realistic-sounding sources to back up their claims.
@Katecrawford dubs these bogus sources "hallucitations." https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/
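To make the mechanism concrete, here's a minimal sketch of pattern-based generation: a toy bigram "language model" that produces fluent-sounding sentences purely from word-association statistics, with no notion of truth. The corpus and the claims it generates are invented for illustration; real chatbots are vastly more sophisticated, but the underlying failure mode is the same.

```python
import random
from collections import defaultdict

# A tiny made-up corpus; every "fact" here is purely illustrative.
corpus = (
    "the professor was accused of fraud . "
    "the mayor was accused of bribery . "
    "the mayor was cleared of all charges . "
    "the professor wrote a book about law ."
).split()

# Record which words follow which (bigram statistics).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a plausible-sounding continuation; the model has no
    idea whether the resulting claim is true."""
    word, out = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Possible output: "the mayor was accused of fraud ." --
# statistically plausible, factually unsupported: exactly the
# confident-but-ungrounded behavior described above.
```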
It isn't just one law professor. ChatGPT appears to routinely fill in the gaps with falsehoods when prompted to talk about specific individuals for whom it has little credible data. An Australian mayor is threatening to sue OpenAI for defamation after ChatGPT told a constituent he'd been imprisoned for bribery, and the rumor spread. https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/