This shouldn’t be complicated. Defamation is an extremely well-settled area of law. https://www.abc.net.au/news/2024-11-04/ai-artificial-intelligence-hallucinations-defamation-chatgpt/104518612
“‘[Hallucinations are] not an issue that can be easily corrected,’ Dr Thorne says.”
Well, that’s simply untrue. If you’re publishing defamatory imputations, you shouldn’t be allowed to keep publishing them just because you spent a lot of money making it happen. You can easily stop defaming someone by taking your chatbot offline until you retrain it. Simplest thing in the world.
Maybe that’ll take six months and cost you half a billion dollars, but why is that your victim’s problem?
@NewtonMark I think you’re misinterpreting. He’s saying that no amount of retraining can prevent this. Hallucinations are inherent in how these systems work. They need to be taken offline and left offline.