As long as generative AI is defined by "fancy autocomplete", also known as large language models, we should expect that hallucinations won't go away.
The idea of forcing autocomplete to tell the truth is absurd, but that is essentially what people are expecting from LLMs like ChatGPT and Google Bard. It's quite likely these expectations are simply impossible to meet.
https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/
@carnage4life I'd posit that if Google and the rest didn't want people thinking their LLMs would tell nothing but the truth, they shouldn't have deployed them for tasks where truthfulness is expected.
Like Tesla naming their driver-assistance system "Autopilot" and then saying "people shouldn't think this thing can drive the car by itself".