The point is not that some generative LLM can sometimes summarize some material correctly.
For this application you need a near-100% success rate and the ability to admit that the algorithm does not know the answer, fully or partially (which is exactly the situation that often triggers LLMs to hallucinate in order to complete their answer).
If you do not have that near-100% reliability, and instead sometimes get a highly convincing (but possibly dangerous) answer, then you have a huge problem.
@adnan @GossiTheDog
The GDPR issues with LLMs are also not about updating or deleting personal data. LLMs need to be updated regularly (say, monthly) to keep up with a changing world anyway, and the GDPR does not say a deletion needs to happen immediately.
The problem is that when an LLM talks about John Doe, it can report the truth, hallucinate, or produce something in between.
So you use an LLM to summarize search results.