3 comments
Andreas K

@adnan @GossiTheDog

The point is not that some generative LLM can sometimes summarize some stuff correctly.

For this application you need a near-100% success rate, plus the ability to admit that the algorithm does not know the answer, fully or partially (something that often triggers an LLM to hallucinate in order to complete its answer).

If you do not have that near-100% reliability, and instead sometimes get a highly convincing (but possibly dangerous) answer, then you have a huge problem.
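To make the "near 100%" point concrete, here is a minimal back-of-the-envelope sketch; the success rate and query volume are assumed numbers for illustration, not figures from the thread:

```python
# Illustrative arithmetic: even a seemingly high per-answer success rate
# leaves many convincing-but-wrong answers once the system runs at scale.
success_rate = 0.99          # assumed: 99% of summaries are correct
queries_per_day = 100_000    # assumed query volume

expected_failures = queries_per_day * (1 - success_rate)
print(f"Expected wrong-but-convincing answers per day: {expected_failures:.0f}")
# With 99% accuracy and 100,000 daily queries, that is about 1000 bad answers every day.
```

The danger is that, unlike a search engine returning nothing, each of those failures arrives looking exactly like a correct answer.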

Andreas K

@adnan @GossiTheDog
The GDPR issues with LLMs are also not about updating/deleting personal data as such. LLM models need to be retrained regularly (say, monthly) to learn about a changing world anyway, and the GDPR does not say the deletion needs to happen immediately.

The problem is that when an LLM talks about John Doe, it can report the truth, hallucinate, or produce something in between.

So suppose you use an LLM to summarize search results.
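A minimal sketch of that kind of pipeline (all function names here are hypothetical stand-ins, not a real API) shows where the problem enters: the model's free-form text is returned verbatim, with no step that checks the summary against the sources.

```python
# Sketch of an LLM-summarizes-search-results pipeline (hypothetical names).

def search(query: str) -> list[str]:
    # Stand-in for a real search backend returning text snippets.
    return [
        "Obituary page mentioning a John Doe (a different, unrelated person).",
        "Mailing-list joke: 'John Doe died of an overdose... of gaming.'",
    ]

def llm_summarize(prompt: str) -> str:
    # Stand-in for a real model call. A real LLM may return truth,
    # hallucination, or something in between, as the comment above says.
    return "John Doe died in 2022 due to an overdose."

def summarize_search(query: str) -> str:
    snippets = search(query)
    prompt = f"Summarize these results about {query}:\n" + "\n".join(snippets)
    # The output is passed straight to the user -- no fact-check step exists.
    return llm_summarize(prompt)

print(summarize_search("John Doe"))
```

Nothing in this loop can distinguish a grounded claim from a confabulated one, which is exactly what makes the erasure request in the next comment so hard to honor.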

Andreas K

@adnan @GossiTheDog
The GDPR now gives John Doe the right to have “John Doe died in 2022 due to an overdose” removed from the summary.

1.) It is non-trivial to figure out why the LLM added it in the first place. Perhaps a joke on some mailing list by a buddy of his who died from an overdose of gaming?

2.) And how do you keep the LLM from replacing it with a hallucinated item like “John Doe is suspected to be a serial murderer”?

Item 2 is currently not solved in the academic literature, AFAIK.
