@GossiTheDog arc search doesn’t have this problem https://search.arc.net/zDdh8IlbQ2eRJez9Ouy4
@adnan @GossiTheDog The problem is that when an LLM talks about John Doe, it can report the truth, hallucinate, or something in between. And you are still using an LLM to summarize search results.
@adnan @GossiTheDog
The point is not that some generative LLM can summarize some stuff correctly some of the time.
For this application you need a near 100% success rate and the ability to admit that the algorithm does not know the answer, fully or partially (which is exactly the situation that often triggers LLMs to hallucinate in order to complete their answer).
If you do not have that near 100% reliability, and instead sometimes get a highly convincing (but possibly dangerous) answer, then you have a huge problem.