LLMs are not reliable enough to "check facts." That isn't even what they are designed to do well.
What they are designed to do is generate plausible-seeming streams of text similar to existing text. That is all.
There is no logic behind that, no verification. It's pure chance.
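Here's a toy sketch of what I mean, with an invented vocabulary and made-up probabilities standing in for a real model's learned distribution: each next word is just sampled by weight, and nothing in the loop ever checks whether the output is true.

```python
import random

# Hypothetical toy distribution standing in for a trained model's
# learned next-word probabilities (numbers invented for illustration).
NEXT_WORD = {
    "the capital of Australia is": [
        ("Canberra", 0.5),   # plausible and true
        ("Sydney", 0.4),     # plausible but false
        ("Melbourne", 0.1),  # plausible but false
    ],
}

def continue_text(prompt: str) -> str:
    # Sample one continuation, weighted by probability.
    # There is no lookup, no logic, no verification step anywhere.
    words, weights = zip(*NEXT_WORD[prompt])
    return random.choices(words, weights=weights)[0]

prompt = "the capital of Australia is"
print(prompt, continue_text(prompt))  # sometimes right, sometimes wrong
```

The "right" answer only comes out at whatever rate the training text happened to make it likely. That's why this can't work as fact-checking.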
Do not use them to check facts, please.
@futurebird
@ronaldtootall @hannu_ikonen
Yeah, LLMs aren't at all reliable for any of the use cases he proposed. 🙄😮‍💨 Literally any modern word processor is more reliable for spelling and grammar checks, and the suggestion of using them for research is always nauseating.