Trusting LLMs threatens your credibility.
I read a bogus claim about GPU instruction sets, cited to GPT-4 and an anonymous "expert". This is my area of expertise; I know the claim is demonstrably false. So now I know the author is relying on bullshit generators, and I doubt every other claim they make: with egregious errors in the parts I know about, how could I trust the parts I don't?
(Edit: narrowed the scope of the lead.)
@alyssa In a way, using LLMs for facts in earnest is going to expose a lot of people who benefited from the doubt before. Now everybody is on the lookout, and the easiest to fool are the fools themselves.