@alyssa
I read a text (a blog entry? a rant?) a few years ago that annoyed me (a lot).

It was about a researcher who basically stated that people shouldn't criticize or review his papers because he was "right".
Paraphrasing: "The probability that someone reviewing one of my papers misunderstands it or gets something wrong is vastly higher than the probability of me making a mistake."

Maybe LLMs will finally have the effect that people stop taking everything at face value. In the past, a text was very likely written by a human. We can't say that anymore.

(Of course, the effect could be the opposite: "Our new FactGPT makes sure to tell only 'the truth'. If you see a text with the green FactGPT checkmark™️, you can be sure that it only contains 'truth'.")