@angelastella That's not true. There is a class of problems that LLMs are a perfect fit for: bedazzling humans. LLMs generate things that can be hard, at least at first glance, for a human not particularly familiar with the subject matter to tell apart from the genuine article. This means LLMs will be very useful for making cromulent-sounding political arguments, convincing-sounding advertising, and confident-sounding lies in Wikipedia articles.
And guess what three areas LLMs will be most eagerly put to good (?) use for?
For promoting LLM-friendly policies with screwy arguments that most people would find hard to push back against, including via lies on Wikipedia, of course!
@riley @angelastella @eniko @gabrielesvelto @cederbs And don’t forget spam.
So much spam.