Riley S. Faelan

@angelastella That's not true. There is a class of problems that LLMs are a perfect fit for: bedazzling humans. LLMs generate things that can be hard, at least at first glance, for a human not particularly familiar with the subject matter to tell apart from the genuine article. This means LLMs will be very useful for making cromulent-sounding political arguments, convincing-sounding advertising, and confident-sounding lies in Wikipedia articles.

And guess what three areas LLMs will be most eagerly put to good (?) use in?

Advertising LLM-friendly policies with screwy arguments that most people would find hard to push back against, including via lies on Wikipedia, of course!

@eniko @gabrielesvelto @cederbs

Ángela Stella Matutina

@riley @eniko @gabrielesvelto @cederbs

Well, we also devised efficient technologies for such non-problems as killing and torturing people and destroying whole cities.

Riley S. Faelan

@angelastella An analogy is like a cookie: it crumbles when stretched too far.

@eniko @gabrielesvelto @cederbs

Ángela Stella Matutina

@riley @eniko @gabrielesvelto @cederbs

My point, by now almost forgotten, is: if I don't recognize a problem as such, I don't need any solutions for it.
