Stuart Gray

@simon LLMs can’t lie; they can only ever output tokens according to statistical probabilities derived from their training data.

It responds to its input exactly as it was trained to, with zero understanding or agency. Please don’t fall into the anthropomorphism trap like so many others.

This short paper by Murray Shanahan is a great, clear read on the difference between the ways humans think and LLMs predict: arxiv.org/pdf/2212.03551.pdf
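
For reference, a minimal sketch of the next-token sampling Stuart describes: the model assigns a score to every token in its vocabulary, and one token is drawn at random according to the resulting probability distribution. The vocabulary and logit values here are hypothetical toy numbers, not from any real model.

```python
import math
import random

# Hypothetical toy vocabulary and raw model scores (logits).
vocab = ["the", "cat", "sat", "lied"]
logits = [2.0, 1.0, 0.5, -1.0]

def sample_next_token(logits, temperature=1.0):
    # Softmax with temperature: lower temperature sharpens the
    # distribution toward the highest-scoring token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token according to its probability.
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next_token(logits, temperature=0.7))
```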

Simon Willison

@StuartGray I'm not convinced by that.

I think it's possible to use the term "lying" while also emphasizing that these are not remotely human-like entities.

fedi.simonwillison.net/@simon/
