@simon LLMs can’t lie; they can only ever output tokens according to statistical probabilities derived from their training data.
They respond to their input exactly as they were trained to, with zero understanding or agency. Please don’t fall into the anthropomorphism trap like so many others have.
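To make that concrete, here is a minimal Python sketch of next-token sampling; the vocabulary and logits are invented for illustration, not taken from any real model:

```python
import math
import random

# An LLM assigns a score (logit) to every token in its vocabulary,
# turns those scores into a probability distribution with softmax,
# then samples the next token from that distribution; that is all
# "output" means here. Vocabulary and logits below are hypothetical.
vocab = ["the", "cat", "sat", "lied", "truth"]
logits = [2.1, 0.3, 1.5, -0.8, 0.1]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, (round(p, 3) for p in probs))), "->", next_token)
```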
This short paper by Murray Shanahan is a great, clear read on the differences between the way humans think and the way LLMs predict: https://arxiv.org/pdf/2212.03551.pdf
@StuartGray I'm not convinced by that.
I think it's possible to use the term "lying" while also emphasizing that these are not remotely human-like entities.
https://fedi.simonwillison.net/@simon/110146906375675620