@david_chisnall You seem to presuppose that LLMs perform some form of interpretation, even if they might disambiguate the user's natural language incorrectly. I think this is still wildly optimistic. They just attempt to predict what another natural-language user might say in reply, based on what is essentially a huge compressed Markov model. A human doing that would interpret the message first, but an LLM skips that step altogether.
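
A toy sketch of what I mean by "prediction without interpretation": an n-gram Markov model that continues text purely from co-occurrence statistics. Illustrative only, and obviously an LLM's conditioning is vastly richer, but the structural point is the same, and nothing in this loop interprets the input; it only continues it.

```python
import random
from collections import defaultdict, Counter

def train_ngram(corpus: list[str], n: int = 2) -> dict:
    """Count which token follows each (n-1)-token context."""
    model = defaultdict(Counter)
    for i in range(len(corpus) - n + 1):
        context = tuple(corpus[i : i + n - 1])
        model[context][corpus[i + n - 1]] += 1
    return model

def predict_next(model: dict, context: tuple[str, ...]) -> str:
    """Sample the next token in proportion to observed frequency.
    No step here assigns meaning to the context; it is a pure lookup."""
    tokens, weights = zip(*model[context].items())
    return random.choices(tokens, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_ngram(corpus, n=2)
print(predict_next(model, ("the",)))  # e.g. 'cat' or 'mat'
```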