@jarkman When I ask people to do things in natural language, it often fails miserably (they do the wrong thing). You must be interacting with very capable people, or your natural language is very precise and unambiguous.
@rspfau @jarkman
Or, third option, you and your audience share sufficient context to resolve the ambiguities or at least make the right interpretation easy to infer.
This is a feature no LLM is ever likely to possess, as that would require its training set to mirror your own experience and training, and it also requires the kind of generalized, informed judgement that humans routinely exercise but LLMs purely suck at. Statistical next-word prediction is no substitute for a mental model of actual meaning.
That LLMs routinely produce utterly confident, wrong results is one of their core dangers. Humans at least know (mostly) to say, "I'm really not sure about this," before speculating.