@rspfau @jarkman
Or, as a third option, you and your audience share enough context to resolve the ambiguities, or at least to make the right interpretation easy to infer.
This is a feature no LLM is ever likely to possess: it would require a training set that mirrors your own experience and training, plus the kind of generalized, informed judgement that humans exercise routinely but LLMs purely suck at. Statistical next-word prediction is no substitute for a mental model of actual meaning.
That LLMs routinely produce utterly confident but wrong results is one of their core dangers. Humans at least (mostly) know to say, "I'm really not sure about this," before speculating.
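To make that concrete, here's a minimal sketch of what "statistical next-word prediction" means in practice, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (the prompt is just an illustration):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Assumption: the "gpt2" checkpoint is available locally or downloadable.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# Everything the model "says" next comes from this one probability
# distribution over its vocabulary -- there is no separate channel
# for "I'm not sure about this."
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode([tok.item()])!r:>12}  p={p.item():.3f}")
```

The top candidates look equally "confident" whether the model has strong evidence behind them or none at all.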
@n1xnx @rspfau Sure, LLMs are terrible at many things right now. Maybe they'll never get better, or maybe future machines will come to learn the world more richly and deeply.