Apostolis

@david_chisnall

Humans are able to recover the meaning of an ambiguous text.

A new version of a programming language, on the other hand, can leave an existing program unable to compile: the compiler can no longer recover the meaning of the program.

Tolerating ambiguity thus leads to antifragility: we need to be able to recover from a certain number of errors.

3 comments
ohad

@apostolis @david_chisnall These issues seem different to me. The issue here is not ambiguity (one program meaning multiple things) but backwards incompatibility, something natural languages suffer from as well (older writings are archaic at best, if not unreadable to the lay person).

In theory, one can write a compiler that translates valid programs in an older version of the language into programs in a newer version. I have used such compilers in the past. In practice, it's too much work to keep such a translator current as the language evolves, so instead we manually update each program, or try to make changes backward compatible.
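A toy sketch of the idea (not how real tools work: translators like Python's 2to3 or Go's `go fix` rewrite a full syntax tree, and the construct chosen here is just an example):

```python
# Toy source-to-source migration: rewrite Python 2's `print x` statement
# into the Python 3 call `print(x)`. A regex is enough for a sketch;
# a real translator would parse the program and rewrite its AST.
import re

PRINT_STMT = re.compile(r"^(\s*)print\s+(?!\()(.+)$")

def migrate(source: str) -> str:
    """Rewrite each old-style print statement into a function call."""
    return "\n".join(
        PRINT_STMT.sub(r"\1print(\2)", line) for line in source.splitlines()
    )

print(migrate('print "hello, world"'))  # -> print("hello, world")
```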

Apostolis

@ohad @david_chisnall

Let me rephrase what I am trying to say.

Ambiguity exists in the natural world anyway. Human beings have the ability to understand the world and to create formal systems that are (more or less) precise.

Part of what we are, our intelligence, is the ability to handle ambiguity.

(Also, a cell can tolerate multiple mutations in its DNA without failing.)

Computers, on the other hand, can't do that at the moment! Is there a solution to this?

David Chisnall (*Now with 50% more sarcasm!*)

@apostolis @ohad Ambiguity exists in the world. A computer executing an operation does not handle ambiguity, it expects to be told to execute a sequence of steps from a finite state machine.

Something has to resolve the ambiguity. GUIs do this by presenting the set of available commands and requiring the person to learn them. Command lines (even rich ones like the INFORM interpreters other folks have brought up) do this by having a well-defined vocabulary and grammar and reporting errors if you deviate from this. Both of these approaches have advantages and disadvantages but they both have the common property that the human is responsible for resolving the ambiguity. Both also provide the human with feedback on how to resolve the ambiguity.

If you try to support natural language, you are moving that requirement into the machine. This removes agency from the human. Rather than having to be explicit about what you want, you grant the computer greater freedom.

For issuing instructions to another human, this is useful: the other human is an intelligent being with agency and may be able to solve the problem in better ways than you expected (or worse: ‘Will no one rid me of this turbulent priest?’). If this is a work context, a lot of team building exercises (and, in military contexts, a lot of drill and manoeuvres) are intended to ensure that you have a common frame of reference that ensures that the people giving and receiving instructions will do so in the same way.

Back to the computer, it does not have a model of how you think. The common pitch for how you use an LLM for this is to prompt it to consume a text stream from a human and emit JSON or similar for a rule-based system to execute. The LLM has no understanding of the rule-based system and no theory of mind. It has a latent space that defines a notion of similarity based on proximity in this n-dimensional space and produces output based on that notion. This is not something that a human can develop an intuition for (why does a one pixel change turn your classifier from tagging an image as cat to tagging it as dog?). There is no defined vocabulary of commands, there is only a well-tested part of the space. If you stick to that, it will probably work but you have something less powerful than just exposing the underlying rule-based system via a GUI or command line. If you stray from that, the interpretation of your inputs will diverge in surprising ways.
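A minimal sketch of that pitch, assuming a stand-in `call_llm` function (hypothetical, not any real API): the model emits text that is merely hoped to be JSON, and the only guard is validation after the fact.

```python
# Sketch of the text -> LLM -> JSON -> rule-based executor pattern.
# `call_llm` stands in for a real model call; here it just returns a
# canned response so the example is self-contained.
import json

ALLOWED_ACTIONS = {"create", "delete", "rename"}

def call_llm(prompt: str) -> str:
    return '{"action": "rename", "target": "draft.txt", "to": "final.txt"}'

def interpret(user_text: str) -> dict:
    raw = call_llm(f"Emit a JSON command for this request: {user_text}")
    command = json.loads(raw)  # may raise: nothing guarantees the model emits valid JSON
    if command.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unsupported action: {command.get('action')!r}")
    return command  # only now handed to the rule-based executor

print(interpret("please rename my draft to final"))
```

The rule-based layer can reject malformed output, but it cannot tell the user why their phrasing drifted out of the well-tested part of the space.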

And if you can define a restricted set of commands that work, now you don’t have natural language. Now you have a command line (possibly a voice-controlled command line). You can write a grammar for it. Users can learn how to use it. Users get feedback when their commands are ambiguous and can express their intent more clearly. You empower users, rather than trying to remove agency from them and replacing it with inexplicable behaviour.
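A sketch of that restricted command line (command names invented for illustration): the vocabulary is explicit, and unknown or malformed input produces feedback rather than a guess.

```python
# Minimal command line with a fixed vocabulary: each verb has a known
# argument count, and any deviation is reported back to the user.
import shlex

COMMANDS = {"rename": 2, "delete": 1, "list": 0}  # verb -> argument count

def parse(line: str) -> tuple[str, list[str]]:
    words = shlex.split(line)
    if not words:
        raise ValueError("empty command")
    verb, args = words[0], words[1:]
    if verb not in COMMANDS:
        raise ValueError(f"unknown command {verb!r}; expected one of {sorted(COMMANDS)}")
    if len(args) != COMMANDS[verb]:
        raise ValueError(f"{verb!r} takes {COMMANDS[verb]} argument(s), got {len(args)}")
    return verb, args

print(parse("rename draft.txt final.txt"))  # ('rename', ['draft.txt', 'final.txt'])
```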
