Fish Id Wardrobe

@yogthos @failedLyndonLaRouchite@mas.to Personally, I have every doubt. The underlying technology is no more than a device for making sentences that sound like answers. It doesn't understand what it is saying; there's no "there" there.
This isn't intelligence, artificial or otherwise. It's the wrong approach if you want more than it's currently giving us.

Yogthos

@fishidwardrobe I think you have to take a broader perspective here. Fundamentally, GPT is a system that builds a model of the data it's exposed to and then makes predictions based on that model.

The problem with current approaches is that they simply feed text into this system without any context.

However, imagine if such a system were embodied either in a physical robot or a virtual avatar, and then taught the rules of the physical world the way we'd teach a child.
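
To make "builds a model of the data and then makes predictions" concrete, here is a minimal sketch, assuming a toy bigram counter rather than GPT's transformer; the names and corpus below are illustrative, and only the fit-then-predict shape carries over.

```python
# Toy illustration (not GPT): build a "model" of a corpus by counting which
# token follows which, then predict by sampling from those counts.
import random
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """For each token, count every token that follows it in the corpus."""
    tokens = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, token: str) -> str:
    """Sample a next token in proportion to how often it followed `token`."""
    counts = model.get(token)
    if not counts:
        return "<unk>"
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" about 2/3 of the time, "mat" 1/3
```

GPT replaces the counts with billions of learned transformer weights, but the loop is the same: fit a model to text, then predict the next token.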

Yogthos

@fishidwardrobe I would argue that at that point it would have real understanding in a human sense. It would create a predictive model of the physical world based on its interactions, and then we could start developing a common language with it.

There is nothing to suggest that there is anything fundamentally different happening when a human mind develops.
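
A minimal sketch of that idea, assuming a hypothetical tabular world model and a one-line toy environment; real embodied learning would involve continuous perception and far richer state, so treat every name below as made up for illustration.

```python
# Illustrative sketch: an agent builds a predictive model of its environment
# purely from its own (state, action, next_state) experience.
import random
from collections import Counter, defaultdict

class WorldModel:
    def __init__(self):
        # transitions[(state, action)] counts the observed next states
        self.transitions = defaultdict(Counter)

    def observe(self, state, action, next_state):
        self.transitions[(state, action)][next_state] += 1

    def predict(self, state, action):
        """Return the most frequently observed outcome, or None if unseen."""
        counts = self.transitions[(state, action)]
        return counts.most_common(1)[0][0] if counts else None

# Toy 1-D world: positions 0..4, moving left or right (clamped at the edges).
def step(pos, action):
    return max(0, min(4, pos + (1 if action == "right" else -1)))

model, pos = WorldModel(), 2
for _ in range(100):                        # act at random...
    action = random.choice(["left", "right"])
    nxt = step(pos, action)
    model.observe(pos, action, nxt)         # ...and record what happened
    pos = nxt

print(model.predict(2, "right"))            # almost certainly 3, learned from experience
```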

Fish Id Wardrobe

@yogthos That would be the completely different approach I was talking about, then, because the current approach is incapable of "learning like a child".

Yogthos

@fishidwardrobe Could you elaborate on that? Why do you say that GPT is incapable of learning the way a human child does?

Phil Johnston

@yogthos @fishidwardrobe what does “learning the way a human child does” mean? That statement could literally mean millions of different things depending on subtle context differences.

Yogthos

@johnstonphilip @fishidwardrobe I meant in a sense of building a predictive model of the environment through interaction.

Fish Id Wardrobe

@yogthos @johnstonphilip Well, it doesn't do that. It's really not much better in that respect than ELIZA – the difference is that, instead of just the prompt, it has gigabytes of text to work with, and it works through that text in a cleverer way. But it still has zero understanding. It's a "Chinese Room", broadly speaking.

There's plenty of information out there about how LLMs work. You might start by searching for the paper "On the Dangers of Stochastic Parrots".
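
For contrast, the whole of an ELIZA-style responder fits in a few lines: fixed patterns and canned reflections, with no model of meaning behind them. This is a sketch in the spirit of ELIZA, not Weizenbaum's original program.

```python
# ELIZA-style pattern matching: surface rules, zero understanding.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I),   "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please, go on."                 # default when nothing matches

print(respond("I am worried about AI"))     # How long have you been worried about AI?
```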
