Arina Artemis :nonbinary_flag:

@devopscats Proof #467198245 that LLMs don't actually understand anything, they're just fancy phrase generators.

9 comments
Trent Waddington

@arina @devopscats it's trained to produce statistically similar outputs to its training data. If they put more "anti-questions" in the training data, it'd produce more appropriate answers. What does "understanding" even mean? Suppose I asked this question of someone who had never seen a boat, or a goat; would they understand it? Most of us recognise the question as implying a row boat, but few of us have ever rowed a boat, and I bet none of us have tried to row a boat with a goat. Anyways, I'm going to eat some green eggs and ham.

Arina Artemis :nonbinary_flag:

@quantumg @devopscats What I mean is, they don't understand that there's a boat and a goat. The words aren't translated into concepts the way they would be in a human mind.

Trent Waddington

@arina @devopscats we don't have any idea how a human mind constructs concepts, let alone any particular human mind. LLMs do indeed translate words into "concepts", and we know this because their internals can be interrogated. If we give it a few different sentences about goats, there will be similar vectors across the different computations. What's more, we can intervene: change the goat vectors to look more like cat vectors and the output will be cat-related. Sentences about goats eating socks will become sentences about cats eating mice, even though we provided it nothing about mice, because it has the more likely relationship encoded in it.
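
The kind of intervention described here, often called activation steering, can be sketched in a few lines. This is a minimal sketch assuming a small open model (GPT-2 via Hugging Face transformers); the layer index, prompts, and the crude difference-of-means "direction" are illustrative choices, not the specific experiments being referenced.

```python
# Sketch of activation steering: compute a crude "cat minus goat" direction and
# add it to one layer's hidden states during generation. Model (GPT-2), layer
# index, prompts, and the difference-of-means direction are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
LAYER = 6  # arbitrary middle block

def last_token_hidden(text):
    """Hidden state of the final token at the chosen transformer block."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states
    # hs[0] is the embedding output, so block LAYER's output is hs[LAYER + 1].
    return hs[LAYER + 1][0, -1]

# Crude concept "directions": average a couple of sentences about each animal.
goat = torch.stack([last_token_hidden(s) for s in
                    ["The goat ate the sock.", "A goat stood in the field."]]).mean(0)
cat = torch.stack([last_token_hidden(s) for s in
                   ["The cat ate the mouse.", "A cat sat on the mat."]]).mean(0)
steer = cat - goat

def hook(module, inputs, output):
    # GPT-2 blocks return a tuple; the first element is the hidden states.
    hidden = output[0]
    hidden[:, -1, :] += steer  # nudge the newest token toward "cat"
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(hook)
try:
    prompt_ids = tok("The goat was hungry, so it", return_tensors="pt").input_ids
    out = model.generate(prompt_ids, max_new_tokens=20, do_sample=False)
finally:
    handle.remove()
print(tok.decode(out[0]))
```

Whether the continuation actually drifts toward cats depends on the layer and scaling chosen; the point is only that the internal vectors can be read and overwritten.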

Saffron🏳️‍⚧️

@quantumg @arina @devopscats this model is trained on more data than a human brain could ever possibly absorb within a human lifetime, and yet it still can't solve an incredibly simple logic puzzle. if your solution is to throw more data at the problem, which up to this point has clearly not worked, then you fundamentally misunderstand what this type of tool is useful for.

Trent Waddington

@spinach @arina @devopscats I don't know why you feel the need to dunk, but I suspect it's because you've been trained to behave that way. I'd argue that the average human being experiences an Internet's worth of data every few minutes. It's also in a social context that is constructed for us and evaluated by other people who are trained in the same context.

Saffron🏳️‍⚧️

@quantumg @arina @devopscats you can't really measure sensory inputs in terms of data. the brain discards most of its sensory input, and it's not something you can represent as a digital signal because everything is analog. the important takeaway is that humans are infinitely better at parsing language than a language model ever will be, for a fraction of the energy cost. full stop.

the power of the human mind is in its ability to discard irrelevant information. LLMs can't do that, period.

Trent Waddington

@spinach @arina @devopscats I don't think anyone on Earth is qualified to make the statements you're making right now.

Saffron🏳️‍⚧️

@quantumg @arina @devopscats oh, so my argument is irrelevant because you haven't deemed me qualified? sorry Mr. Official Knowledge Certification Board. i know that truth and facts inconvenience you so. keep believing you can train the LLM out of its fundamental design issues and see how far it takes you.

Trent Waddington

@spinach @arina @devopscats again. You're dunking. Why are you like this? No-one knows enough about these systems or the human brain to be as certain as your statements indicate. I would say the same thing if you were talking in absolutes about cosmology. We just don't know yet.
