@devopscats Proof #467198245 that LLMs don't actually understand anything, they're just fancy phrase generators.
@quantumg @devopscats What I mean is, they don't understand that there's a boat and a goat. It is not translated into concepts the way a human mind would.

@quantumg @arina @devopscats this model is trained on more data than a human brain could ever possibly absorb within a human lifetime, and yet it still can't solve an incredibly simple logic puzzle. if your solution is to throw more data at the problem, which up to this point has clearly not worked, then you fundamentally misunderstand what this type of tool is useful for.

@spinach @arina @devopscats I don't know why you feel the need to dunk, but I suspect it's because you've been trained to behave that way. I'd argue that the average human being experiences an Internet's worth of data every few minutes. It's also in a social context which is constructed for us and evaluated by other people who are trained in the same context.

@quantumg @arina @devopscats you can't really measure sensory inputs in terms of data. the brain discards most of its sensory input, and it's not something you can represent as a digital signal because everything is analog. the important takeaway is, humans are infinitely better at parsing language than a language model ever will be, for a fraction of the energy cost. full stop. the power of the human mind is in its ability to discard irrelevant information. LLMs can't do that, period.

@spinach @arina @devopscats I don't think anyone on Earth is qualified to make the statements you're making right now.

@quantumg @arina @devopscats oh, so my argument is irrelevant because you haven't deemed me qualified? sorry, Mr. Official Knowledge Certification Board. i know that truth and facts inconvenience you so. keep believing you can train the LLM out of its fundamental design issues and see how far it takes you.

@spinach @arina @devopscats again. You're dunking. Why are you like this? No one knows enough about these systems or the human brain to be as certain as your statements indicate. I would say the same thing if you were talking in absolutes about cosmology. We just don't know yet.
@arina @devopscats it's trained to produce statistically similar outputs to its training data. If they put more "anti-questions" in the training data it'd produce more appropriate answers. What does "understanding" even mean? Suppose I asked this question to someone who had never seen a boat, or a goat, would they understand it? Most of us recognise the question as implying a row boat, but few of us have ever rowed a boat, and I bet none of us have tried to row a boat with a goat. Anyways, I'm going to eat some green eggs and ham.
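To make "statistically similar outputs" concrete, here's a deliberately tiny sketch: a bigram model fit on a made-up string. Everything in it (`training_text`, `generate`, the toy corpus) is illustrative and not how a real LLM works, but it shows the basic idea of a model reproducing the word statistics of its training data without any notion of a boat or a goat:

```python
import random
from collections import defaultdict, Counter

# Toy bigram "language model": it only reproduces statistics of its
# training text. Illustrative corpus, not real training data.
training_text = (
    "a man and a goat are on one side of a river "
    "they have a boat how can they cross"
)

# Count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def generate(start, length=10):
    """Sample a continuation by following observed bigram frequencies."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # dead end: word never seen mid-sentence
            break
        options, weights = zip(*followers.items())
        out.append(random.choices(options, weights=weights)[0])
    return " ".join(out)

print(generate("a"))
```

Add more "anti-question" examples to `training_text` and the counts shift, so the generated answers shift too; at no point does anything resembling a concept of a boat or a goat enter the picture.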