Remença

@donnodubus @sjuvonen @tinker

That's bullshit. LLMs of sufficient size, paired with enough data, are universal approximators, which means that conceptually it is possible. The only catch is the cost, and that we do not know whether we have enough data. But conceptually I do not know why a machine should be unable to surpass any human in any intellectual task.
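
(For context: the universal approximation theorem being invoked here, in its classical Cybenko/Hornik form, guarantees only that for any continuous $f : K \to \mathbb{R}$ on a compact $K \subset \mathbb{R}^n$, and a suitable non-polynomial activation $\sigma$, a wide enough one-hidden-layer network can approximate $f$ to any accuracy; it says nothing about whether training actually finds those weights:

$$\forall \varepsilon > 0 \;\; \exists\, N,\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n \;:\;\; \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon.)$$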

Remença

@donnodubus @sjuvonen @tinker

And they can, and are, currently solving the hallucination problem. As a matter of fact, the latest techniques have cut hallucinations by something like 70% on the benchmarks; I can find the paper for you if you want to read it. How much of that improvement will be preserved in real life I don't know, but they are fixing the problem, contrary to what you state.

Donnodubus

@remenca @sjuvonen I didn't say a machine could never match human intelligence, I said an LLM can't.

An LLM has no intelligence.

The simple fact of what it is and how it works means it will never stop "hallucinating," no matter how much processing power or data you throw at it.

Remença

@donnodubus @sjuvonen

This is mathematically incorrect and demonstrates a lack of understanding of what universal approximation means.

Donnodubus

@remenca All an LLM does is resynthesize content from its training set that corresponds to words in the query it receives.

It understands nothing. It does no reasoning. It can't even use a calculator or look things up in a database, which much simpler and lower-powered machines are able to do.

LLMs are incapable of intelligence BY DESIGN. They are literally not AI at all:

link.springer.com/article/10.1
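
(For context: the inference-time behaviour under dispute here is repeated next-token prediction. A minimal sketch of that loop, assuming a hypothetical `model` callable that maps the token sequence so far to a probability distribution over the vocabulary, looks like this:

```python
# Minimal sketch of the autoregressive decoding loop an LLM runs at
# inference time. `model` is a hypothetical callable standing in for
# the real forward pass: it maps the token sequence so far to a
# probability distribution over the vocabulary.
import random

def generate(model, prompt_tokens, max_new_tokens=32, eos_token=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)  # P(next token | tokens so far)
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)
        if next_token == eos_token:  # stop at end-of-sequence
            break
    return tokens
```

Whether repeating that loop can constitute "reasoning" is precisely what the two posters disagree about.)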

Remença

@donnodubus

I am sorry, mate, but all this article does is frame the problem of hallucinations (which is being solved as we speak) as one guy's interpretation of what counts as "bullshit". It does not talk about scaling laws, nor approximation, nor PAC learning, nor anything of the sort. Please do not embarrass yourself by citing articles you do not understand.
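
(For context: PAC learnability, roughly in Valiant's formulation, requires an algorithm $A$ that, given $m$ i.i.d. samples $S \sim D^m$ with $m$ polynomial in $1/\varepsilon$ and $1/\delta$, satisfies

$$\Pr_{S \sim D^m}\!\big[\operatorname{err}_D(A(S)) \le \varepsilon\big] \ge 1 - \delta \quad \text{for all } \varepsilon, \delta \in (0,1) \text{ and all distributions } D,$$

where $\operatorname{err}_D(h) = \Pr_{x \sim D}[h(x) \ne c(x)]$ for the target concept $c$.)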

Donnodubus replied to Remença

@remenca LLMs don't do any reasoning in the first place, so what they do can't be scaled up into "intelligence".

Pretty simple!
