That's bullshit. LLMs of sufficient size, paired with enough data, are universal approximators, which means it is conceptually possible. The only catches are the cost and the fact that we don't know whether we have enough data. But conceptually, I see no reason why a machine should be unable to surpass any human at any intellectual task.
@donnodubus @sjuvonen @tinker
And the hallucination problem can be solved, and is currently being worked on. In fact, the latest techniques have cut hallucinations by something like 70% on the benchmarks; I can find the paper for you if you want to read it. How much of that improvement carries over to real-world use, I don't know, but they are fixing the problem, contrary to what you claim.