@sjuvonen @remenca @tinker LLMs are conceptually incapable of delivering on their promises and the hopes people have been fooled into placing in them.
They can't solve basic issues like "hallucinations" because they are just not designed to actually know or understand anything. They are a fundamentally useless parlor trick.
@donnodubus @sjuvonen @tinker
That's bullshit. LLMs of sufficient size paired with enough data are universal approximators, which means that it is conceptually possible. The only catches are the cost and that we do not know whether we have enough data. But in principle I see no reason why a machine should be unable to surpass any human in any intellectual task.
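The universal-approximation claim above can be illustrated concretely: even a single-hidden-layer network, given enough units, can fit an arbitrary continuous function on a bounded interval. The sketch below (my own toy example, not from this thread; all names and hyperparameters are assumptions) trains a tiny tanh network with hand-written gradient descent to approximate sin(x):

```python
import numpy as np

# Toy demonstration of universal approximation:
# a 1-32-1 tanh network fit to sin(x) by full-batch gradient descent.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(scale=1.0, size=(1, hidden))   # input -> hidden weights
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 0.1
for _ in range(10000):
    h = np.tanh(x @ W1 + b1)       # hidden activations
    pred = h @ W2 + b2             # network output
    err = pred - y                 # residual against target
    # Backpropagation written out by hand for the two layers.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)  # far below 0.5, the error of always predicting zero
```

Of course, this only shows representational capacity on a toy function; it says nothing by itself about the cost and data questions the reply raises.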