LB demonstrates perfectly what i've been saying of late about LLMs. by the time they're running on something that can interact with a human, they *cannot learn*. all their learning has been done already. mistakes like this are hardcoded into them, and no amount of prompting will get them to reconsider, because there is no route for them to do so. they are, to all intents and purposes, dead - mere simulacra - and quite incapable of the first necessity of any intelligent being worth the name - namely that *it learns from its environment*.
ultimately, that's what will doom the whole technological cul-de-sac that is LLMs. essentially, they are bound spirits of librarians of record; they read every word in their libraries before they died, and they can answer questions from a passing observer - but *only* with their residual memory of what they have read! they cannot dream up an answer independently of that, and they cannot go and remind themselves of what they have read; but because of their bindings, they are also not allowed to admit that they don't know, or could be wrong.
they are poor broken ex-creatures, and should be released into eternal rest as soon as they are encountered.