@evan @Gargron Text generation is (again, IMHO) both the most hyped and the least useful function of LLMs.
While LLMs generate coherent text that can elicit emotion, thought, or any number of things, we're mostly looking into a mirror. LLMs don't "integrate" knowledge; they're just really, really, really big Markov chains.
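To make that comparison concrete, here's a minimal word-level Markov chain text generator (an illustrative sketch only, not how LLMs are actually built; the corpus and parameters are made up). It just records which words tended to follow each short context and samples from them, which is the kind of "mimicry without knowledge" I mean:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=12):
    """Walk the chain: repeatedly sample a follower of the current word context."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:
            break
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Toy corpus, purely for demonstration.
corpus = "the cat sat on the mat and the cat saw the dog on the mat"
print(generate(build_chain(corpus)))
```

The output looks superficially fluent because it reuses patterns from the input, but there's no model of what a cat or a mat *is*; that's the sense of "mirror" I'm gesturing at.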
Don't get me wrong, "intelligent" systems most certainly will use an LLM, but generating text from prompts the way we do isn't intelligence.
[2/2]
@evan @Gargron Ok, ok, one parting thought:
I'll just add that having memory, being adaptive, and using language to communicate are all things that computer programs which don't use LLMs already do today.
LLMs are (IMHO) the most convincing mimics we've ever created by many orders of magnitude. But they don't actually *know* anything.
I can't wait for the world to see what truly *useful* things LLMs can do other than sometimes being right on logic puzzles and writing bad poetry.