Joseph Szymborski :qcca:

@evan @Gargron Ok, ok, one parting thought:

I'll just add that having memory, being adaptive, and using language to communicate are all things that computer programmes that don't use LLMs do today.

LLMs are (IMHO) the most convincing mimics we've ever created by many orders of magnitude. But they don't actually *know* anything.

I can't wait for the world to see what truly *useful* things LLMs can do other than be sometimes right on logic puzzles and write bad poetry.

Joseph Szymborski :qcca:

@evan @Gargron Ya, I think that's the heart of the question :)

What I'm trying to communicate is that when I ask an LLM "what is on the inside of an orange", the programme isn't consulting some representation of the concept of "orange (fruit)". Rather, it's looking at all the likely words that would follow your prompt.

If you get a hallucination from that prompt, we think it made an error, but really the LLM is doing its job: producing plausible words. My bar for intelligence is personally higher.
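
The "looking at likely next words" mechanism described above is easy to observe directly. Below is a minimal sketch, assuming the Hugging Face transformers library and its small GPT-2 model (both purely illustrative, not anything referenced in the thread); it prints the model's most probable next tokens for an orange-style prompt, which is all the model ever computes:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal language model behaves the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The inside of an orange is"  # stand-in for the question in the post
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire output: a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  p={prob.item():.3f}")

Whether the continuation turns out true or a "hallucination", the computation is identical: rank the vocabulary by plausibility and emit a token.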
