Evan Prodromou

@Gargron come now! This overstates our current knowledge of the nature of intelligence. LLMs are adaptive, they have memory, they use language to communicate, and they integrate disparate experiences to solve problems. They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps. Us knowing how they work is not a disqualifier for them being intelligent.

Kevin Marks

@evan @Gargron that paper cites a definition of intelligence by racist eugenicists, and doesn't have any actual controls, only vibes. It is worth watching/listening to, as is the linked Radiolab series on measuring intelligence

Jacky Alciné

@evan @KevinMarks @Gargron checking the citations of that paper would be the most immediate one

Evan Prodromou

@jalcine @KevinMarks @Gargron I see a lot of discussion of the Gottfredson definition of intelligence, which was removed. I've only read parts of the most recent version, which says "there's no generally agreed definition of intelligence." Which I think is still true, although I am not an expert in this field.

Bike Shed

@evan ranking LLMs over dolphins or chimps for "intelligence" is pure anthropocentrism.

Evan Prodromou

@bikeshed I don't think that there's a ranking of intelligence on a single scale. But LLMs are better at, say, language use than chimps are.

Bike Shed

@evan but again, this is anthropocentric. You're defining language as language that is intelligible to humans and then saying that the tool designed by humans to output human language is better at human language than chimps! It's a silly game that plays into this very stratified view of what constitutes intelligence.

I certainly think that ranking LLMs over dolphins, whose communication we barely understand, seems very bizarre.

Bike Shed

@evan additionally, why is language use a more defining characteristic of intelligence than tool use? Chimps, bonobos, dolphins, octopuses, corvids, etc. can all use tools and solve complex tasks but aren't good at language (by our definition of language). Does this matter?

Evan Prodromou

@bikeshed I am not! I think you should go back and reread my post with fresh eyes. I said that LLMs do better on some of the measures of intelligence than chimps and dolphins. I didn't say that they are more intelligent than those animals, nor did I say that the measures of intelligence they excel at are more important than the intelligence necessary to survive in the world.

Bike Shed

@evan agree to disagree, but I struggle to read "They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps" in another way than a kind of ranking.

Matt Hodges

@evan @Gargron Not too long ago — in fact, roughly a year or two ago — "Artificial Intelligence" was a term used to describe computer systems which could perform tasks that historically required human cognition. Few people were offended that Chess or Go-playing systems were considered "AI" and "real intelligence" was never a requirement. But, as we see time and time again, "AI is whatever hasn't been done yet."

en.wikipedia.org/wiki/AI_effec

Matt Hodges

@evan @Gargron I think it's historically incorrect to say that, "technically calling it AI is buying into the marketing". Yes, marketing is capitalizing on it! But the nomenclature matches my CS education from the late 2000s and it matches 70 years of how "AI" is used in research and literature. The recent obsession with asserting "theory of mind" or "intentions" or "originality" or "real intelligence" seems, well, recent.

Evan Prodromou

@MattHodges @Gargron I think there are a lot of things GPT-4 is bad at. It's not very good at simple arithmetic. It's bad at geographical information -- which places are near others, or are parts of each other. It also does a bad job at string manipulation -- finding words that start with a particular letter, or words that are anagrams of other words. I don't think you have to resort to mysticism to say why it is not yet human-equivalent. But that doesn't mean it's not intelligent.

Matt Hodges

@evan

Yes, and...!

> It's not very good at simple arithmetic.

This is a recurrent example that is starting to illustrate the difference between bare LLMs and the products built on top of them. E.g., ChatGPT is a product built on top of a system. That system has a lot of components. One of those components is an LLM. And another component is a Python interpreter. LLMs can write Python quite well, and Python can do math quite well.

Seems like a pretty intelligent system to me!
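The division of labor described above can be sketched as a toy tool-use loop. Everything here is hypothetical scaffolding: `llm_write_code` is a hard-coded stand-in for a real model call, not any actual API.

```python
# Minimal sketch of an "LLM + interpreter" system. The LLM component is
# faked with a stub; the interpreter component is real Python exec().

def llm_write_code(question: str) -> str:
    """Stand-in for an LLM that translates a question into Python code."""
    # A real LLM would generate this; we hard-code one plausible answer.
    return "result = 123456789 * 987654321"

def run_tool(code: str) -> int:
    """The interpreter component: executes the generated code exactly."""
    namespace = {}
    exec(code, namespace)  # Python does the arithmetic, not the LLM
    return namespace["result"]

answer = run_tool(llm_write_code("What is 123456789 * 987654321?"))
```

The point of the sketch: the model only has to produce plausible code, and the interpreter supplies the exact arithmetic the bare model tends to fumble.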

Bill Plein🌶

@evan @Gargron Based on my understanding, LLMs are trained and fixed models. The amount of “memory” they have is a fraction of the training data. They can retain some context from conversations but the model itself doesn’t change. Looked at this way, LLMs can’t learn to think differently. You have to distill a new LLM.
(1/2)

Bill Plein🌶

@evan @Gargron 2/2

The current LLMs are literally statistical models, distilled down to map a vast amount of training data into a very small amount of code (with embedded words) that meets the goals of the humans who created it.

When an LLM “learns” from a conversation, it’s just adding new words (rasterized in their context) to the heap. It doesn’t change the mapping/model. That’s been hard coded by the humans who developed the model.
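Bill's distinction between the frozen model and the growing conversation can be made concrete with a toy class. All names here are illustrative, not any real library's API.

```python
# Sketch: a conversation changes the context window, never the weights.

class FrozenLLM:
    def __init__(self, weights):
        self.weights = tuple(weights)   # fixed at training time (immutable)
        self.context = []               # per-conversation, discardable

    def chat(self, message: str) -> None:
        # "Learning" within a chat is just appending to the context;
        # the weights are untouched. Changing them requires retraining
        # and publishing a new model.
        self.context.append(message)

model = FrozenLLM(weights=[0.12, -0.34, 0.56])
before = model.weights
model.chat("Remember that my name is Bill.")
model.chat("The capital of France is Paris.")
assert model.weights == before   # the model itself did not change
assert len(model.context) == 2   # only the conversation grew
```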

Bill Plein🌶

@evan @Gargron 3/2 (I lied)

Summarized, ChatGPT-x doesn’t get to ChatGPT-(x+1) without the humans learning, applying that knowledge to a new model, training that model by burning down a forest or 3, and then publishing the new distillation.

I could be wrong but that’s my understanding.

Fifi Lamoura

@evan Hey, you may be unaware of the actual problem solving, social lives and intelligence of dolphins; they're far more adaptive to reality than an LLM is. And LLMs don't have experiences: that's projecting human sensory capabilities onto them that they simply don't have, since they're not embodied (experiences are far more than memories, and they don't just live in narratives/texts/memories... see current research into PTSD and memory, for instance). @Gargron

Evan Prodromou

@fifilamoura @Gargron thanks, and that's a fair point. I'd say LLMs are better at other intelligence metrics, like language use.

Joseph Szymborski :qcca:

@evan @Gargron I'd have to disagree. LLMs are primarily used for two things, parsing text, and generating text.

The parsing functions of LLMs are truly incredible, and represent (IMHO) a generational shift in tech. But the world's best regex isn't intelligence in my book, even if it parses semantically.

[1/2]

Joseph Szymborski :qcca:

@evan @Gargron The generating functions of LLMs are (again, IMHO) both the most hyped and least useful function of LLMs.

While LLMs generate text that is coherent, that can elicit emotion or thought or any number of things, we're mostly looking into a mirror. LLMs don't "integrate" knowledge; they're just really, really, really big Markov chains.

Don't get me wrong, "intelligent" systems most certainly will use an LLM, but generating text from prompts the way we do isn't intelligence.

[2/2]
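The "really big Markov chains" analogy can be illustrated with a toy word-level Markov chain: pick the next word from counts of what followed the current word in the training text. Real LLMs condition on vastly longer contexts through learned weights, so this is only a caricature of the analogy, not of an actual LLM.

```python
# Toy word-level Markov chain: the "plausible next word" view in miniature.
from collections import defaultdict
import random

def train(text):
    """Map each word to the list of words observed to follow it."""
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n, rng):
    """Walk the chain: repeatedly sample a word seen after the last one."""
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the last word was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

table = train("the cat sat on the mat the cat ran on the grass")
print(generate(table, "the", 5, random.Random(0)))
```

The output is locally plausible word-by-word, with no representation of what a cat or a mat *is*, which is exactly the property the analogy is pointing at.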

Joseph Szymborski :qcca:

@evan @Gargron Ok, ok, one parting thought:

I'll just add that having memory, being adaptive, and using language to communicate are all things that computer programmes that don't use LLMs do today.

LLMs are (IMHO) the most convincing mimics we've ever created by many orders of magnitude. But they don't actually *know* anything.

I can't wait for the world to see what truly *useful* things LLMs can do other than be sometimes right on logic puzzles and write bad poetry.

Joseph Szymborski :qcca:

@evan @Gargron Ya, I think that's the heart of the question :)

What I'm trying to communicate is that when I ask an LLM "what is on the inside of an orange", the programme isn't consulting some representation of the concept of "orange (fruit)". Rather, it's looking at all the likely words that would follow your prompt.

If you get a hallucination from that prompt, we think it made an error, but really the LLM is doing its job: producing plausible words. My bar for intelligence is personally higher
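The "likely words that would follow your prompt" framing can be sketched as sampling from a next-word distribution. The distribution below is invented purely for illustration; a real model derives its probabilities from learned weights and the full prompt.

```python
# Sketch: next-word choice as sampling from a probability distribution.
import random

# Invented P(word | "the inside of an orange is ..."), for illustration only.
next_word_probs = {
    "juicy": 0.5,
    "segmented": 0.3,
    "blue": 0.2,   # fluent-sounding but wrong: a "hallucination"
}

def sample_next(probs, rng):
    """Pick a word with probability proportional to its score."""
    r = rng.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point edge cases

word = sample_next(next_word_probs, random.Random(1))
```

On this view, sampling "blue" is not a malfunction: the procedure did exactly what it always does, which is pick a plausible continuation.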

Alan Sill

@evan I agree with @Gargron on this. All we have at this point is predictive statistics, which combined with re-labeling of long-standing (and sometimes valuable) methods of machine learning and pattern recognition are creating the illusion that artificial intelligence actually exists. The greatest danger in my view associated with AI now is that people will believe that it exists.

Old ML joke: “Just because your friends jump off a cliff, will you?”
ML: “Of course!”

