Hector Martin

LRT: I think this really illustrates just how dumb present AI systems are. They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on.

Effectively, AIs like Stable Diffusion and ChatGPT know how to extrapolate and interpolate from their training data. Sure, it looks cool, and it feels intelligent because they're riffing off of a corpus of material produced by actually intelligent humans. But give them a problem they haven't seen before, or for which the obvious "extrapolated" solution is just hilariously and obviously wrong, and they'll show you just how dumb they are. They also have no concept of logic or facts, so there is no expectation of accuracy - an AI won't tell you it doesn't know how to do something, it'll just make up some BS.

Another way to put it is that AI models are just fancy generalized (very) lossily compressed versions of their training inputs. Think about that next time the copyright implications of AI come up again.

5 comments
Dr. Quadragon ❌

@marcan

> They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on

I am feeling extremely seen by this post. Sorry, I will just quietly retreat into my corner.

Dr. Quadragon ❌

@marcan Seriously, though. How in the slightest does that differ from what we, humanimals, do?

All we do is also take behaviours and pieces of information (a.k.a. memes) as sensory input, memorize them, train on them, transforming raw input into experience (it's called learning), combine the inputs, compare them, transform, recurse on it, many, many times, and then produce some output, which we then call "reasoning". Or "art" if nobody seems to buy into it. Or "culture" as an umbrella term.

Have you seen the "Everything is a remix" series? This is true to an uncomfortable degree for some.

I'm fine with it. Whatever. There's no golden pot at the end of the rainbow, because a rainbow is not a bow, but actually a circle, and we're looking at it the wrong way. Everything that exists, works somehow. We do too.

Hector Martin

@drq Just look at the failure modes to understand how it's different. There's no higher reasoning with current AIs. No common sense, no ability to solve novel problems even when the solution is obvious.

Maybe we just need deeper networks, who knows. But we're definitely not there yet, not anywhere close.

Bornach

@marcan @drq
Some of ChatGPT's so-called "failure modes" remind me of similar failure modes in humans

ChatGPT arguing that a movie not yet released before its 2022 training cut-off therefore couldn't have been released in 2023 reminded me of exchanges I've had arguing politics on birdsite. They weren't interested in arriving at some agreed truth, only in making an argument-winning tweet

Brain science already acknowledges this very human characteristic
nytimes.com/2011/06/15/arts/pe

Everybody loves Gordo

@drq @marcan It's less about the "repeating its training data" and more about the fundamentals. It's not really taught to reason. It's just predicting the probability of the next word based on all the previous words, so it's missing a lot of the circuitry we've got going on. It's really not an apples-to-apples comparison.

That being said, it's also the most advanced version of that sort of AI anyone's ever seen, and it's only going to get more advanced, and I...
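The "predicting the next word from the previous words" idea in the comment above can be illustrated with a toy sketch. All probabilities and vocabulary here are made up for illustration; a real language model learns a distribution over tens of thousands of tokens from its training corpus, but the loop is conceptually the same: score the next token given the context, pick one, append, repeat.

```python
import random

# Hypothetical learned conditional probabilities P(next word | last two words).
# A real model computes these with a neural network, not a lookup table.
MODEL = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
}

def next_word(context, greedy=True):
    """Pick the next word from the distribution conditioned on the context."""
    dist = MODEL.get(tuple(context[-2:]))
    if dist is None:  # context never seen: the toy model has nothing to say
        return None
    if greedy:
        return max(dist, key=dist.get)  # always take the most probable word
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]  # sample instead

def generate(context, steps=3):
    """Repeatedly append the predicted next word -- no reasoning, just scoring."""
    out = list(context)
    for _ in range(steps):
        w = next_word(out)
        if w is None:
            break
        out.append(w)
    return out

print(generate(["the", "cat"]))  # -> ['the', 'cat', 'sat', 'on', 'the']
```

Note that nothing in the loop checks facts or logic; the model only ever asks "what word is likely here?", which is why fluent-looking output and accuracy are two different things.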
