LRT: I think this really illustrates just how dumb present AI systems are. They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on.
Effectively, AIs like Stable Diffusion and ChatGPT know how to extrapolate and interpolate from their training data. Sure, it looks cool, and it feels intelligent, because they're riffing off a corpus of material produced by actually intelligent humans. But give them a problem they haven't seen before, or one where the obvious "extrapolated" answer is hilariously and obviously wrong, and they'll show you just how dumb they are. They also have no concept of logic or facts, so there is no expectation of accuracy: an AI won't tell you it doesn't know how to do something, it'll just make up some BS.
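The interpolation-vs-extrapolation point has a classic toy analogy (this is an illustration of curve fitting, not of how transformers actually work): fit a simple model to samples from a function, then query it inside and outside the range it was fitted on. The fit looks great where it has "seen" data and falls apart where it hasn't, with no built-in signal that it's out of its depth.

```python
import numpy as np

# Toy analogy only: fit a cubic polynomial to sin(x) sampled on
# [0, pi], then compare interpolation (inside the training range)
# with extrapolation (far outside it).
x_train = np.linspace(0, np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

# Inside the training range, the fit looks "smart"...
x_in = np.linspace(0, np.pi, 200)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))

# ...but far outside it, the "obvious" extrapolation is wildly
# wrong, and nothing about the output flags the failure.
err_out = abs(np.polyval(coeffs, 10.0) - np.sin(10.0))

print(f"interpolation max error: {err_in:.4f}")    # small
print(f"extrapolation error at x=10: {err_out:.1f}")  # huge
```

The polynomial never "understands" sine; it just reproduces the local shape of the data it was fitted to, and it answers out-of-range queries with the same confidence as in-range ones.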
Another way to put it: AI models are just fancy, generalized, (very) lossy compressed versions of their training inputs. Think about that next time the copyright implications of AI come up.
@marcan
> They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on
I am feeling extremely seen by this post. Sorry, I will just quietly retreat into my corner.