@MattHodges @Gargron I think there are a lot of things GPT4 is bad at. It's not very good at simple arithmetic. It is bad at geographical information -- what places are near others, which are parts of each other. It also does a bad job at string manipulation -- words that start with a particular letter, or words that are anagrams of other words. I don't think you have to resort to mysticism to explain why it is not yet human-equivalent. But that doesn't mean it's not intelligent.
@evan
Yes, and...!
> It's not very good at simple arithmetic.
This is a recurrent example that is starting to illustrate the difference between bare LLMs and the products built on top of them. E.g., ChatGPT is a product built on top of a system. That system has a lot of components. One of those components is an LLM. Another component is a Python interpreter. LLMs can write Python quite well, and Python can do math quite well.
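A minimal sketch of that division of labor, with `llm_generate_code` as a hypothetical stand-in for a real model call -- the model's only job is to translate the question into code, and the interpreter does the arithmetic:

```python
def llm_generate_code(question: str) -> str:
    # Stand-in for an actual LLM call; a real model would translate
    # the natural-language question into a snippet like this one.
    return "result = 12345 * 6789"

def answer_with_tools(question: str) -> int:
    code = llm_generate_code(question)
    namespace: dict = {}
    exec(code, namespace)  # the interpreter, not the model, does the math
    return namespace["result"]

print(answer_with_tools("What is 12345 times 6789?"))  # -> 83810205
```

The point is the architecture, not this toy: the system's arithmetic is exactly as reliable as Python's, even if the bare LLM's isn't.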
Seems like a pretty intelligent system to me!