Listening to very smart people talk about #GPT4 I'm reminded of the joke about a checkers-playing dog.
A guy has a dog that plays checkers. "My goodness," everyone says, "that's amazing. What a brilliant dog!"
"Not really," he replies, "I beat him four games out of five."
That's GPT4. Its capacities are amazing and completely unexpected.
But it's also deeply limited. You shouldn't back the dog in a checkers tournament, and you shouldn't use an LLM as a medical assistant or for many other tasks.
I feel like this joke captures a great deal of the dialectic that I see here and in the broader discussion around LLMs and AI right now.
I spend most of my time writing about how baffled I am watching Microsoft and Google bet their futures—and to a degree, ours—on this dog's performance in the World Checkers Championship.
Other colleagues are legitimately amazed that the dog can play checkers at all, and want to understand how well it plays, and how it manages to do it in the first place.