Carl T. Bergstrom

Listening to very smart people talk about #GPT4 I'm reminded of the joke about a checkers-playing dog.

A guy has a dog that plays checkers. "My goodness," everyone says, "that's amazing. What a brilliant dog!"

"Not really," he replies, "I beat him four games out of five."

That's GPT4. Its capacities are amazing and completely unexpected.

But it's also so limited. You shouldn't back the dog in a checkers tournament, and you shouldn't use an LLM as a medical assistant or in many other ways.

Carl T. Bergstrom

I feel like this joke captures a great deal of the dialectic that I see here and in the broader discussion around LLMs and AI right now.

I spend most of my time writing about how baffled I am to watch Microsoft and Google betting their futures—and to a degree, ours—on this dog's performance in the World Checkers Championship.

Other colleagues are legitimately amazed that the dog can play checkers at all, and want to understand how well it plays, and how it manages to do it in the first place.

Carl T. Bergstrom

(Here the metaphor really strains, but my other big concern is what happens to the game of checkers, to which everyone on the planet has been addicted by design, when all of a sudden everyone has a dozen of these dogs of their own, and checkers-playing has already been monetized by 20 years of surveillance capitalism, and then you throw in a handful of bad actors who want to see everything burn.)

Tom Bellin :picardfacepalm:

@ct_bergstrom What if we took the deep data algorithms that already drive most large-scale software operations and made them chatbots?

Saivad

@ct_bergstrom one possible outcome is that companies that make checkers boards will be financially successful. Or in this case, companies that make GPUs.

leo

A dog, as a living being, has intelligence, but #chatgpt, being an object, is not intelligent. It's an inanimate object without thought, emotion, or any feelings. ChatGPT is also not an #ai or #agi, as many people call it; it's a large language model, meaning it is trained on a large corpus of language as its base and can only answer our questions from the dataset it has access to.

1/

leo

This in turn is the extent of the ability of chatgpt or present "AI": all of them are predictive models that spit out text correlating to your query and answer in a very confident manner that makes us believe what they say. We can see these ai models giving the most wrong answers with the utmost confidence and pride, and when their mistakes are pointed out, they panic. Without the hype, if we look closely, we can see that these #ai are no more than glorified word predictors with pride.

2/

leo

@ct_bergstrom Sorry, this went out with a mistake; I am correcting it after this reply.
