Matt Hodges

@evan @Gargron Not too long ago — in fact, roughly a year or two ago — "Artificial Intelligence" was a term used to describe computer systems that could perform tasks that historically required human cognition. Few people were offended that chess- or Go-playing systems were considered "AI", and "real intelligence" was never a requirement. But, as we see time and time again, "AI is whatever hasn't been done yet."

en.wikipedia.org/wiki/AI_effect

Matt Hodges

@evan @Gargron I think it's historically incorrect to say that "technically calling it AI is buying into the marketing". Yes, marketing is capitalizing on the term! But the nomenclature matches my CS education from the late 2000s, and it matches 70 years of how "AI" has been used in research and literature. The recent obsession with asserting "theory of mind" or "intentions" or "originality" or "real intelligence" seems, well, recent.

Evan Prodromou

@MattHodges @Gargron I think there are a lot of things GPT-4 is bad at. It's not very good at simple arithmetic. It's bad at geographical information -- which places are near others, or are parts of each other. It also does a bad job at string manipulation -- finding words that start with a particular letter, or words that are anagrams of other words. I don't think you have to resort to mysticism to explain why it is not yet human-equivalent. But that doesn't mean it's not intelligent.

Matt Hodges

@evan

Yes, and...!

> It's not very good at simple arithmetic.

This is a recurring example that is starting to illustrate the difference between bare LLMs and the products built on top of them. E.g., ChatGPT is a product built on top of a system. That system has a lot of components. One of those components is an LLM. And another component is a Python interpreter. LLMs can write Python quite well, and Python can do math quite well. A rough sketch of that division of labor is below.
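Here's a minimal sketch of that pattern, under loud assumptions: `ask_llm` is a hypothetical stand-in for a real model API call (it returns a canned response here so the script runs end to end), and the "sandboxing" is deliberately crude.

```python
# Minimal sketch of the "LLM writes Python, interpreter does the math" pattern.
# ask_llm() is a hypothetical stand-in for a real model call, not a real API;
# it returns a canned response so this script runs end to end.

def ask_llm(prompt: str) -> str:
    """Pretend model call: returns Python source that answers the prompt."""
    return "result = 123456789 * 987654321"

def run_tool(code: str) -> object:
    """Execute model-written code in a bare namespace and return `result`."""
    namespace: dict = {}
    # Empty __builtins__ is crude sandboxing; a real product isolates far more.
    exec(code, {"__builtins__": {}}, namespace)
    return namespace.get("result")

prompt = "Compute 123456789 * 987654321. Reply with Python that sets `result`."
print(run_tool(ask_llm(prompt)))  # 121932631112635269 -- exact, no LLM arithmetic
```

The model never has to do the arithmetic itself; it only has to produce code, and the interpreter returns an exact answer.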

Seems like a pretty intelligent system to me!

[Image: a screenshot from ChatGPT showing that it uses Python to do math.]