Tom Bellin :picardfacepalm:

@escarpment @carnage4life People have given agency to computers for ages, just as you are doing now.

It's natural that we project the structures and behaviors of our minds onto everything around us.

GPT is cleverly (and expensively) designed explicitly to fool people into seeing intelligence.

Part of the trick is that no one can believe that companies would spend billions of dollars to make a chatbot.

Escarpment

@tob @carnage4life I'm not naively giving agency to computers. I am not fooled. I have studied cognitive science and computer science for a long time. I am simply remarking on the nature of these systems. I hypothesize, but cannot yet prove, that to predict the next word, these systems must rely on representations of functions and abstractions beyond a simple "given words w, x, y, predict word z".

Escarpment

@tob @carnage4life I invite people who belittle the system or think it's some kind of parlor trick to design their own system that passes these various tests: stacking physical objects in a logical way; deducing an abstract word from a range of different examples of that word.

I especially invite them to attempt to do so with "traditional" statistical methods, such as an n-gram model.
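For concreteness, the kind of "given words w, x, predict word z" lookup being contrasted here can be sketched in a few lines (a toy trigram model; the corpus and names are illustrative):

```python
from collections import Counter, defaultdict

def train_trigram(tokens):
    """Count which word follows each (w, x) pair in the training text."""
    counts = defaultdict(Counter)
    for w, x, z in zip(tokens, tokens[1:], tokens[2:]):
        counts[(w, x)][z] += 1
    return counts

def predict(counts, w, x):
    """Return the most frequent continuation, or None if (w, x) was never seen."""
    following = counts.get((w, x))
    return following.most_common(1)[0][0] if following else None

corpus = "the cat sat on the mat and the cat sat on the floor".split()
model = train_trigram(corpus)
print(predict(model, "cat", "sat"))  # "on" — this pair appears in training
print(predict(model, "sat", "mat"))  # None — unseen pair, no generalization
```

The point of the contrast: an n-gram model can only regurgitate continuations it has literally counted, so tasks like stacking objects logically or abstracting a concept from varied examples are out of reach without richer internal representations.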

Tom Bellin :picardfacepalm:

@escarpment @carnage4life I am not saying it's not cool and a huge accomplishment. But it's akin to climbing Mt. Everest.

It's amazing. But not exactly productive.

My main issue with GPT/Bard is that everyone at these LLM companies knows that their tech is a toy, but they can't admit it.

It's like they climbed Mt. Everest and then tried to tell you that the future was everyone running their business from the top of Mt. Everest.

Tom Bellin :picardfacepalm:

@escarpment @carnage4life Will some of the tech that OpenAI/Google/etc. built in making these generative systems be useful in the future? Definitely.

Are LLMs AI? No. Will they eventually be AI? No. Should they be used for anything other than a lark? No.

And that's the problem. The companies that invested $billions into LLMs are *never* going to see a return. (Even if you ignore the copyright theft angle - which they are eager to do.)

Tom Bellin :picardfacepalm:

@escarpment @carnage4life And that problem, that they've invested $$$ in a technology that's not worth $$$, is what's putting AI in the bitcoin category.

These companies now have to convince other companies that they *NEED* their worthless tech and must spend $$$ to get it.

The optimal outcome for OpenAI/Bard/etc. is an "Emperor's New Clothes" scenario where so many big players have bought into the BS that no one dares say it's BS.

Escarpment

@tob @carnage4life I can't really comment on the "politics" of the technology: who's trying to hype what, who's overstating potential applications. My personal view is that this technology is way more interesting than bitcoin. I think people are way too quick to dismiss it as not artificial intelligence when it passes a bunch of tests for intelligence that psychologists devised to characterize human and animal intelligence.

Escarpment

@tob @carnage4life I also can't deny the applications I have seen with my own eyes: I ask it software programming questions and it helps me come to a solution. I've seen it hooked up to a robot and make the robot pretty "intelligent".
