Dare Obasanjo

As long as generative AI is defined by “fancy autocomplete” also known as large language models, we should expect that hallucinations won’t go away.

The idea of forcing autocomplete to tell the truth is absurd, but that is essentially what people are expecting out of LLMs like ChatGPT and Google Bard. It’s quite likely these are simply impossible expectations to meet.

fortune.com/2023/08/01/can-ai-

18 comments
Adriano

@carnage4life I'd posit that if Google and the rest didn't want people thinking their LLMs would tell nothing but the truth, they shouldn't have used them for stuff that is expected to be truthful.

Like Tesla naming their assistant "autopilot" and then saying "people shouldn't think this thing can drive the car by itself"

Escarpment

@carnage4life I feel obligated to classify the "fancy autocomplete" pejorative as wrong. This is a pernicious attitude but it just doesn't match the facts. For example, one researcher asked GPT-4 questions about how to stack a bunch of physical objects (a book, a stool, a pencil, a beachball) and it gave a reasonable answer. That requires algorithmic reasoning. The neural network must have representations of algorithms.

Escarpment

@carnage4life Another example from my own personal testing: giving it examples of abstract concepts. You can seemingly give it a thousand different examples of, say, "optimism" or "justice", each phrased with completely different words, and it supplies the abstract word that describes the example.

Tom Bellin :picardfacepalm:

@escarpment @carnage4life People have given agency to computers for ages, just as you are doing now.

It's natural that we project the structures and behaviors of our minds onto everything around us.

GPT is cleverly (and expensively) designed explicitly to fool people into seeing intelligence.

Part of the trick is that no one can believe that companies would spend billions of dollars to make a chatbot.

Escarpment

@tob @carnage4life I'm not naively giving agency to computers. I am not fooled. I have studied cognitive science and computer science for a long time. I am simply remarking on the nature of these systems. I hypothesize, but cannot prove yet, that to predict the next word, these systems must rely on representations of functions and abstractions beyond a simple "given word w, x, y, predict word z".

Escarpment

@tob @carnage4life I invite people who belittle the system or think it's some kind of parlor trick to design their own system that passes these various tests: stacking physical objects in a logical way; deducing an abstract word from a range of different examples of that word.

I especially invite them to attempt to do so with "traditional" statistical methods, such as an n-gram model.
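[Editor's note: for readers unfamiliar with the baseline being contrasted here, this is a minimal sketch of a bigram (2-gram) next-word predictor; the corpus and function names are illustrative, not from the thread.]

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word -> next-word frequencies across a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for word, nxt in zip(words, words[1:]):
            counts[word][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Toy corpus: the model can only ever reproduce word pairs it has seen.
corpus = [
    "the cat sat on the mat",
    "the cat sat by the door",
    "a dog ran in the park",
]
model = train_bigram(corpus)
print(predict_next(model, "cat"))      # sat
print(predict_next(model, "unknown"))  # None
```

A model like this has no representation of objects, stacking, or abstraction at all, which is the gap Escarpment is pointing at: it cannot generalize beyond surface co-occurrence statistics of adjacent words.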

Tom Bellin :picardfacepalm:

@escarpment @carnage4life I am not saying it's not cool and a huge accomplishment. But it's akin to climbing Mt. Everest.

It's amazing. But not exactly productive.

My main issue with GPT/Bard is that everyone at these LLM companies knows that their tech is a toy, but they can't admit it.

It's like they climbed Mt. Everest and then tried to tell you that the future was everyone running their business from the top of Mt. Everest.

Tom Bellin :picardfacepalm:

@escarpment @carnage4life Will some of the tech that OpenAI/Google/etc. built in making these generative systems be useful in the future? Definitely.

Are LLMs AI? No. Will they eventually be AI? No. Should they be used for anything other than a lark? No.

And that's the problem. The companies that invested $Billions into LLM are *never* going to see a return. (Even if you ignore the copyright theft angle - which they are eager to do.)

Tom Bellin :picardfacepalm:

@escarpment @carnage4life And that problem, that they've invested $$$ in a technology that's not worth $$$ is what's putting AI in the bitcoin category.

These companies now have to convince other companies that they *NEED* their worthless tech and must spend $$$ to get it.

The optimal outcome for OpenAI/Bard/etc. is an "Emperor's New Clothes" scenario where so many big players have bought into the BS that no one dares say it's BS.

Escarpment

@tob @carnage4life I can't really comment on the "politics" of the technology: who's trying to hype what, who's overstating potential applications. My personal view is that this technology is way more interesting than bitcoin. I think people are way too quick to dismiss it as not artificial intelligence when it passes a bunch of tests for intelligence that psychologists had devised to characterize human and animal intelligence.

Escarpment

@tob @carnage4life I also can't deny the applications I have seen with my own eyes: I ask it software programming questions and it helps me come to a solution. I've seen it hooked up to a robot, making the robot pretty "intelligent".

Renee DiResta

@carnage4life right. Which is why deploying them in capacities like search where people expect accuracy was a bad call.

Lukas Neville

@carnage4life ... which makes the decision to orient Google Assistant around LLMs puzzling to me.

Sinistar7510

@carnage4life I think LLMs will wind up being just one component of many in a functional AI system. There will be redundancies and checks and balances in whatever AI ultimately is.

Michael Knepprath

@carnage4life People don’t get this. Accuracy of the information is entirely unrelated to the goal of these models, which is simply to sound human.

kostadis_tech

@carnage4life Asimov, in his legendary series that began with I, Robot and ended with the last Foundation book, explores this in exhausting detail. An adversary eventually breaks any rules-based system.

Asimov's observation is that only by eliminating the value of personhood and creating a hive-like mind can you create a workable rule system.

LLMs are a more sophisticated positronic brain than a random number generator, but they still cannot think beyond their rule limits.

Johannes Ernst

@carnage4life "Autocomplete" is a good way of putting it, based on parroting all things that have ever been said on the internet.

LLMs: the world's most sophisticated autocomplete parrots.

Feeling so much better about my healthcare already.
