Eugen Rochko

It’s hard not to say “AI” when everybody else does too, but technically calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger they pose is primarily through the false sense of skill or fitness for purpose that people ascribe to them.

350 comments
Dr. Sbaitso

@Gargron Yup. They're not intelligence. They're not hallucinating. They're not making new connections.

They're data-center scale autocorrect.
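The "autocorrect" framing can be made concrete with a toy sketch (illustrative only; a real LLM works over subword tokens with billions of learned parameters, not raw counts): record which word follows which in a corpus, then predict the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy "autocorrect": count successor frequencies in a tiny corpus.
# The prediction is purely statistical -- no understanding involved.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the statistically most likely next word, or None if unseen.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" ("cat" follows "the" twice; "mat"/"fish" once each)
```

Scaled up by many orders of magnitude and smoothed by a neural network, this next-token estimation is the mechanism the comment is pointing at.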

Marsh Ray

@drsbaitso @Gargron A while back, I asked GPT-4 to come up with a meme in response to this viewpoint

Raphi

@drsbaitso @Gargron Data-center autocorrect is such a good fit, and it's exactly my experience. Chatbots are so unreliable that you have to check each and every answer, and that makes using them entirely pointless.

Merijn 👨‍💻:mastodon:

@Gargron agreed. It's just LLMs, the professional nonsense generators

Elisa Fadda-🔺

@Gargron funny, I had this exact same conversation in my head yesterday about something in a paper I am writing. I decided to define the method as ML throughout, and I will most def argue to keep it as such even if any reviewer/editor suggests otherwise 😌

ator robot

@Elisa @Gargron it's possible to be more specific or "technically correct", with terms like LLM or neural net, depending on the context of course - I agree though! often I use "AI" in conversations for the sole purpose of putting on a heavily sarcastic tone, pausing, and padding the word with heavy air quotations. works wonders for my mental health 😏

ator robot

@Elisa I suddenly thought it sounded like I was trying to "one-up" you, so I want to say I like what you're doing; I just wanted to add my thought 😊

Sibrosan

@Gargron

Maybe it should be consistently written as "AI" instead of as AI.

katrina

@sibrosan @Gargron Or maybe A"I"?
It is definitely artificial. Just not intelligent.

TanyaKaroli

@sibrosan @Gargron yes, several people do this already; I often opt to myself when giving presentations

Rob Parsons

@sibrosan @Gargron I like "AS" for artificial simulator. I like "AP" for artificial parrot even better.

Gabriele Pollara

@Gargron well said. #AI is high throughput data integration, processing and repackaging.

Shrikant Joshi

@gpollara @Gargron

But, at some point, doesn't "high throughput data integration, processing, and repackaging" become indistinguishable from "conventional intelligence" though?

katrina

@shrikant @gpollara @Gargron Until such time as we figure out how the brain works, no. I don't think it is possible to do it using boolean logic.

afrangry

@shrikant @gpollara @Gargron good point, but this is a philosophical question. The thing is, AI currently just hasn't reached the point where it can be viewed as conventional intelligence

Trygve Kalland 🇺🇦

@shrikant @gpollara @Gargron That statement is hard to prove or disprove, since we don’t have a solid model of how the human mind works. And since we don’t, we often end up in a non-sequitur: that this thing we don’t understand (the mind) must be like the thing we do understand (statistics, AI).

afrangry

@Gargron I once saw a picture of AI-generated JS code with thousands of errors🤣

Nicole Parsons

@Gargron

It's another round of Wall Street hype by anti-democracy billionaires, no different from NFTs or cryptocurrency - just another scam.

OpticalNail

@Npars01 @Gargron I disagree. NFTs aren't useful, and cryptocurrencies aren't very useful (although they do have their use cases). ML is very useful. It is abused, extremely abused for that matter, but it also helps solve a lot of problems, which makes ML in and of itself not a scam.
Sure, the ways in which it is used can be scams, but the concept at its core is far from one.

Piiieps & Brummm

@arh
I agree, but would even go a step further.

Is it "machine learning" or "machine assisted/aided learning"? Who is the learning entity?

Similar to "computer aided design". The real designer is the human in front of the computer, not the computer itself.

@Npars01 @Gargron

tuban_muzuru

@Gargron

You may have seen this, Eugen - I used to have a Buddhist roshi who would say "Anything worth saying once is worth saying a thousand times."

This is the plainest, most sensible explanation of this AI beast I've ever seen in 40 years of working with machine intelligence.

youtu.be/eK0md9tQ1KY?si=k0eEy0

Still Andy Really

@Gargron and sadly nobody will become more intelligent by using it

VoquiLeibbrandt pas un pseudo

@Squirlykat @Gargron Just ask the question: would I give the same answers? ...and you prove to be more intelligent.

stoicmike

@Gargron It's another huge tech scam, like crypto...

DELETED

@stoicmike @Gargron This is what I've been saying for a while personally. Unfortunately, "you can generate any piece of art with just a sentence" is a lot more eye-catching and widely understandable than "you can decentralize your financial transactions," hence why "AI" is a lot more widespread than crypto ever will be.

🇮🇶💚BabilionSon💚🇮🇶

@Gargron still wondering about your main goal with that post?!

DELETED

@Gargron
"AI" can mean a number of things:
- General AI, à la HAL9000 or The Culture: ChatGPTs are not it and will never be, just like we cannot reanimate dead corpses simply by pumping massive quantities of electricity into them
- The field of AI, an academic discipline that has been around since the 1950s: ChatGPTs are a misguided application of a sub-sub-sub-field of that discipline, based on misappropriated IP.

DELETED

@Gargron
Furthermore, the snake oil marketing of the ChatGPTs should not detract from LLMs as such: they are a very valuable scientific object with lots of useful and legitimate potential applications.
Mass-producing superficially plausible garbage by patchworking stolen IP is not one, but it does not mean that they do not exist. The damage is being done by unscrupulous capitalists willing to steal IP, sell idiotic products and dump externalities on society — not by technology itself.

DELETED

@Gargron
In short, identifying ChatGPT with AI is akin to reducing modern physics to Radithor and Tho-Radia, and will soon sound every bit as cringe and dated.

en.wikipedia.org/wiki/Radithor
en.wikipedia.org/wiki/Tho-Radi

Dmitry Borodaenko

@Gargron This false sense of skill or fitness for purpose comes from its incomprehensibility. Not because it's complex, although it is that, too. More because it is not designed to be comprehended. Only to be consumed.

The danger of AI is not in that it's not intelligent, it is in that it's unintelligible.

Thomas Ricouard

@Gargron Then we can agree that at this stage AI safety is another trick of marketing.

DELETED

@dimillian @Gargron sentience is orthogonal to intelligence. As loss functions continue to improve, new capabilities will spontaneously emerge, corresponding to the improving predict-the-next-word ability that makes this externally observable improvement possible.

unless this process hits a wall at some point due to regulation, lack of progress in chips/algorithms, some kind of diminishing return, etc, eventually it will result in generalized intelligence superior to our own.

thanks for IceCubes by the way.
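The link the commenter draws between "loss functions improving" and "predict-the-next-word ability improving" is definitional: the standard training loss, cross-entropy, is just the negative log-probability a model assigns to the actual next token. A minimal sketch (toy probability tables, not a real model):

```python
import math

def next_token_loss(probs, target):
    """Cross-entropy for a single prediction: -log p(target).

    probs: dict mapping candidate next tokens to model probabilities.
    Lower loss means the model put more probability on the true token.
    """
    return -math.log(probs[target])

# Two hypothetical models scoring "the cat sat on the ___",
# where the true continuation is "mat".
weak   = {"mat": 0.2, "dog": 0.5, "sky": 0.3}
strong = {"mat": 0.8, "dog": 0.1, "sky": 0.1}

print(next_token_loss(weak, "mat"))    # -ln(0.2) ~ 1.609
print(next_token_loss(strong, "mat"))  # -ln(0.8) ~ 0.223
```

Whether driving this one number down indefinitely yields general intelligence, as the commenter predicts, is exactly what the thread is disputing; the loss itself only measures next-token prediction.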

Sven A. Schmidt

@dimillian @Gargron AI can be enormously problematic without it actually being intelligent

Sven A. Schmidt

@dimillian You should maybe read up on how AI is being used for spam, misinformation, propaganda, impersonation, to name a few

Ilgaz Öcal

@Gargron As usual, rms is telling the truth and nobody listens ;-)

"I can't foretell the future, but it is important to realize that ChatGPT is not artificial intelligence. It has no intelligence; it doesn't know anything and doesn't understand anything. It plays games with words to make plausible-sounding English text, but any statements made in it are liable to be false. It can't avoid that because it doesn't know what the words _mean_."

VoquiLeibbrandt pas un pseudo

@ilgaz @Gargron Unless you are a master of etymology, you - like most humans - have no safe idea of what a word _means_, and I guess that any 'AI' with a good etymology database can beat you at that game :)

Osma A

@Gargron
The deceit doesn't start there, but in the A. None of this is any more artificial than most of what surrounds us, certainly all of our software. It's automation. It's also, in most cases, inference. So Automated Inference.

The questions are what is being automated, who stands to benefit, who is at risk, and what guardrails are around it.

TanyaKaroli

@Gargron that, the climate impact and the underpaid annotators are the three points I always try to fit in when I talk to people about LLMs and their chatbot interfaces.

Kathy E. Gill 🇺🇦

@Gargron
I have been calling it an algorithmic tool. I like @emilymbender's suggestion to refer to it as automation.

#AI

Daoud

@Gargron this is the same problem we had when expert systems were called "AI" en.m.wikipedia.org/wiki/Expert

I guess the temptation to think a problem is solved is too high.

At least we're consistent in calling rubbish systems "intelligent" 😂

Richard Levitte

@Gargron
Isn't that exactly why it's called *artificial*?

DELETED

@Gargron Whatever intelligence is. Advanced statistics?

Tarmo Tanilsoo

@Gargron Reminds me how it is fashionable to say AI subtitling, AI voice etc. I am old enough to remember when these things were called speech recognition, text-to-speech etc.

Catherine Berry

@Gargron @darylgibson

A college professor of mine back in 1983 said "'AI' is what we call software we don't know how to write yet." I think this neatly captures the problem we have talking about current "AI". In 2000, nobody knew how to write software that would drive cars, write poetry, play grandmaster-level chess, or summarize text, so those were considered to be examples of what AI might accomplish. Now we know how to write systems that do those things, so they are no longer AI.

Duncan Stephen

@Gargron The people using the phrase "AI" to describe their weak products are the same people who used the word "algorithm" for everything ten years ago.

Rasmus Vuori

@Gargron I agree, which is why I usually describe contemporary AI as Statistics on Steroids

:CatVibing: kittymaxxing :CatVibing:
@Gargron same goes for vr too
hell remember a few years back when they tried to get us to call those sideways skateboards 'hoverboards'

@Gargron My concern with AI is also one of category. I think we can think of it in too reductionist a way. It's going into a socio/psych complex system to be used. It's the use that will likely define it.

Günter :mildpanic:

@Gargron is this not “gatekeeping” the term AI?

Eg if ChatGPT is not a form of AI, what does good AI look like?

Maybe we are thinking too much about sci-fi, where we want to have a human-like bot?

craftycat

@lifeofguenter @Gargron I think it's more the fact that in order to be considered AI, it has to be more than a fancy auto-correct. ChatGPT isn't more AI than T9, it's just got more data to work with. I don't think calling things what they are can be classed as gatekeeping.

Tio mark

@Gargron Like everything in computer science, AI is only as good as the program behind it (programmers are human and make mistakes) and the data it is fed with...

MegatronicThronBanks

@Gargron It's a hybrid of Symbolic AI and Neural Networks, which is where this was always going. This current weak adaptation is already destroying egos left right and centre, and it's going to improve. More feedback. Deeper abstraction. Inference.

Zoe Hendrickse

@Gargron yeah. The true test for AI, I believe, is when they’re able to self-learn and improve, so successions of generations without human input can result in them being “better”.
So far all the “AI” systems being pushed are known to become worse if fed their own output.

VoquiLeibbrandt pas un pseudo

@Gargron By now I dream of AI under the influence; it may make it more human... though I am not under the influence myself :)

Helles Sachsen

@Gargron
There is no commonly accepted definition of intelligence with which you can prove that a human around you is really intelligent and not just acting like it.

It's a double standard if a computer has to give more proof than a human.

EDIT: In reality it's the same with humans. Humans don't understand the logic behind grammar; they just use it because they learned it statistically from their parents, not from the rules.

Sumukha S

@Gargron True. There is no original intelligence, just data processed and delivered as information with really good packaging.

NickT6630

@Gargron What I consider to be proper "A.I." is implementing coded algorithms to do stuff like make a robot that learns its way round a maze or a chess playing algorithm or the min/max algorithm as examples.

Darren Moffat

@Gargron I prefer expanding AI as Adaptive Inference

Leonard Ritter

@Gargron cosmic brain: intelligence doesn't exist at all

The Other Economist

@Gargron I would remind you of the percentage of the population that still believes that the CIA puts chips in Covid vaccines, that lizard men are not D&D NPCs but are real and have invaded Earth, and believe that Harry Potter is a danger to children because they will learn magic spells and become involved in witchcraft and become damned. For most, including college educated adults and many with graduate degrees, AI really means MMM — Magic Mojo Machine. It’s good gris-gris.

accela

@Gargron Well, as John Haugeland writes in his AI book, this is no different from how we refer to naturally occurring, synthetic, and artificial diamonds, for example.
Are we looking for the creation of "true", synthetic intelligence? Sure, in the long run.
But "artificial" is the best way to describe the current state.

balkongast

@Gargron

And it consumes energy as if there's no tomorrow.

Helles Sachsen

@balkongast

Just the training of the model. GPT-3 and 4 don't consume that much energy now that the model is finished. There are also already-trained open-source models that you can run on your own PC.

@Gargron

Helles Sachsen

@balkongast

It costs maybe 10 billion euros to train a model. But after that you can copy it a million times, and then the educational cost is 10k EUR per entity. This is much lower than the education of a student.
@Gargron

balkongast

@helles_sachsen @Gargron

AI will hardly be able to replace human ideas.
And I can imagine that there will be many models to be trained, at least I can't think of a limited amount of models.

Helles Sachsen

@balkongast

AI can hallucinate even now; this is an early stage of being creative. It just has to check its hallucinations for plausibility.

@Gargron

Helles Sachsen

@balkongast

Human creativity is also hallucinating and then checking it with reason. That AI can draw such nice images (and a collage of already existing art is still art; you get a copyright for it as a human) is a kind of proof of creativity.
@Gargron

balkongast

@helles_sachsen @Gargron

The conclusion I draw is that we just need to stop AI training?

Helles Sachsen

@balkongast

Why? I for one welcome our new overlords; I'm waiting for a general AI. Human "intelligence" is so problematic, the situation can only improve.
Also, I gain a lot from these early stages of AI. Using it as a tutor who shit-talks sometimes improves my learning speed by a factor of 2-3.
@Gargron

balkongast

@helles_sachsen @Gargron

I consider this just another hope in technology.

Helles Sachsen

@balkongast

I already gain so much from these early stages of AI; my speed of programming and learning has increased so much. Human tutors also shit-talk sometimes; it's normal to check information. But with these early tools being so helpful, I can't imagine what we could have in 20 years.

@Gargron

balkongast

@helles_sachsen @Gargron

Programming is a strictly logical topic.
Hallucinating is hardly the way to succeed at it.

Helles Sachsen

@balkongast

It is! I ask 5 times for the same function; two of the answers don't work, and among the three working versions there is often one really impressive solution.

EDIT: And I for one learn from that impressive solution. And I think in 5 years I'll only have to ask twice to get an impressive solution.

@Gargron

balkongast

@helles_sachsen @Gargron

Having read books like Code Complete 30 years ago, coding in teams etc I prefer human ideas and interaction over machines.

Helles Sachsen

@balkongast

I prefer the best solution, not the human solution.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

So you seem to have the ultimate knowledge.
Congratulations. Honestly.

Helles Sachsen replied to balkongast

@balkongast

Programming is art; you can spot good code at first sight because it has its own aesthetic. And AI can do this.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

I'm fine with your point of view, but I don't share it.

Helles Sachsen replied to Helles

@balkongast

I work in teams, but I ask an AI for code, not them, because it is better now. Junior devs will vanish; you will just need software architects in the future. Coding will be done by AI.

@Gargron

Helles Sachsen replied to Helles

@balkongast

I talk with the team about architectural decisions, but not about coding functions.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

Maybe. 30 years ago we already had approaches like CASE. The progress may now really aid software engineering, but I still believe that the questions need to be asked by humans, and only humans have created the automation behind what we call AI. Look at weather forecast models. That's what happens with coding in your case. There is no intelligence behind it. It's just going through a lot of paths in statistics.

Helles Sachsen replied to balkongast

@balkongast

There are already trained AI models whose purpose is to alter the code of other AI models. We have already entered the path where they write their own code.

I really think you overestimate human intelligence. Simple animals like mice and ravens pass the mirror test; ravens use tools. Deep neural networks have been detecting cancer better than radiologists for 10 years. Nobody knows how; for 10 years. You can't be sure what's happening inside these DNNs.

@Gargron

balkongast replied to Helles

@helles_sachsen @Gargron

Even if I may overestimate human intelligence, I would still prefer that we restrict ourselves to it.
HAL 9000 can tell us; Stanley Kubrick told a fascinating story with that movie.

Ric

@Gargron I've been trying to say exactly this to people for years. It's basically just autocorrect with a way bigger database and better(ish) context awareness. AI requires sentience, which isn't possible with current hardware, and probably never will be.

The misunderstanding is leading to some companies trying to adopt language models to replace workers, but it doesn't play out well. Eventually it'll settle down to be a toolset but not a replacement for humans. There's little to fear there.

jorgen

@Gargron I agree with you 100% I've tried to talk sense into acquaintances, but they're buying into the mass hysteria created by either ignorant fools or people who have something at stake.

I think this issue highlights the importance of science education.

Metafrastis

@Gargron That battle was lost about 40 years ago. It will just be called AI from now on. Those who can't accept it will be forgotten in the same corner as the "it's not <the internet>, it's called <the World Wide Web>" guys.

Mx Amber Alex

@Gargron alternative suggestions:

- glorified autocomplete
- plausible-sounding text generation
- primitive pattern recognition less intelligent than an average five-year-old

Newro

@Gargron I hear you. 99% of what we do at work is classic ML using statistical models. The math has existed since the 1970s, but the availability of data and cloud processing have really elevated its usefulness, and companies jump on these wherever they can. Marketing does the rest.

But not everything we do right now is just statistics. Looking at advances in ANN for example.

M

@Gargron I feel the machines can respond more intelligently than most humans. They may not be right, but indeed, after several election results in countries where humans seem to vote contrary to their best interest, I believe they will do better than us in any field, before too long.

Dasha Dayter

@Gargron very recently seen the suggestion to call it “automation” (don’t remember from whom, maybe @emilymbender ?..)

gmsizemore

@Gargron And let me guess...your definition of "intelligence" specifically excludes what people are calling AI. But here's the thing...there is nothing called "intelligence." Find out what the independent-variables are that are relevant to the behavior we _call_ "intelligent" - even in nonhumans. And, actually, after the era of GOFAI ended, they did...sort of. AI people have ignored the natural science of behavior and it has hurt their efforts for AGI. Think "conditioning," people. Sheesh!

pc_roto

@Gargron i have been long thinking about this. Isn't our own brain electric pulses? We are social animals, we learn imitating others, so, what is intelligence? Not trolling, i'd like to read you

Dimples

@Gargron or as I like to call it “garbage in, garbage out”.

步丈九州不想六月四日在广场上面对坦克           

@Gargron But a human can be viewed as a function with several inputs and outputs; when software can simulate this function to some extent, calling it intelligence is not improper.

Although the Turing test is not so proper for today's deep learning, you cannot say it is not about intelligence just because it is largely based on statistics. They are building on principles people use daily, but those principles are obtained through statistics.

parksanahsap

@Gargron they will be really more intelligent than us

AdeptVeritatis

@gargron

Where did people read about the autocorrect metaphor?

When more than one person is using the term, it is highly likely they saw it somewhere.

It doesn't make any sense to me. Maybe I'll understand when I read the original thoughts.

The picture I usually use is the Rorschach test with prepared pieces of paper.

Gegenwind :antifa:🇺🇳🖖

@Gargron

If silicate-based lifeforms came to Earth, what if they said: „They can't become sentient. It's just chemistry"?

Don't underestimate emergent effects. Sure, we are not at that point yet. However: never say never.

PureTryOut

@Gargron Exactly this! Far too many are relying on it for actual answers about everything, as if it actually knows. No it doesn't, it's just statistics, it doesn't "know" shit.

satmd

@Gargron I call it “absent intelligence”. (German: KI, “keine Intelligenz”)

Chrisi79

@Gargron Those systems behave intelligently even if they are not intelligent. They understand complex questions and complex code, for example. Even if it is all statistics in the end, the results are breathtaking. #AI #ChatGPT #OpenAI

Shtgaus

@Gargron I'm glad those are going through this Ouroboros effect. The internet is flooded with "AI" sh*t and it's so annoying 🫠

glelkaitis

@Gargron "AI" is a testament to how we assign a different meaning to the word as we see fit or fancy. Which "AI" is incapable of.

Clemens Grabmayer

@Gargron Please compare with Geoffrey Hinton's answer in last week's interview with @erictopol : mstdn.social/@erictopol/111546

ET: "Human Intuition vs what a LLM (Large Language Model) can do: are these complementary?"

GH: "Yes & No. [...] I think they work pretty much in the same way as us. Most people who say 'they don't work like us' don't actually have a model of how we work. It might interest them that these models were actually introduced in order to understand how our brain works".

I'm Mozart

@Gargron No. The danger they pose is very real and present, through the massive abuse of $2/hour Kenyan workers, to the wholesale stealing of licensed works, to the fraudulent marketing, to the very real impacts on the poor and minorities.
Read Emily Bender and Timnit Gebru's works, amongst others, to understand the actual harms being done to real people, today.

Luke

@Gargron I used to do research (20+ years ago) getting statistical models to generate plausible sounding musical performances. We got quite good and learnt a lot about what makes music musical but there was certainly no intelligence there. Lots of musicians worried that work like ours would take away their livelihoods, but actually it was streaming that did that ;-) LLMs are probably a similar red herring.

k cavaliere

@Gargron Well put. I'll give "AI" points for being clever, maybe even useful in some limited cases, but "garbage in, garbage out" still applies.

sharpstick

@Gargron One of the blind spots of the Turing Test is the gullibility of the human administering the test. Or at least the experience that human has had with honest real-life interactions with humans. As our expectations of genuine human interaction continues to be shaped more and more by text and images and less and less by the nuance of voice and facial expressions the easier it is for us to grade the Turing Test on a curve. Our plummeting expectations are being met by a slight rise in computer capacity and we think that it has suddenly reached quasi-sentience.


Dave Wilburn :donor:

@Gargron as seen elsewhere...

ML is written in Python.

AI is written in PowerPoint.

DELETED

@Gargron

I call them LLMs.

I call general intelligences "Digital Intelligence", because when they are eventually real they won't be artificial.

Martijn Vos

AI is a field of research that has existed for decades, and has already resulted in chess computers, expert systems, automatic pattern recognition, image recognition and more.

Of course LLMs are AI too. What they are not, is Strong AI or AGI: human-level general intelligence.

william.maggos

@Gargron

how do we know we're any different? We can't prove it. But I'll agree they need more scrutiny and less hype. They seem designed to sell compute services.

Timduru

@Gargron Yeah, it's just big LMs and knowledge graphs currently.
Google Home is dumb as fuck.

Fortunately we're not there yet, but it will become "sentient" once code-writing AI becomes good enough.

AND

some "genius" thinks it'll be a good idea to give it access to its own code in order to improve itself ;)
or such an "AI" iterates to improve another "dumb AI"'s code ;D

That's when we'll be doomed, and what we should never open the door to :P

But it'll happen sooner or later...


Kai Wegner

@Gargron that’s like saying it’s not the heat that kills you in an explosion but the change in pressure. It doesn’t change the result, and in marketing the plume of flames is what people will recognise and understand.

Harald R.

@Gargron
As I say in one of my pinned Toots: "Artificial intelligence" is just digitally amplified natural stupidity.

SmokeCrack.lol

@Gargron It's tempting to call it AI, but I'd rather refer to it as 'sophisticated pattern recognition software.' Ultimately it's just a fancy algorithm, or a glorified data matcher.

Whyfullyblind

@Gargron what I don’t get is the implicit bias they build into the machine decision-making that they don’t even see. It’s bizarre and dangerous. Hubris.

noplasticshower

@Gargron nicely put. Been working on this for several years ... berryvilleiml.com/

Preslav Rachev

@Gargron wrote about that a few months ago: preslav.me/2023/05/22/i-believ

The best we get to call it is Augmented (Human) Intelligence. Like those glasses that overlay things in front of your eyes, current AI is mostly a tool that does stuff for you, and it just happens to do it better than tools before it. New types of problems bring in new tools to solve them. But it’s just brute-forcing an answer in the end, I agree.

David D. Levine

@Gargron I try to always put "AI" in quotes, or use LLM or chatbot instead.

RememberUsAlways

@Gargron

The basic concept of AI, although hyped, is to increase the frequency of correlations in data to provide new hypotheses.
How that can lead to sentient, learning computers will be discussed when language models are massive, 10 to 15 years from now.
That's it, IMHO

DELETED

@Gargron Instead of focusing on the possibilities, the description signals the desire to belittle, classic human behavior under stress. We know nothing about future developments.

DELETED

@Gargron My son gets this lecture every time someone so much as breathes the word. “It’s not AI. It’s just a chat bot with internet access. It’s also inaccurate. AI means intelligence. They have none. They’re just computer programs.”

(Not disparaging anyone who writes aforementioned programs, but they’re not artificial intelligence, sorry. Also our councils and the government don’t use computer programs to detect fraud, but “AI” which just adds to the whole lecture.)

Roundball

@Gargron Yes and yes, but current views of human language processing suggest it is highly "statistical" in nature, insofar as it seems to track and record conditional probabilities, which are also at the heart of modern AI.

Bernd Paysan R.I.P Natenom 🕯️

@Gargron It is called "artificial" for a reason. AIs try to solve classes of problems that would require natural intelligence for human beings to solve. That doesn't mean the computer solution does it the same way.

One problem with current LLMs is that their knowledge base is vast amounts of web pages created by humans, which lack natural intelligence. Garbage in, garbage out. The machine didn't do original research.

Pavel Machek
@Gargron You can easily tell if people know what they are talking about. If they say "language model" they probably do, if they say "generative AI", they may, if they say "AI" they usually don't.
Sebastian Lasse

@Gargron

just an example: you are looking for details about the rockets of Hamas terrorists.

Google [DE] automatically asks for you:
How many rockets does Hamas own?

Google [DE] automatically answers for you:
The IDF recently said 7,000

:digitalcourage:

This is complete bullshit: 7,000 is not the count but the number of rockets fired into Israel within a few days.
This is Google fueling Jew-hatred by guessing numbers.
A huge danger @EC_Commissioner_Breton

yoused

@Gargron No, the biggest danger they pose is that we computerize/network everyfuckingthing. Intelligence is not different from logic circuitry – we just have emo-chemistry to flavor it. AI could pose a real, physical danger because we let it.

Evan Prodromou

@Gargron come now! This overstates our current knowledge of the nature of intelligence. LLMs are adaptive, they have memory, they use language to communicate, and they integrate disparate experiences to solve problems. They have many of the hallmarks of what we call intelligence. They have more such characteristics than, say, dolphins or chimps. Us knowing how they work is not a disqualifier for them being intelligent.
