david_chisnall

Someone recently suggested to me that AI systems bring users' abilities closer to the average. I was intrigued by this idea because it reflects my experience. I am, for example, terrible at any kind of visual art, but with something like Stable Diffusion I can produce things that are merely quite bad, whereas without it I can produce things that are absolutely terrible. Conversely, with GitHub Copilot I can write code with more bugs that's harder to read. Watching non-programmers use it and ChatGPT with Python, I've seen them produce fairly mediocre code that mostly works.

I suppose it shouldn't surprise anyone that a machine that's trained to produce output from a statistical model built from a load of examples would tend towards the mean.
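
(Purely as a toy illustration of that tendency, not a description of how any particular system is built: a minimal Python sketch, with made-up numbers, of a "model" that is only allowed to minimise its average error over a set of training examples. The best it can do is predict their mean, which is the statistical sense in which fitting to lots of examples pulls output towards the average.)

import statistics

# Made-up "training examples": imagine these are quality scores of past outputs.
samples = [1.0, 2.0, 2.5, 3.0, 9.0]

# A constant predictor c, scored by mean squared error over the examples.
def mean_squared_error(c):
    return sum((y - c) ** 2 for y in samples) / len(samples)

# Search a grid of candidate predictions from 0.0 to 10.0 in steps of 0.01.
candidates = [i / 100 for i in range(0, 1001)]
best = min(candidates, key=mean_squared_error)

# The error-minimising prediction coincides with the mean of the examples.
print(best, statistics.mean(samples))  # 3.5 3.5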

An unflattering interpretation of this would suggest that the people who are most excited by AI in any given field are the people with the least talent in that field.

Bjorn Stahl

@david_chisnall convergence to the mean in multiple dimensions. Take any model and use the domain-specific version of ye olde drawing exercise of shading with your thumb, substituting shading for high-frequency detail, and bet the bank that those details didn't matter.

What this fails to capture is depriving both participants of the 'jam session'; turning people into unattributed autocompletes makes you forget the person and deprives them of all of the interaction, whether that would've led to collaboration or mentoring.

Metin Seven 🎨

@lritter @david_chisnall Well said, though I'd personally phrase it this way: "People who are most excited by AI in any given field are the people who are the laziest in studying and practicing to become good in that field."

Marty Fouts

@metin @lritter @david_chisnall there are such people as you describe, but there are also people like my friends, who have worked hard enough to obtain MFAs and have made a living from their art, and who are excited because they are finding novel ways to incorporate AI into their work.

Wanja

@david_chisnall Yep, by using AI I can be mediocre at Russian, Chinese or Spanish, which is a huge improvement. I don't care for it when coding.

Kyle Memoir

@david_chisnall

I have seen exactly this: a person glowing with pride over an AI-produced, full-of-spelling-errors label (how exactly does that happen?) meant for cottage-industry goods being prepared for sale at a craft show.

I made no comment. What can one say?

It's depressing.

J.Sʜᴀʀᴘ🌍🇺🇦Fʀᴇᴇᴅᴏᴍ&Dᴇᴍᴏᴄʀᴀᴄʏ

@david_chisnall

You can just imagine what comes next from kids in school:

"I don't need to learn anything, I'll just use AI!"

Ken Butler

@JSharp1436 @david_chisnall already happening. See e.g. the Professors subreddit.

FlowChainSensei

@JSharp1436 @david_chisnall And just what WOULD kids need to learn when they have (reliable) AI at their fingertips?
#ParadigmShift

Oblomov

@david_chisnall regression to the mean is a regression only if you're above the mean, after all …

David McMullin

@david_chisnall
It makes sense that average would represent an improvement for half the population. But very few people think they themselves are below average, except maybe in some specific area they don’t care much about. So who is the market for “become average”? My guess is it’s employers who assume all their workers are below average, and in any case don’t like paying them.

J Miller

@mcmullin @david_chisnall

Also, those at or near average who can now get results with much less effort. And for things with a normal distribution, that’s a lot of people. Of course, for things that are important, it’s a disaster.

david_chisnall

@mcmullin I would interpret it somewhat more positively. Most people are good at a fairly small set of things. I've had four books published in English, but French is my second-best language and there's no way I'd be able to write a book in it. There are vast numbers of languages where I'm even worse, yet with a language model I may be able to at least be comprehensible, if not erudite.

Looking at things like painting or music composition, I'm well below the average of people who produce anything that would go into a training dataset.

Most people are bad at most things. Talented people are typically talented only in a small set of things. It takes time and practice to become really good at something. If, without AI, I can be good at a handful of things and completely useless at a lot of things, but with AI I can be good at the same set of things and mediocre in a lot of things, that's a win.

Unfortunately, so far, the set of things where it can get me up to even mediocrity is fairly small.

David McMullin

@david_chisnall
I think your interpretation is correct. But I look at this mainly from the point of view of how these tools will be used for art and music. A mediocre drawing by a real person still conveys something of that person. Quality aside, bot “art” is empty in a way human art cannot be, no matter how badly it’s done.

Matt Campbell

@david_chisnall @mcmullin That's not a win for the people, including merely mediocre people (which is the majority), who no longer have a job.

MadMatheMatiker

@mcmullin @david_chisnall I would think of the employers' perspective like this: for most everyday tasks, it is enough to have average performance. However, in a few cases you need performance as good as possible (e.g. to sort things out when the average did poorly). So, instead of hiring 5 slightly-above-average people, you hire 1-2 very good people and use AI for the standard tasks.

Latte macchiato :blobcoffee: :ablobcat_longlong:

@mcmullin@musicians.today @david_chisnall@infosec.exchange
Don't forget that it also reduces the effort needed for mediocrity, which is often all you need.

Nicco

@david_chisnall good take on the AI hype that is currently overwhelming all the so-called professional magazines for IT people

Greg Stolze

@david_chisnall No, you've got it. Coming at this from the writing/LLM side, the apex of AI writing is "mediocre," and I suspect the bigger the dataset gets, the MORE mediocre it will become.

TerryB

@david_chisnall Or C suite managers, who resent paying for talent when the money could better be spent on executive bonuses.

david_chisnall

@terryb More cynically, I suspect that they are impressed because it can do their job very well: Spout plausible bullshit with no understanding of the underlying topic.

Stuart Gray

@david_chisnall That's not wrong, and it's a great observation, but it's also the default output: if you give it a basic prompt and nothing else, you'll get an average or mediocre response.

LLMs are quite capable of generating responses outside of the average, but you need to both craft a better prompt & usually include one to ten quality examples of your expected output.

That doesn't mean they don't have limits, just that beyond average output requires a little skill & effort.
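
(A rough sketch of the few-shot prompting Stuart describes, in Python. The instruction and example strings are invented placeholders, and the message layout just follows the common chat-completion convention rather than any particular vendor's API.)

# Instead of a bare request, the prompt carries an explicit instruction plus a
# few quality examples of the expected output; the examples below are invented.
few_shot_examples = [
    ("Summarise: the build failed because a header file was missing.",
     "Build failure: missing header; fix the include path and rebuild."),
    ("Summarise: the service timed out under heavy load on Tuesday.",
     "Incident: load-induced timeouts on Tuesday; needs a capacity review."),
]

messages = [{"role": "system",
             "content": "Summarise bug reports in one terse sentence, "
                        "matching the style of the examples."}]
for request, model_answer in few_shot_examples:
    messages.append({"role": "user", "content": request})
    messages.append({"role": "assistant", "content": model_answer})
messages.append({"role": "user",
                 "content": "Summarise: the parser crashes on empty input."})

# `messages` can be sent to any chat-completion style endpoint; the point is
# that the examples, not the model, pull the reply away from the generic
# average response.
print(messages)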

Inga stands with Ukraine

@david_chisnall "the people who are most excited by AI in any given field are the people with the least talent in that field"
As many people will say, ChatGPT and its kind provide extremely good and detailed and helpful answers on all topics except the topics in which the one saying this is an expert. For these topics, ChatGPT performance drops from "very helpful and detailed and informative and correct answers" to "bullshit superficially resembling an answer in shape" 🙃

J Λ M Ξ S

@david_chisnall this matches research done by the director of pedagogy at Wharton, Lilach Mollick. she framed it as "AI elevates less experienced people more than very experienced people"

Marty Fouts

@david_chisnall This is another variation on the argument made against photography 150 years ago. To an extent it’s right: few of the billions of photos made every day are art.

Like photography, generative models (which I assume you mean by “AI”) can also be used creatively. It took photography 50 years of creative struggle to develop as an art form, one that is still evolving.

It might be a little early to write off generative AI (although not too early to condemn LLM excesses).

sidereal

@MartyFouts @david_chisnall Not a great comparison. Photography had /immediate/ scientific and strategic/military applications. Meanwhile generative AI doesn’t work without significant human intervention and probably never will.

From a material perspective, photography allowed people to create more precise images with less labor/energy than painting. Generative AI makes less precise images with more labor/energy! I see it as a major step backwards.

Marty Fouts

@sidereal @david_chisnall
Photography, especially in the early days, required significant human intervention. As recently as 20 years ago I would spend many hours in the darkroom to make a print.

In the late 19th century, a proficient draftsman could produce a more accurate picture more easily than a photographer.

Curioso 🍉 🇺🇦 (jgg)

@david_chisnall

Generative AIs are designed to answer prompts with the most likely answer they would get if you posted them on the Internet. They don't even try to be right (they don't even know what they are talking about); they try to be credible.

There are two ways to do so:

* Copying a real answer a real person gave to that question, and retouching it minimally so that it fits. Those answers are usually great, but nothing you can get easily using a standard Google search.

* Faking it, using wording that makes it look like it makes sense. They are really great at this. So much so that debunking them can require a lot of expertise and effort. So if you are not an expert, or you are having a bad day, you will take that bullshit as the perfect answer.

The most amazing thing about it is that not even the AI has the slightest idea which of the two paths it has taken, because the algorithm for both is exactly the same.
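
(A toy Python sketch of that last point, with invented probabilities: next-token sampling only ever sees a probability distribution, so the procedure is identical whether the likely continuation happens to be true or a confident fabrication.)

import random

# Invented next-token distribution for a prompt like "The capital of France is".
next_token_probs = {
    "Paris": 0.62,     # likely, and happens to be true
    "Lyon": 0.20,      # plausible-sounding, but wrong
    "the Moon": 0.03,  # implausible
    "<other>": 0.15,   # everything else, lumped together
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# The sampler has no notion of which continuation is correct; it only weighs
# plausibility, which is the sense in which the algorithm for both paths is the same.
print(random.choices(tokens, weights=weights, k=1)[0])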

Ken Butler

@jgg @david_chisnall that is to say, *by definition* bullshit.

sidereal

@david_chisnall I generally agree with this, but: AI doesn't change the user's ability.

If I tell an AI to make a picture of a cat, it would be wrong for me to say “I made a picture of a cat.” The AI made it. If I hired an artist to draw a picture of a cat, I also wouldn't say “I drew a picture of a cat.” In either case, I would be no more capable of making my own cat picture without AI or an artist. My ability is unchanged.

Said it before and I’ll say it again: AI = the emperor’s new clothes.

Nini

@david_chisnall I'll take your terrible over undifferentiated bad; everyone can produce bad LLM slop, but at least your terrible is yours and not an averaged-out bland expression of nothing. But yeah, AI is brilliant if you're unable to do the thing it's being applied to, hence why it's mostly geared towards creative applications: the folk behind it had no humanities studies, nor are they the ones being sold on it, as it's all mathematics and numbers people.

Impertinenzija

@david_chisnall I'm a supposedly decent German writer (or that's what ppl keep telling me) and I think ChatGPT output sucks.

Rich Felker

@david_chisnall Thus the management/C-suite being the most excited of all.

MylesRyden

@david_chisnall

I agree with this take on the AI hype in that, pretty obviously, what we have in general AI applications is a regression to the mean.

The real shame is that there are quite a few fields in science where machine learning is really extremely useful in finding patterns in data that might be too subtle for ordinary perception. The hype covers up (for people in the general public) this actual advance.
