Cory Doctorow

All those low-value, low-stakes applications are flooding the internet with botshit. After all, the one thing AI is unarguably *very* good at is producing bullshit at scale. As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated "content" in any internet core sample is dwindling to homeopathic levels:

pluralistic.net/2024/03/14/inh

8/

Cory Doctorow

This means that adding another order of magnitude more training data to AI won't just add massive computational expense - the data will be many orders of magnitude more expensive to acquire, even without factoring in the additional liability arising from new legal theories about scraping:

pluralistic.net/2023/09/17/how

9/

Cory Doctorow

That leaves us with "humans in the loop" - the idea that an AI's business model is selling software to businesses that will pair it with human operators who will closely scrutinize the code's guesses. There's a version of this that sounds plausible - the one in which the human operator is in charge, and the AI acts as an eternally vigilant "sanity check" on the human's activities.

10/

Cory Doctorow

For example, my car has a system that notices when I activate my blinker while there's another car in my blind-spot. I'm pretty consistent about checking my blind spot, but I'm also a fallible human and there've been a couple times where the alert saved me from making a potentially dangerous maneuver. As disciplined as I am, I'm also sometimes forgetful about turning off lights, or waking up in time for work, or remembering someone's phone number (or birthday).

11/

Cory Doctorow replied to Cory

I like having an automated system that does the robotically perfect trick of never forgetting something important.

There's a name for this in automation circles: a "centaur." I'm the human head, and I've fused with a powerful robot body that supports me, doing things that humans are innately bad at.

12/

Cory Doctorow replied to Cory

That's the good kind of automation, and we all benefit from it. But it only takes a small twist to turn this good automation into a *nightmare*. I'm speaking here of the *reverse-centaur*: automation in which the computer is in charge, bossing a human around so it can get its job done.

13/

Cory Doctorow replied to Cory

Think of Amazon warehouse workers, who wear haptic bracelets and are continuously observed by AI cameras as autonomous shelves shuttle in front of them and demand that they pick and pack items at a pace that destroys their bodies and drives them mad:

pluralistic.net/2022/04/17/rev

Automation centaurs are great: they relieve humans of drudgework and let them focus on the creative and satisfying parts of their jobs.

14/

Cory Doctorow replied to Cory

That's how AI-assisted coding is pitched: rather than looking up tricky syntax and other tedious programming tasks, an AI "co-pilot" is billed as freeing up its human "pilot" to focus on the creative puzzle-solving that makes coding so satisfying.

15/

Cory Doctorow replied to Cory

But a hallucinating AI is a *terrible* co-pilot. It's just good enough to get the job done much of the time, but it also sneakily inserts booby-traps that are statistically *guaranteed* to look as plausible as the *good* code (that's what a next-word-guessing program does: guesses the statistically most likely word).

16/

Cory Doctorow replied to Cory

This turns AI-"assisted" coders into *reverse* centaurs. The AI can churn out code at superhuman speed, and you, the human in the loop, must maintain perfect vigilance and attention as you review that code, spotting the cleverly disguised hooks for malicious code that the AI can't be prevented from inserting into its code. As "Lena" writes, "code review [is] difficult relative to writing new code":

twitter.com/qntm/status/177377

17/

Cory Doctorow replied to Cory

Why is that? "Passively reading someone else's code just doesn't engage my brain in the same way. It's harder to do properly":

twitter.com/qntm/status/177378

There's a name for this phenomenon: "automation blindness." Humans are just not equipped for eternal vigilance. We get good at spotting patterns that occur frequently - so good that we miss the anomalies.

18/

Cory Doctorow replied to Cory

That's why TSA agents are so good at spotting harmless shampoo bottles on X-rays, even as they miss nearly every gun and bomb that a red team smuggles through their checkpoints:

pluralistic.net/2023/08/23/aut

"Lena"'s thread points out that this is as true for AI-assisted driving as it is for AI-assisted coding: "self-driving cars replace the experience of driving with the experience of being a driving instructor":

twitter.com/qntm/status/177384

19/

Cory Doctorow replied to Cory

In other words, they turn you into a reverse-centaur. Whereas my blind-spot double-checking robot allows me to make maneuvers at human speed and points out the things I've missed, a "supervised" self-driving car makes maneuvers at a computer's frantic pace, and demands that its human supervisor tirelessly and perfectly assess each of those maneuvers.

20/

Cory Doctorow replied to Cory

No wonder Cruise's murderous "self-driving" taxis replaced each low-waged driver with 1.5 high-waged technical robot supervisors:

pluralistic.net/2024/01/11/rob

AI radiology programs are said to be able to spot cancerous masses that human radiologists miss.

21/

Cory Doctorow replied to Cory

A centaur-based AI-assisted radiology program would keep the same number of radiologists in the field, but they would get *less* done: every time they assessed an X-ray, the AI would give them a second opinion. If the human and the AI disagreed, the human would go back and re-assess the X-ray. We'd get better radiology, at a higher price (the price of the AI software, plus the additional hours the radiologist would work).

22/

Cory Doctorow replied to Cory

But back to making the AI bubble pay off: for AI to pay off, the human in the loop has to *reduce* the costs of the business buying an AI. No one who invests in an AI company believes that their returns will come from business customers agreeing to *increase* their costs. The AI can't do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway - that pitch is the most successful form of AI disinformation in the world.

23/

Cory Doctorow replied to Cory

An AI that "hallucinates" bad advice to fliers can't replace human customer service reps, but airlines are firing reps and replacing them with chatbots:

bbc.com/travel/article/2024022

An AI that "hallucinates" bad legal advice to New Yorkers can't replace city services, but Mayor Adams still tells New Yorkers to get their legal advice from his chatbots:

arstechnica.com/ai/2024/03/nyc

24/

Cory Doctorow replied to Cory

The only reason bosses want to buy robots is to fire humans and lower their costs. That's why "AI art" is such a pisser. There are plenty of harmless ways to automate art production with software - everything from a "healing brush" in Photoshop to deepfake tools that let a video-editor alter the eye-lines of all the extras in a scene to shift the focus.

25/

Cory Doctorow replied to Cory

A graphic novelist who models a room in The Sims and then moves the camera around to get traceable geometry for different angles is a centaur - they are genuinely offloading some finicky drudgework onto a robot that is perfectly attentive and vigilant.

But the pitch from "AI art" companies is "fire your graphic artists and replace them with botshit."

26/

Cory Doctorow replied to Cory

They're pitching a world where the robots get to do all the creative stuff (badly) and humans have to work at robotic pace, with robotic vigilance, in order to catch the mistakes that the robots make at superhuman speed.

Reverse centaurism is *brutal*. That's not news: Charlie Chaplin documented the problems of reverse centaurs nearly 100 years ago:

en.wikipedia.org/wiki/Modern_T

27/

Cory Doctorow replied to Cory

As ever, the problem with a gadget isn't what it does: it's who it does it *for* and who it does it *to*. There are plenty of benefits from being a centaur - lots of ways that automation can help workers. But the only path to AI profitability lies in *reverse* centaurs, automation that turns the human in the loop into the crumple-zone for a robot:

estsjournal.org/index.php/ests

28/

Cory Doctorow replied to Cory

I'm touring my new, nationally bestselling novel *The Bezzle*! Catch me in Boston with Randall "XKCD" Munroe (Apr 11), then Providence (Apr 12) and beyond!

pluralistic.net/2024/02/16/nar

eof/

Thirteenth Worrier replied to Cory

@pluralistic
"turns the human in the loop into the crumple-zone for a robot"

Adding this to my list of perfect turns of phrase that also fucking suck.

Martin Owens :inkscape: replied to Cory

@pluralistic

"Sorry to bother you" reverse-centaur

Binks replied to Cory

@pluralistic Has anyone ever established that that was an AI chatbot? Lots of articles seem to assume it was AI; but the decision referred to an out-of-date support script - I think it was just a normal, dumb “choose your own adventure” bot interaction.

flo

@pluralistic
I disagree in calling AI generally bullshit.

I agree, if you refer specifically to LLM and image creation.

I disagree, if it's about finding solutions in the fields of e.g. science or construction.

mirabilos

@fasnix @pluralistic it does not find solutions. It spits out what is likely to match the context, i.e. not only does it not think out of the box, it even stays relatively narrow to the centre of the box.

flo

@mirabilos
In a scientific context, "AI" was prompted to "find highly toxic chemicals" (not exact wording).

It found some 40,000+ combinations, many of them also highly explosive, if I remember correctly.
Scientists themselves would have never thought about those.
After some debate, whether to publish them, they were published.

theverge.com/2022/3/17/2298319

---

How far "in the centre of the box" is this example, in your opinion?

mirabilos replied to flo

@fasnix @pluralistic that is of course relative to the total size of the box and the context; I’m sure they would not use something like ChatGPT for that but something specialised.

Lea

@pluralistic
>"As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated "content" in any internet core sample is dwindling to homeopathic levels:"

This sentence is perfect.
