Greg Egan

The fallacies about bad software:

• If software would do harm, it won’t be deployed.
• If it *does* get deployed, the harm won’t be significant.
• If the harm *is* significant, it will quickly be identified, acknowledged and rectified.

Australia has gone through its own horrendous episode of persecution and suicides with the Robodebt scandal:

en.wikipedia.org/wiki/Robodebt

so I’m looking forward to watching the acclaimed dramatisation of the UK’s Horizon scandal:

en.wikipedia.org/wiki/British_

“Mr Bates vs The Post Office”, which screens on Channel 7 / 7plus in Australia starting next Wednesday.

Needless to say, everyone rushing to insert LLMs into every conceivable nook and cranny of the commercial, scientific, administrative and judicial systems should be strapped to their chairs with their eyelids pinned open and made to watch this fifteen times.

11 comments
DELETED

@gregeganSF

The UK government woke up to the Horizon scandal, which had been going on for more than a decade, thanks to watching this.
That's two months after awarding a half-billion quid contract to Palantir for the NHS data.
They still want to put "AI" into everything.
*Slow clap*

Lien Rag

@gregeganSF

Software doesn't do harm.
It's managers using software they don't understand and making decisions without verification who are the cause of harm.
Please, with your audience and your stature, pay attention to the difference and make it clear when you address your public.

In a sane world with the same defective program, those post office people would have been flagged as suspects by the software, and then cleared by the resulting inquiry.

Daniel Darabos

@lienrag @gregeganSF I'm a software engineer myself, so I'm generally quick to blame management and processes instead of engineers. But we can't push all responsibility to non-engineers. Ultimately they can only ask the engineer, "is there a bug?" The engineer checks and says "I don't think so."

Here 3,000+ people were severely harmed due to software issues. It would feel just to see proportional punishment visited on the creators of the software.

Daniel Darabos

@lienrag @gregeganSF I would be terrified to work on software where a bug could land me in prison. But I *should* be terrified! This is a system where a bug has landed 230 people in prison. If the system carries such a high risk for its users, it has to carry a real risk for the creators too.

If I worked on a system where a bug could land me in prison, I would be extremely careful and implement a crazy amount of safeguards. Which sounds like what we want of such a system, right?

Lien Rag

@darabos

Again, the bug landed nobody in prison.
Authoritarian and incompetent managers sent people to prison.
All the bug did was flag the people. It's the managers who chose to send flagged people to prison without due process.

This distinction is fundamental, and failing to make it prevents us from addressing the real problem.

@gregeganSF

Lien Rag

@darabos

I'm not asking for the creators to get off scot-free, but to keep branding into the public consciousness the fundamental principle that "a computer cannot make a management decision".

If the creators had shoddy practices, those should be denounced, but that has nothing to do with sending people to jail.

If a software bug makes an automated machine gun kill random people, the ones going to jail should be the ones who deployed automated machine guns, not the developers.

@gregeganSF

Paul Cantrell

@gregeganSF @dgoldsmith I only regret that I have but one boost to give to this post

Qybat

@gregeganSF The post office scandal isn't really about the software failure, but the institutional failure of the post office legal department. They took the most adversarial possible interpretation of an adversarial legal system - their job was to get that guilty conviction at any cost, including deliberately quashing any evidence or investigation that might cast doubt on their prosecution. The software flaws went undiscovered because the PO was actively blocking any possibility of audit.

Greg Egan

@Qybat That's my point. Bad software doesn't cause harm by magic; it causes harm because the institutions that actually deploy it can't be relied on to do what they should to detect and fix the problem.

In turn, though, it's also widespread naivety and/or dishonesty about how badly software can actually be wrong that enables these institutions to pretend that it can't possibly fail that badly.

Kent Pitman

@gregeganSF

Good points.

The issue isn't "software doing harm", it's "there being downstream consequences of software failing."

To see the fallacy in simplest form, consider a program whose entire job was to open or close a switch. Would that be regulated as mission-critical/life-threatening? And yet if what the switch opened was medication, a dangerous chemical, or a valuable food or drug necessary to life, an error could be catastrophic. One can only know what things matter in context. They are not a property of the device. So all kinds of devices that are hijackable because no one imagined they needed to be hardened can end up used in places that make us all vulnerable.

Even ignoring hacking, a cell phone that fails unexpectedly when you're hurt or lost in a forest can be a serious problem.
