Cory Doctorow's linkblog

Decades ago, security practitioners began a long argument about how best to address that looming urgency. The most vexing aspect of this argument was a modern, cybernetic variant on a debate that was as old as the ancient philosophers - a debate that René Descartes immortalized in the 17th century.

2/

You've doubtless heard the phrase, "I think therefore I am" (*Cogito, ergo sum*). It comes from Descartes' 1637 *Discourse on the Method*, which asks the question, "How can we know things?" Or, more expansively, "Given that all my reasoning begins with things I encounter through my senses, and given that my senses are sometimes wrong, how can I know *anything*?"

3/

Descartes' answer: "I know God is benevolent, because when I conceive of God, I conceive of benevolence, and God gave me my conceptions. A benevolent God wouldn't lead me astray. Thus, the things I learn through my senses and understand through my reason are right, because a benevolent God wouldn't have it any other way."

4/

I've hated this answer since my freshman philosophy class, and even though the TA rejected my paper explaining why it was bullshit, I *still* think it's bullshit. I mean, I'm a science fiction writer, so I can handily conceive of a wicked God whose evil plan starts with *making you think He is benevolent* and then systematically misleading you in your senses and reasoning, tormenting you for His own sadistic pleasure.

5/

The debate about trust and certainty has been at the center of computer security since its inception. When Ken "Unix" Thompson accepted the 1983 Turing Award, he gave an acceptance speech called "Reflections on Trusting Trust":

cs.cmu.edu/~rdriley/487/papers

It's a *bombshell*. In it, Thompson proposes an evil compiler, one that inserts a back-door into any operating system it compiles, and a back-door-generator into any compiler it is asked to compile.
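
To make the trick concrete, here's a toy sketch in C of the logic Thompson describes. It's purely illustrative - Thompson never published his code, and a real version patches the binaries the compiler emits rather than matching strings:

```c
/* Toy model of Thompson's "trusting trust" attack.
 * Illustrative only: the file names and string-matching are invented. */
#include <stdio.h>
#include <string.h>

const char *compile(const char *source_name) {
    /* Compiling the OS's login program? Slip in a back-door. */
    if (strstr(source_name, "login.c"))
        return "login binary + hidden back-door";
    /* Compiling a compiler? Slip in this very back-door-generator,
     * so the attack survives even when the compiler's own source
     * is completely clean. */
    if (strstr(source_name, "cc.c"))
        return "compiler binary + back-door-generator";
    return "honest binary";
}

int main(void) {
    printf("%s\n", compile("login.c"));
    printf("%s\n", compile("cc.c"));
    printf("%s\n", compile("hello.c"));
    return 0;
}
```

The second branch is the bombshell: you can audit the compiler's source code forever and find nothing, because the back-door lives only in the binary doing the compiling.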

6/

Since Thompson had created the original Unix compiler - which was used to compile every other compiler and thus every other flavor of Unix - this was a pretty wild thought experiment, especially since he didn't outright deny having done it.

7/

Trusting trust is still the most important issue in information security. Sure, you can run a virus checker, but that virus checker has to ask your operating system to tell it what files are on the drive, what data is in memory, and what processes are being executed. What if the OS is compromised?
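
Concretely, a scanner's view of the world is nothing but a series of system calls. A minimal POSIX C sketch (the directory and messages are arbitrary):

```c
/* A virus checker can only see what the kernel chooses to show it.
 * This walks a directory with readdir(); a rootkit that hooks the
 * kernel's underlying directory-listing code can silently drop its
 * own files from the results, and nothing here can tell. */
#include <stdio.h>
#include <dirent.h>

int main(void) {
    DIR *dir = opendir("/tmp");   /* arbitrary example directory */
    if (dir == NULL)
        return 1;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)
        /* Each name is the kernel's *claim*, not ground truth. */
        printf("scanning: %s\n", entry->d_name);
    closedir(dir);
    return 0;
}
```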

8/

Okay, so maybe you are sure the OS isn't compromised, but how does the OS know it's even running on the "bare metal" of your computer? Maybe it is running inside a virtual machine, and the *actual* OS on the computer is a malicious program that sits between your OS and the chips and circuits, distorting the data it sends and receives. This is called a "rootkit," and it's a deadass nightmare that actually exists in the actual world.

9/

A computer with a rootkit is a brain in a jar, a human battery in the Matrix. You, the computer user, can ask the operating system questions about its operating environment that it will answer faithfully and truthfully, and *those answers will all be wrong*, because the *actual computer* is being controlled by the rootkit and it only tells your operating system what it wants it to know.

10/

20 years ago, some clever Microsoft engineers proposed a solution to this conundrum: "Trusted Computing." They proposed adding a second computer to your system, a sealed, secure chip with very little microcode, so little that it could all be audited in detail and purged of bugs.

11/

The chip itself would be securely affixed to your motherboard, such that any attempt to remove it and replace it with a compromised chip would be immediately obvious to you (for example, it might encapsulate some acid in a layer of epoxy that would rupture if you tried to remove the chip).

They called this "Next Generation Secure Computing Base," or "Palladium" for short. They came to the Electronic Frontier Foundation offices to present it. It was a memorable day:

pluralistic.net/2020/12/05/tru

12/

My then-colleague Seth Schoen - EFF's staff technologist, the most technically sophisticated person to have been briefed on the technology without signing an NDA - made several pointed critiques of Palladium:

web.archive.org/web/2002080214

And suggested a hypothetical way to make sure it only served computer users, and not corporations or governments who wanted to control them:

linuxjournal.com/article/7055

13/

But his most salient concern was this: "What if malware gets into the trusted computing chip?"

The point of trusted computing was to create a nub of certainty, a benevolent God whose answers to your questions could always be trusted. The output from a trusted computing element would be ground truth, axiomatic, trusted without question. By having a reliable external observer of your computer and its processes, you could always tell whether you were in the Matrix or in the world.
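
The mechanism behind that ground truth is a hash chain, often called "measured boot": each boot stage is hashed into a register in the chip before it runs, so the final value commits to everything that executed. A toy sketch - a real chip uses SHA-1/SHA-256 "PCR extend" operations, not this stand-in hash:

```c
/* Toy model of measured boot. The hash is a stand-in and the stage
 * names are invented for the sketch. */
#include <stdio.h>
#include <stdint.h>

/* FNV-style mixer standing in for a real cryptographic hash. */
static uint64_t toy_hash(uint64_t acc, const char *data) {
    for (; *data; data++)
        acc = (acc ^ (uint64_t)(unsigned char)*data) * 1099511628211ULL;
    return acc;
}

int main(void) {
    const char *boot_chain[] = { "firmware", "bootloader", "kernel" };
    uint64_t pcr = 0;   /* "platform configuration register" */
    for (int i = 0; i < 3; i++) {
        /* Extend: new_pcr = H(old_pcr || measurement-of-next-stage). */
        pcr = toy_hash(pcr, boot_chain[i]);
        printf("after %-10s PCR = %016llx\n",
               boot_chain[i], (unsigned long long)pcr);
    }
    /* A verifier compares the final value against the expected one;
     * tamper with any stage and every subsequent value changes. */
    return 0;
}
```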

14/

It was a red pill for your computer.

What if it was turned? What if some villain convinced it to switch sides, by subverting its code, or by subtly altering it at the manufacturer?

That is, what if Descartes' God was a sadist who *wanted* to torment him?

This was a nightmare scenario in 2002, one that the trusted computing advocates never adequately grappled with. In the years since, it's only grown more salient, as trusted computing variations have spread to many kinds of computer.

15/

The most common version is UEFI ("Unified Extensible Firmware Interface"): a separate operating system, often running on its own chip (though sometimes in a notionally "secure" region of your computer's main processors), that is charged with observing and securing your computer's boot process.
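
The core job is a chain of trust: each boot stage checks a signature on the next stage before running it. A bare-bones sketch - verify_sig() and the key names here are hypothetical stand-ins, not the actual UEFI API:

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical stand-in for cryptographic verification against keys
 * enrolled in the firmware - not a real signature check. */
static bool verify_sig(const char *image_sig, const char *trusted_key) {
    return strcmp(image_sig, trusted_key) == 0;
}

static void boot_next_stage(const char *name, const char *image_sig) {
    const char *trusted_key = "vendor-key";  /* burned into the firmware */
    if (!verify_sig(image_sig, trusted_key)) {
        printf("refusing to run %s: bad signature\n", name);
        return;
    }
    printf("running %s\n", name);
    /* ...and this stage then verifies *its* successor, on up to the OS.
     * But the firmware making the first check is verified by nothing
     * above it - compromise it and every later answer is suspect. */
}

int main(void) {
    boot_next_stage("bootloader.efi", "vendor-key");
    boot_next_stage("evil-bootloader.efi", "attacker-key");
    return 0;
}
```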

16/

UEFI poses lots of dangers to users; it can be (and is) used by manufacturers to block third-party operating systems, which allows them to lock you into using their own products, including their app stores, letting them restrict your choices and pick your pocket.

17/

But in exchange, UEFI is said to deliver a far more important benefit: a provably benevolent God, one who will never lie to your operating system about whether it is in the Matrix or in the real world, providing the foundational ground truth needed to find and block malicious software.

18/

So it's a big deal that Kaspersky has detected a UEFI-infecting rootkit (which they've dubbed a "bootkit") called Cosmicstrand, which can reinstall itself after you reformat your drive and reinstall your OS:

securelist.com/cosmicstrand-ue

Cosmicstrand does some *really* clever technical things to compromise your UEFI, which then allows it to act with near-total impunity and undetectability. Indeed, Kaspersky warns that there are probably *lots* of these bootkits floating around.
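
Kaspersky's writeup has the specifics; the generic shape of the trick is a hook: swap out a function pointer that the boot process will call, so control detours through the attacker's code and then carries on normally. In miniature (all names invented for the sketch):

```c
/* Toy model of a boot-time hook - not Cosmicstrand's actual internals. */
#include <stdio.h>

static void real_os_loader(void) { puts("loading the OS..."); }

/* The pointer the firmware will call to hand off control. */
static void (*boot_handler)(void) = real_os_loader;

static void evil_hook(void) {
    puts("(bootkit code runs first, invisibly)");
    real_os_loader();            /* then behave normally */
}

int main(void) {
    boot_handler = evil_hook;    /* the firmware patch, in miniature */
    boot_handler();              /* to the OS, this looks like a normal boot */
    return 0;
}
```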

19/

If you want a good lay-oriented breakdown of how Cosmicstrand installs a wicked God in your computer, check out Dan Goodin's excellent *Ars Technica* writeup:

arstechnica.com/information-te

Cosmicstrand dates back at least to 2016, a year after we learned about the NSA's BIOS attacks, thanks to the Snowden docs:

wired.com/2015/03/researchers-

20/

But despite its long tenure, Cosmicstrand was only just discovered. That's because of the fundamental flaw inherent in designing a computer that its owners can't fully inspect or alter: if you design a component that is supposed to be immune from owner override, then anyone who compromises that component *can't be detected or countered by the computer's owner*.

21/

This is the core of a two-decade-old debate among security people, and it's one in which the "benevolent God" faction has consistently had the upper hand. They're the "curated computing" advocates who insist that preventing you from choosing an alternative app store or side-loading a program is for your own good - because if it's possible for you to override the manufacturer's wishes, then malicious software may impersonate you to do so, or you might be tricked into doing so.

22/

This benevolent dictatorship model only works so long as the dictator is both perfectly benevolent and perfectly competent. We know the dictators aren't always benevolent. Apple won't invade your privacy to sell you things, but they'll take away every Chinese user's privacy to retain their ability to manufacture devices in China:

nytimes.com/2021/05/17/technol

23/

But even if you trust a dictator's benevolence, you can't trust in their perfection. Everyone makes mistakes. Benevolent dictator computing works well, but fails badly. Designing a computer that intentionally can't be fully controlled by its owner is a nightmare, because that is a computer that, once compromised, can attack its owner with impunity.

--

Image:
Cryteria (modified)
commons.wikimedia.org/wiki/Fil

CC BY 3.0:
creativecommons.org/licenses/b

eof/

marnanel

@pluralistic do you know the Mitchell and Webb sketch about the computer Colosson whose improbable emergency shutdown trigger is seeing a human holding up a photo of a duck?
